WorldWideScience

Sample records for automated content-based image

  1. CONTENT-BASED AUTOFOCUSING IN AUTOMATED MICROSCOPY

    Directory of Open Access Journals (Sweden)

    Peter Hamm

    2010-11-01

    Full Text Available Autofocusing is the fundamental step when it comes to image acquisition and analysis with automated microscopy devices. Despite all efforts that have been put into developing a reliable autofocus system, recent methods still lack robustness towards different microscope modes and distracting artefacts. This paper presents a novel automated focusing approach that is generally applicable to different microscope modes (bright-field, phase contrast, Differential Interference Contrast (DIC) and fluorescence microscopy). The main innovation is a content-based focus search that makes use of a priori knowledge about the observed objects by employing local object features and boosted learning. Hence, this method turns away from common autofocus approaches that rely solely on whole-image frequency measurements to obtain the focus plane. Thus, it is possible to exclude artefacts from the focus calculation as well as to locate the in-focus layer of specific microscopic objects.

  2. Multimedia input in automated image annotation and content-based retrieval

    Science.gov (United States)

    Srihari, Rohini K.

    1995-03-01

    This research explores the interaction of linguistic and photographic information in an integrated text/image database. By utilizing linguistic descriptions of a picture (speech and text input) coordinated with pointing references to the picture, we extract information useful in two aspects: image interpretation and image retrieval. In the image interpretation phase, objects and regions mentioned in the text are identified; the annotated image is stored in a database for future use. We incorporate techniques from our previous research on photo understanding using accompanying text: a system, PICTION, which identifies human faces in a newspaper photograph based on the caption. In the image retrieval phase, images matching natural language queries are presented to a user in a ranked order. This phase combines the output of (1) the image interpretation/annotation phase, (2) statistical text retrieval methods, and (3) image retrieval methods (e.g., color indexing). The system allows both point and click querying on a given image as well as intelligent querying across the entire text/image database.

  3. CONTENT BASED BATIK IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    A. Haris Rangkuti

    2014-01-01

    Full Text Available Content Based Batik Image Retrieval (CBBIR) is an area of research that focuses on image processing based on the characteristic motifs of batik. A batik image basically has a unique motif compared with other images; its uniqueness lies in its texture and shape characteristics, which are distinct from those of other images. Processing a batik image starts with a preprocessing stage in which colour is removed by converting the image to grayscale. Feature extraction then captures the characteristic motif of each kind of batik using edge detection. After the characteristic motifs are obtained visually, four texture characteristic functions are computed: mean, energy, entropy and standard deviation; further characteristic functions can be added as needed. The results of these characteristic functions are made more specific using the Daubechies type 2 wavelet transform and invariant moments, yielding an index value for every type of batik. Because the same motif can occur at different sizes, every motif is divided into three sizes: small, medium and large. The retrieval performance of this method for batik image similarity is about 90-92%.
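
    A minimal Python sketch of the four texture statistics named above (mean, energy, entropy, standard deviation) and a one-level Daubechies-2 decomposition via PyWavelets; the exact feature definitions used by the paper are not given in the abstract, so standard textbook formulas are assumed here.

      import numpy as np
      import pywt  # PyWavelets, for the Daubechies type 2 wavelet mentioned in the abstract

      def texture_stats(gray):
          """Mean, energy, entropy and standard deviation of a grayscale image."""
          gray = np.asarray(gray, dtype=np.float64)
          hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
          p = hist[hist > 0]
          return {
              "mean": float(gray.mean()),
              "energy": float(np.sum(hist ** 2)),
              "entropy": float(-np.sum(p * np.log2(p))),
              "std": float(gray.std()),
          }

      def db2_subband_stats(gray):
          """Per-sub-band statistics after a single Daubechies-2 decomposition level."""
          cA, (cH, cV, cD) = pywt.dwt2(np.asarray(gray, dtype=np.float64), "db2")
          return {name: (float(band.mean()), float(band.std()))
                  for name, band in [("approx", cA), ("horiz", cH), ("vert", cV), ("diag", cD)]}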

  4. Metadata for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Adrian Sterca

    2010-12-01

    Full Text Available This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.

  5. Content-based vessel image retrieval

    Science.gov (United States)

    Mukherjee, Satabdi; Cohen, Samuel; Gertner, Izidor

    2016-05-01

    This paper describes an approach to vessel classification from satellite images using content-based image retrieval methodology. Content-based image retrieval is an important problem in both medical imaging and surveillance applications. In many cases the archived reference database is not fully structured, which makes content-based image retrieval a challenging problem. In addition, in surveillance applications the query image may be affected by weather and/or geometric distortions. Our approach to content-based vessel image retrieval consists of two phases. First, we create a structured reference database; then, for each new query image of a vessel, we find the closest cluster of images in the structured reference database, thus identifying and classifying the vessel. Finally, we update the closest cluster with the new query image.

  6. Definition of an automated Content-Based Image Retrieval (CBIR) system for the comparison of dermoscopic images of pigmented skin lesions

    Directory of Open Access Journals (Sweden)

    Manganaro Mario

    2009-08-01

    Full Text Available Abstract Background New generations of image-based diagnostic machines are based on digital technologies for data acquisition; consequently, the diffusion of digital archiving systems for the preservation and cataloguing of diagnostic exams is rapidly increasing. To overcome the limits of current state-of-the-art text-based access methods, we have developed a novel content-based search engine for dermoscopic images to support clinical decision making. Methods To this end, we enrolled, from 2004 to 2008, 3415 caucasian patients and collected 24804 dermoscopic images corresponding to 20491 pigmented lesions with known pathology. The images were acquired with a well defined dermoscopy system and stored to disk in 24-bit per pixel TIFF format using interactive software developed in C++, in order to create a digital archive. Results The analysis of the images consists of extracting low-level representative features, which permits the retrieval of images similar in colour and texture from the archive, using a hierarchical multi-scale computation of the Bhattacharyya distance between the representation of every database image and the representation of the user-submitted query. Conclusion The system is able to locate, retrieve and display dermoscopic images similar in appearance to one that is given as a query, using a set of primitive features, not related to any specific diagnostic method, that visually characterize the image. A similar search engine could find use in all sectors of diagnostic imaging, or with digital signals, that are supported by the information available in medical archives.
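
    A small sketch of the matching step described in the Results: ranking archive images by the Bhattacharyya distance between normalized colour/texture histograms (the multi-scale hierarchy and the actual low-level features are omitted; names here are illustrative).

      import numpy as np

      def bhattacharyya_distance(h1, h2, eps=1e-12):
          """Bhattacharyya distance between two normalized feature histograms."""
          h1 = np.asarray(h1, dtype=np.float64)
          h2 = np.asarray(h2, dtype=np.float64)
          h1 = h1 / (h1.sum() + eps)
          h2 = h2 / (h2.sum() + eps)
          bc = np.sum(np.sqrt(h1 * h2))          # Bhattacharyya coefficient
          return float(-np.log(bc + eps))

      def rank_by_similarity(query_hist, database_hists):
          """Indices of database images sorted from most to least similar to the query."""
          d = [bhattacharyya_distance(query_hist, h) for h in database_hists]
          return np.argsort(d)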

  7. Material Recognition for Content Based Image Retrieval

    NARCIS (Netherlands)

    Geusebroek, J.M.

    2002-01-01

    One of the open problems in content-based Image Retrieval is the recognition of material present in an image. Knowledge about the set of materials present gives important semantic information about the scene under consideration. For example, detecting sand, sky, and water certainly classifies the im

  8. Content Based Image Retrieval through Clustering

    Directory of Open Access Journals (Sweden)

    Sandhya

    2012-06-01

    Full Text Available Content-based image retrieval (CBIR) is a technique used for extracting similar images from an image database. A CBIR system is required to access images effectively and efficiently using the information contained in image databases. Here, K-means is used for image retrieval. The K-means method can be applied only in those cases where the mean of a cluster is defined, and it is not suitable for discovering clusters with non-convex shapes or clusters of very different size. In this paper, CBIR, clustering and K-means are defined. With the help of these, the data consisting of images can be grouped and retrieved.
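
    A brief sketch, using scikit-learn, of how K-means can group image feature vectors so that a query is matched only against its own cluster; the feature vectors and cluster count below are placeholders, not values from the paper.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      features = rng.random((500, 64))        # one feature vector (e.g. a histogram) per database image

      kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)

      def retrieve(query_vec, top_k=10):
          """Search only the query's cluster, then rank its members by Euclidean distance."""
          label = kmeans.predict(query_vec.reshape(1, -1))[0]
          members = np.where(kmeans.labels_ == label)[0]
          dists = np.linalg.norm(features[members] - query_vec, axis=1)
          return members[np.argsort(dists)[:top_k]]

      print(retrieve(features[0]))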

  9. Content based Image Retrieval from Forensic Image Databases

    Directory of Open Access Journals (Sweden)

    Swati A. Gulhane

    2015-03-01

    Full Text Available Due to the proliferation of video and image data in digital form, content-based image retrieval has become a prominent research topic. In forensic sciences, digital data such as criminal images, fingerprints and scene images are widely used. The organization of such large image collections therefore becomes a big issue, namely how to find an image of interest quickly. There is a great need for developing an efficient technique for finding images. In order to find an image, the image has to be represented with certain features. Color, texture and shape are three important visual features of an image, and searching for images using these features has attracted much attention. There are many content-based image retrieval techniques in the literature. This paper gives an overview of different existing methods used for content-based image retrieval and also suggests an efficient image retrieval method for a digital image database of criminal photos, using dynamic dominant color, texture and shape features of an image, which gives effective retrieval results.

  10. PERFORMANCE EVALUATION OF CONTENT BASED IMAGE RETRIEVAL FOR MEDICAL IMAGES

    Directory of Open Access Journals (Sweden)

    SASI KUMAR. M

    2013-04-01

    Full Text Available Content-based image retrieval (CBIR) technology benefits not only the management of large image collections, but also clinical care, biomedical research, and education. Digital images are found in X-ray, MRI and CT exams, which are used for diagnosing and planning treatment schedules. Visual information management is thus challenging because the quantity of available data is huge. Currently, the utilization of available medical databases is limited by image retrieval issues. Retrieval of archived digital medical images is always challenging and is being researched more as images are of great importance in patient diagnosis, therapy, medical reference, and medical training. In this paper, an image matching scheme using the Discrete Sine Transform for relevant feature extraction is presented. The efficiency of different algorithms for classifying the features to retrieve medical images is investigated.
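
    An illustrative sketch of a Discrete Sine Transform signature for image matching, using SciPy; keeping an 8x8 block of low-frequency coefficients is an assumption for the example, not the paper's actual configuration.

      import numpy as np
      from scipy.fft import dstn

      def dst_signature(gray, keep=8):
          """Low-frequency DST coefficients as a compact image signature."""
          gray = np.asarray(gray, dtype=np.float64)
          coeffs = dstn(gray, type=2, norm="ortho")
          block = coeffs[:keep, :keep]                       # keep x keep low-frequency corner
          return (block / (np.linalg.norm(block) + 1e-12)).ravel()

      def rank_matches(query_sig, db_sigs):
          """Euclidean-distance ranking of database signatures against the query."""
          d = np.linalg.norm(np.asarray(db_sigs) - query_sig, axis=1)
          return np.argsort(d)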

  11. Multi Feature Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Rajshree S. Dubey,

    2010-09-01

    Full Text Available A number of methods exist for image mining. This paper combines the features of four techniques, i.e. the color histogram, color moments, texture, and the Edge Histogram Descriptor. The nature of an image is basically based on human perception of the image, while the machine interpretation of the image is based on the contours and surfaces of the image. The study of image mining is a very challenging task because it involves pattern recognition, which is an important tool for machine vision systems. A combination of four feature extraction methods is used, namely color histogram, color moments, texture, and the Edge Histogram Descriptor, and there is provision to add new features in future for better retrieval efficiency. In this paper the four techniques are combined: the Euclidean distances for every feature are calculated, added and averaged. The user interface is provided in MATLAB. The image properties analyzed in this work are obtained using computer vision and image processing algorithms: for color, the histograms of the images are computed; for texture, co-occurrence-matrix-based entropy, energy, etc. are calculated; and for edge density the Edge Histogram Descriptor (EHD) is found. For retrieval, the averages over the four techniques are taken and the resulting images are retrieved.
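
    A minimal sketch of the distance-averaging step described above: each feature (histogram, colour moments, texture, EHD) contributes one Euclidean distance, and the normalized distances are averaged into a single score. The squashing used to put the distances on a comparable scale is an assumption of this example.

      import numpy as np

      def averaged_distance(query_feats, db_feats):
          """query_feats / db_feats: dicts mapping a feature name to its vector."""
          dists = []
          for name, q in query_feats.items():
              d = np.linalg.norm(np.asarray(db_feats[name], dtype=np.float64) -
                                 np.asarray(q, dtype=np.float64))
              dists.append(d / (1.0 + d))      # squash each distance into [0, 1)
          return float(np.mean(dists))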

  12. Rotational invariant similarity measurement for content-based image indexing

    Science.gov (United States)

    Ro, Yong M.; Yoo, Kiwon

    2000-04-01

    We propose a similarity matching technique for content-based image retrieval that is invariant to image rotation. Since image content for indexing and retrieval may be arbitrarily extracted from a still image or a key frame of video, rotation invariance of the image feature description is important for general application of content-based image indexing and retrieval. In this paper, we propose a rotation invariant similarity measurement incorporating texture features based on the HVS (human visual system). To reduce computational complexity, we employ hierarchical similarity distance searching. To verify the method, experiments with the MPEG-7 data set are performed.

  13. An Efficient Content Based Image Retrieval Scheme

    Directory of Open Access Journals (Sweden)

    Zukuan WEI

    2013-11-01

    Full Text Available Due to recent improvements in digital photography and storage capacity, storing large amounts of images has become possible, and efficient means to retrieve images matching a user's query are consequently needed. In this paper, we propose a framework based on a bipartite graph model (BGM) for semantic image retrieval. BGM is a scalable data structure that aids semantic indexing in an efficient manner and can also be incrementally updated. Firstly, all the images are segmented into several regions with an image segmentation algorithm; pre-trained SVMs are used to annotate each region, and the final label is obtained by merging all the region labels. We then use the set of images and the set of region labels to build a bipartite graph. When a query is given, a query node, initially containing a fixed number of labels, is created and attached to the bipartite graph. The node then distributes the labels based on the edge weight between the node and its neighbors. Image nodes receiving the most labels represent the most relevant images. Experimental results demonstrate that our proposed technique is promising.
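
    A rough, simplified stand-in for the label-distribution step: database images are scored by how many of the query's labels they carry, with each label weighted here by its inverse frequency as a crude proxy for the edge weights of the bipartite graph (the actual BGM weighting is not specified in the abstract).

      import numpy as np

      def rank_images_by_labels(query_labels, image_labels):
          """image_labels: one list of region labels per database image."""
          freq = {}
          for labels in image_labels:
              for lab in labels:
                  freq[lab] = freq.get(lab, 0) + 1
          scores = [sum(1.0 / freq[lab] for lab in labels if lab in query_labels)
                    for labels in image_labels]
          return np.argsort(scores)[::-1]        # most relevant images first

      print(rank_images_by_labels({"sky", "sea"},
                                  [["sky", "grass"], ["sea", "sky"], ["building"]]))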

  14. Information Theoretic Similarity Measures for Content Based Image Retrieval.

    Science.gov (United States)

    Zachary, John; Iyengar, S. S.

    2001-01-01

    Content-based image retrieval is based on the idea of extracting visual features from images and using them to index images in a database. Proposes similarity measures and an indexing algorithm based on information theory that permits an image to be represented as a single number. When used in conjunction with vectors, this method displays…

  15. Content Based Image Retrieval by Multi Features using Image Blocks

    Directory of Open Access Journals (Sweden)

    Arpita Mathur

    2013-12-01

    Full Text Available Content based image retrieval (CBIR) is an effective method of retrieving images from large image resources. CBIR is a technique in which images are indexed by extracting their low level features such as color, texture, shape, and spatial location. Effective and efficient feature extraction mechanisms are required to improve existing CBIR performance. This paper presents a novel approach to a CBIR system in which higher retrieval efficiency is achieved by combining the information of the image features color, shape and texture. The color feature is extracted using color histograms for image blocks, the Canny edge detection algorithm is used for the shape feature, and HSB extraction in blocks is used for texture feature extraction. The feature set of the query image is compared with the feature set of each image in the database. The experiments show that the fusion of multiple features gives better retrieval results than another approach used by Rao et al. This paper presents a comparative study of the performance of the two different approaches to CBIR in which the image features color, shape and texture are used.

  16. Content-based retrieval based on binary vectors for 2-D medical images

    Institute of Scientific and Technical Information of China (English)

    龚鹏; 邹亚东; 洪海

    2003-01-01

    In medical research and clinical diagnosis, automated or computer-assisted classification and retrieval methods are highly desirable to offset the high cost of manual classification and manipulation by medical experts. To facilitate the decision-making in the health-care and the related areas, in this paper, a two-step content-based medical image retrieval algorithm is proposed. Firstly, in the preprocessing step, the image segmentation is performed to distinguish image objects, and on the basis of the ...

  17. AN INTELLIGENT CONTENT BASED IMAGE RETRIEVAL SYSTEM FOR MAMMOGRAM IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. VAIDEHI

    2015-11-01

    Full Text Available An automated segmentation method is proposed which dynamically selects the parenchymal region of interest (ROI) based on the patient's breast size, and from which statistical features are derived. An SVM classifier is used to model the derived features to classify the breast tissue as dense, glandular or fatty. Then K-NN with different distance metrics, namely city-block, Euclidean and Chebyshev, is used to retrieve the first k images most similar to the given query image. The proposed method was tested with the MIAS database and achieves an average precision of 86.15%. The results reveal that the proposed method could be employed for effective content based mammogram retrieval.
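
    A compact sketch of the two-stage pipeline described above: an SVM models the tissue classes and a k-NN search with a selectable distance metric (city-block, Euclidean or Chebyshev) returns the k most similar images. The feature vectors and labels below are random placeholders, not MIAS data.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.neighbors import NearestNeighbors

      rng = np.random.default_rng(1)
      X = rng.random((322, 10))                 # stand-in ROI feature vectors, one per image
      y = rng.integers(0, 3, size=322)          # stand-in labels: 0=dense, 1=glandular, 2=fatty

      svm = SVC(kernel="rbf").fit(X, y)         # tissue-type classifier

      def retrieve(query, k=5, metric="cityblock"):
          """Return (predicted tissue class, indices of the k nearest images)."""
          nn = NearestNeighbors(n_neighbors=k, metric=metric).fit(X)
          _, idx = nn.kneighbors(query.reshape(1, -1))
          return svm.predict(query.reshape(1, -1))[0], idx[0]

      tissue_class, similar = retrieve(X[0], metric="chebyshev")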

  18. Content Based Image Retrieval : Classification Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Shereena V.B

    2014-10-01

    Full Text Available In a content-based image retrieval (CBIR) system, the main issue is to extract the image features that effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of the retrieval performance of image features. This paper presents a review of fundamental aspects of content based image retrieval including feature extraction of color and texture features. Commonly used color features including color moments, color histograms and color correlograms, and the Gabor texture feature, are compared. The paper reviews the increase in efficiency of image retrieval when the color and texture features are combined. The similarity measures based on which matches are made and images are retrieved are also discussed. For effective indexing and fast searching of images based on visual features, neural network based pattern learning can be used to achieve effective classification.

  20. Human-Centered Content-Based Image Retrieval

    NARCIS (Netherlands)

    Broek, van den Egon L.

    2005-01-01

    Retrieval of images that lack (suitable) annotations cannot be achieved through (traditional) Information Retrieval (IR) techniques. Access to such collections can be achieved through the application of computer vision techniques to the IR problem, which is baptized Content-Based Image Retrieval.

  1. Active index for content-based medical image retrieval.

    Science.gov (United States)

    Chang, S K

    1996-01-01

    This paper introduces the active index for content-based medical image retrieval. The dynamic nature of the active index is its most important characteristic. With an active index, we can effectively and efficiently handle smart images that respond to accessing, probing and other actions. The main applications of the active index are to prefetch image and multimedia data, and to facilitate similarity retrieval. The experimental active index system is described.

  2. Content-based Image Retrieval by Spatial Similarity

    Directory of Open Access Journals (Sweden)

    Archana M. Kulkarn

    2002-07-01

    Full Text Available Similarity-based retrieval of images is an important task in image databases. Most user queries ask for database images that are spatially similar to a query image. In defence scenarios, one wants to know the number of armoured vehicles, such as battle tanks and portable missile-launching vehicles, moving towards a position, so that a counter strategy can be decided. Content-based spatial similarity retrieval can be used to locate the spatial relationships of various objects in a specific area from aerial photographs and to retrieve images similar to the query image from an image database. A content-based image retrieval system that efficiently and effectively retrieves information from a defence image database, along with the architecture for retrieving images by spatial similarity, is presented. A robust algorithm, SIMdef, for retrieval by spatial similarity is proposed that utilises both directional and topological relations for computing similarity between images, retrieves similar images and recognises images even after they undergo modelling transformations (translation, scale and rotation). A case study for some of the common objects used in defence applications, using the SIMdef algorithm, has been done.

  3. Towards Better Retrievals in Content-Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Kumar Vaibhava

    2014-04-01

    Full Text Available This paper presents a Content-Based Image Retrieval (CBIR) system called DEICBIR-2. The system retrieves images similar to a given query image by searching in the provided image database. Standard MPEG-7 image descriptors are used to find the relevant images which are similar to the given query image. Direct use of the MPEG-7 descriptors for creating the image database and retrieval on the basis of nearest neighbors does not yield accurate retrievals. To further improve the retrieval results, B-splines are used to ensure smooth and continuous edges of the images in the edge-based descriptors. Relevance feedback is also implemented with user intervention. These additional features improve the retrieval performance of DEICBIR-2 significantly. Computational performance on a set of query images is presented, and the performance of the proposed system is much superior to the performance of DEICBIR[9] on the same database and the same set of query images.

  4. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval method based on the Hadoop distributed framework is proposed. Firstly, a database of image features is built using the Speeded Up Robust Features algorithm and Locality-Sensitive Hashing; the search is then performed on the Hadoop platform in a specially designed parallel way. Considerable experimental results show that the method is able to retrieve images by content effectively on large-scale clusters and image sets.
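
    A minimal, self-contained sketch of locality-sensitive hashing by random hyperplane projections, the kind of indexing named above (the Hadoop parallelization and SURF extraction are out of scope here; the bucket layout and bit count are illustrative).

      import numpy as np

      class RandomProjectionLSH:
          """Hash feature vectors so that similar vectors tend to share a bucket."""

          def __init__(self, dim, n_bits=16, seed=0):
              self.planes = np.random.default_rng(seed).normal(size=(n_bits, dim))
              self.buckets = {}

          def _key(self, vec):
              bits = (self.planes @ np.asarray(vec, dtype=np.float64)) > 0
              return bits.tobytes()

          def add(self, idx, vec):
              self.buckets.setdefault(self._key(vec), []).append(idx)

          def candidates(self, vec):
              """Database indices whose features hash to the same bucket as the query."""
              return self.buckets.get(self._key(vec), [])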

  5. Content Based Image Retrieval Based on Color: A Survey

    Directory of Open Access Journals (Sweden)

    Mussarat Yasmin

    2015-11-01

    Full Text Available Information sharing, interpretation and meaningful expression have made very useful and extensive use of digital images in the past couple of decades. This extensive use not only evolved the digital communication world with ease and usability but also produced unwanted difficulties around the use of digital images. Because of their extensive usage it sometimes becomes harder to filter images based on their visual contents. To overcome these problems, Content Based Image Retrieval (CBIR) was introduced as one of the recent ways to find specific images in massive databases of digital images efficiently, in other words to continue the use of digital images in information sharing. In the past years, many CBIR systems have been proposed, developed and brought into usage as an outcome of the huge research done in the CBIR domain. Based on the contents of images, different approaches to CBIR have different implementations for searching images, resulting in different measures of performance and accuracy; some of them are in fact very effective approaches for fast and efficient content based image retrieval. This survey highlights the work done by researchers to develop image retrieval techniques based on the color of images. These techniques, along with their pros and cons as well as their applications in relevant fields, are discussed in the paper. Moreover, the techniques are also categorized on the basis of the common approach used.

  6. System refinement for content based satellite image retrieval

    Directory of Open Access Journals (Sweden)

    NourElDin Laban

    2012-06-01

    Full Text Available We are witnessing a large increase in satellite generated data, especially in the form of images. Intelligent processing of the huge amount of data received by dozens of earth observing satellites, with approaches oriented specifically to satellite images, therefore presents itself as a pressing need. Content based satellite image retrieval (CBSIR) approaches have so far mainly been driven by approaches dealing with traditional images. In this paper we introduce a novel approach that refines the image retrieval process using properties unique to satellite images. Our approach uses a Query by Polygon (QBP) paradigm for the content of interest instead of the more conventional rectangular query-by-image approach. First, we extract features from the satellite images using multiple tiling sizes. The system then uses these multilevel features within a multilevel retrieval system that refines the retrieval process. Our multilevel refinement approach has been experimentally validated against the conventional one, yielding enhanced precision and recall rates.

  7. Graph Based Segmentation in Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    P. S. Suhasini

    2008-01-01

    Full Text Available Problem statement: Traditional image retrieval systems are content based image retrieval systems which rely on low-level features for indexing and retrieval of images. CBIR systems fail to meet user expectations because of the gap between the low level features used by such systems and the high level perception of images by humans. To address this, graph based segmentation is used as a preprocessing step in Content Based Image Retrieval (CBIR). Approach: Graph based segmentation has the ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions. After segmentation, features are extracted from the segmented images: texture features using the wavelet transform and color features using the histogram model. The segmented query image features are then compared with the features of the segmented database images. The similarity measure used for texture features is the Euclidean distance, and for color features the quadratic distance approach. Results: The experimental results demonstrate about 12% improvement in performance for the color feature with segmentation. Conclusions/Recommendations: Along with this improvement, neural network learning can be embedded in this system to reduce the semantic gap.
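
    A short sketch of the quadratic (cross-bin) distance mentioned for the colour features, with a simple Gaussian bin-similarity matrix; the actual similarity matrix used in the paper is not specified, so this one is an assumption.

      import numpy as np

      def bin_similarity(n_bins, sigma=1.0):
          """Similarity matrix A whose entries fall off with the distance between bin centres."""
          idx = np.arange(n_bins)
          return np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2.0 * sigma ** 2))

      def quadratic_form_distance(h1, h2, A):
          """Cross-bin distance d = sqrt((h1 - h2)^T A (h1 - h2))."""
          diff = np.asarray(h1, dtype=np.float64) - np.asarray(h2, dtype=np.float64)
          return float(np.sqrt(max(diff @ A @ diff, 0.0)))

      A = bin_similarity(64)
      # d = quadratic_form_distance(query_hist, db_hist, A)   # query_hist, db_hist: 64-bin colour histograms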

  8. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available Consumer behavior has been observed to be largely influenced by image data with the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has gradually been replaced by easily accessible image data. The importance of image data has shown steady growth in application orientation for the business domain with the advent of different image capturing devices and social media. This paper describes a methodology of feature extraction by an image binarization technique for enhancing identification and retrieval of information using content based image recognition. The proposed algorithm was tested on two public datasets, namely the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in total. It has outclassed the state-of-the-art techniques in performance measure and has shown statistical significance.

  9. Content-based Image Retrieval by Information Theoretic Measure

    Directory of Open Access Journals (Sweden)

    Madasu Hanmandlu

    2011-09-01

    Full Text Available Content-based image retrieval focuses on intuitive and efficient methods for retrieving images from databases based on the content of the images. A new entropy function that serves as a measure of information content in an image, termed 'an information theoretic measure', is devised in this paper. Among the various query paradigms, 'query by example' (QBE) is adopted to set a query image for retrieval from a large image database. In this paper, colour and texture features are extracted using the new entropy function, and the dominant colour is considered as a visual feature for a particular set of images. Colour and texture features thus constitute a two-dimensional feature vector for indexing the images. The low dimensionality of the feature vector speeds up the atomic query. Indices in a large database system help retrieve the images relevant to the query image without looking at every image in the database. The entropy values of colour and texture and the dominant colour are considered for measuring the similarity. The utility of the proposed image retrieval system based on the information theoretic measures is demonstrated on a benchmark dataset. Defence Science Journal, 2011, 61(5), pp. 415-430, DOI: http://dx.doi.org/10.14429/dsj.61.1177
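
    A small sketch of an entropy-based, two-dimensional (colour entropy, texture entropy) index in the spirit of the abstract; plain Shannon entropy is used as a stand-in, since the paper's new entropy function is not given here.

      import numpy as np

      def shannon_entropy(values, bins=64):
          """Shannon entropy of a channel, used as a one-number content descriptor."""
          hist, _ = np.histogram(values, bins=bins, density=True)
          p = hist[hist > 0]
          p = p / p.sum()
          return float(-np.sum(p * np.log2(p)))

      def feature_vector(rgb_image, gray_image):
          """Two-dimensional (colour entropy, texture entropy) feature vector."""
          return np.array([shannon_entropy(np.asarray(rgb_image).ravel()),
                           shannon_entropy(np.asarray(gray_image).ravel())])

      def query_by_example(query_vec, db_vecs):
          """Rank database entries by Euclidean distance to the query vector."""
          return np.argsort(np.linalg.norm(np.asarray(db_vecs) - query_vec, axis=1))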

  10. Deeply learnt hashing forests for content based image retrieval in prostate MR images

    Science.gov (United States)

    Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin

    2016-03-01

    The deluge in the size and heterogeneity of medical image databases necessitates content based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deep learnt Convolutional Neural Networks, used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity preserving feature descriptor called the Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and is not dependent on any external image standardization such as image normalization and registration. This image retrieval method is generalizable and is well-suited for retrieval in heterogeneous databases, other imaging modalities and other anatomies.

  11. Multimedia Content Based Image Retrieval III: Local Tetra Pattern

    Directory of Open Access Journals (Sweden)

    Nagaraja G S

    2014-06-01

    Full Text Available Content Based Image Retrieval methods face several challenges in the presentation of results and in precision levels due to various specific applications. To improve performance and address these problems, a novel algorithm, the Local Tetra Pattern (LTrP), is proposed, which is coded in four directions instead of the two directions used in the Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Ternary Pattern (LTP). To retrieve images, the surrounding neighbor pixel values are calculated from gray level differences, which gives the relation between the various multisorting algorithms using LBP, LDP, LTP and LTrP for sorting the images. The method mainly uses low level features such as color, texture and shape layout for image retrieval.
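
    For orientation, a sketch of the simpler Local Binary Pattern histogram that LTrP generalizes (two directions versus LTrP's four), using scikit-image; the LTrP coding itself is not reproduced here.

      import numpy as np
      from skimage.feature import local_binary_pattern

      def lbp_histogram(gray, points=8, radius=1):
          """Uniform LBP code histogram of a grayscale image."""
          codes = local_binary_pattern(gray, points, radius, method="uniform")
          n_bins = points + 2                    # uniform patterns plus one non-uniform bin
          hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
          return hist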

  12. The Use of QBIC Content-Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Ching-Yi Wu

    2004-03-01

    Full Text Available The fast increase in digital images has drawn increasing attention to the development of image retrieval technologies. Content-based image retrieval (CBIR) has become an important approach for retrieving image data from a large collection. This article reports our results on the use and user study of a CBIR system. Thirty-eight students majoring in art and design were invited to use IBM's QBIC (Query by Image Content) system through the Internet. Data on their information needs, behaviors, and retrieval strategies were collected through in-depth interviews, observation, and a self-described think-aloud process. The important conclusions are: (1) There are four types of information needs for image data: implicit, inspirational, ever-changing, and purposive; the type of need may change during the retrieval process. (2) CBIR is suitable for example-type queries, text retrieval is suitable for scenario-type queries, and image browsing is suitable for symbolic queries. (3) Unlike text retrieval, detailed description of the query condition may more easily lead to retrieval failure. (4) CBIR is suitable for domain-specific image collections, not for images on the World Wide Web. [Article content in Chinese]

  13. Relevance Feedback in Content Based Image Retrieval: A Review

    Directory of Open Access Journals (Sweden)

    Manesh B. Kokare

    2011-01-01

    Full Text Available This paper provides an overview of the technical achievements in the research area of relevance feedback (RF) in content-based image retrieval (CBIR). Relevance feedback is a powerful technique for effectively improving the performance of CBIR systems. Reducing the semantic gap between low-level features and high-level concepts remains an open research area. The paper covers the current state of the art of research on relevance feedback in CBIR; various relevance feedback techniques and issues in relevance feedback are discussed in detail.

  14. Content Based Image Recognition by Information Fusion with Multiview Features

    Directory of Open Access Journals (Sweden)

    Rik Das

    2015-09-01

    Full Text Available Substantial research interest has been observed in the field of object recognition as a vital component of modern intelligent systems. Content based image classification and retrieval have been considered as two popular techniques for identifying the object of interest. Feature extraction plays the pivotal role in successful implementation of these techniques. The paper presents two novel techniques of feature extraction from diverse image categories, both in the spatial domain and in the frequency domain. The multi-view features from the image categories were evaluated for classification and retrieval performance by means of a fusion based recognition architecture. The experimentation was carried out with four different popular public datasets. The proposed fusion framework exhibited an average increase of 24.71% and 20.78% in precision rates for classification and retrieval respectively, when compared to state-of-the-art techniques. The experimental findings were validated with a paired t-test for statistical significance.

  15. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    Science.gov (United States)

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve the vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method, selected by the best objective functions. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural and structural features are obtained from the segmented images of normal and DR-affected retinas and are analyzed. CBIR in medical image retrieval applications is used to assist physicians in clinical decision-support techniques and research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean distance measure; similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall, which are found to be 96% and 58% for the CBIR system using HSA based Otsu MLT segmentation. The CBIR systems developed using HSA based Otsu MLT and conventional Otsu MLT methods are compared. This automated CBIR system could be recommended for use in computer assisted diagnosis for diabetic retinopathy screening.
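
    The evaluation quoted above (96% precision, 58% recall) follows the standard retrieval definitions, sketched below with illustrative numbers.

      def precision_recall(retrieved, relevant):
          """Precision and recall of a retrieved set against the relevant set."""
          retrieved, relevant = set(retrieved), set(relevant)
          hits = len(retrieved & relevant)
          precision = hits / len(retrieved) if retrieved else 0.0
          recall = hits / len(relevant) if relevant else 0.0
          return precision, recall

      # Example: 10 images retrieved, 6 of them relevant, 12 relevant images exist in total.
      p, r = precision_recall(range(10), list(range(6)) + list(range(20, 26)))
      print(p, r)    # 0.6 0.5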

  16. Content-Based Image Retrieval for Semiconductor Process Characterization

    Directory of Open Access Journals (Sweden)

    Kenneth W. Tobin

    2002-07-01

    Full Text Available Image data management in the semiconductor manufacturing environment is becoming more problematic as the size of silicon wafers continues to increase, while the dimension of critical features continues to shrink. Fabricators rely on a growing host of image-generating inspection tools to monitor complex device manufacturing processes. These inspection tools include optical and laser scattering microscopy, confocal microscopy, scanning electron microscopy, and atomic force microscopy. The number of images being generated is on the order of 20,000 to 30,000 each week in some fabrication facilities today. Manufacturers currently maintain on the order of 500,000 images in their data management systems for extended periods of time. Gleaning the historical value from these large image repositories for yield improvement is difficult to accomplish using the standard database methods currently associated with these data sets (e.g., performing queries based on time and date, lot numbers, wafer identification numbers, etc.). Researchers at the Oak Ridge National Laboratory have developed and tested a content-based image retrieval technology that is specific to manufacturing environments. In this paper, we describe the feature representation of semiconductor defect images along with methods of indexing and retrieval, and results from initial field-testing in the semiconductor manufacturing environment.

  17. Content-based image database system for epilepsy.

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad; Elisevich, Kost

    2005-09-01

    We have designed and implemented a human brain multi-modality database system with content-based image management, navigation and retrieval support for epilepsy. The system consists of several modules including a database backbone, brain structure identification and localization, segmentation, registration, visual feature extraction, clustering/classification and query modules. Our newly developed anatomical landmark localization and brain structure identification method facilitates navigation through an image data and extracts useful information for segmentation, registration and query modules. The database stores T1-, T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities with associated clinical data. We confine the visual feature extractors within anatomical structures to support semantically rich content-based procedures. The proposed system serves as a research tool to evaluate a vast number of hypotheses regarding the condition such as resection of the hippocampus with a relatively small volume and high average signal intensity on FLAIR. Once the database is populated, using data mining tools, partially invisible correlations between different modalities of data, modeled in database schema, can be discovered. The design and implementation aspects of the proposed system are the main focus of this paper.

  18. Novel Approach to Content Based Image Retrieval Using Evolutionary Computing

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-08-01

    Full Text Available Content Based Image Retrieval (CBIR) is an active research area in the multimedia domain in this era of information technology. One of the challenges of CBIR is to bridge the gap between low level features and high level semantics. In this study we investigate Particle Swarm Optimization (PSO), a stochastic algorithm, and the Genetic Algorithm (GA) for CBIR to overcome this drawback. We propose a new CBIR system based on PSO and GA coupled with a Support Vector Machine (SVM). GA and PSO are both evolutionary algorithms and in this study are used to increase the number of relevant images, while the SVM performs the final classification. To check the performance of the proposed technique, extensive experiments are performed using the Corel dataset. The proposed technique achieves higher accuracy compared to previously introduced techniques (FEI, FIRM, SIMPLIcity, simple HIST and WH).

  19. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text, as the adage says: "One picture is worth a thousand words." Compared with text consisting of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less limited structure of images presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when only a limited number of learning examples and limited background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans gain knowledge. People can exchange knowledge with others by discussing and contributing information on the web; as a result, web pages on the internet have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility for image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts for machines to understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is thus content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on semantic closeness to the input query. The novelty of the system is twofold: first, images are retrieved not only based on text cues but on their actual contents as well; second, the grouping

  20. Content Based Image Retrieval Using Singular Value Decomposition

    Directory of Open Access Journals (Sweden)

    K. Harshini

    2012-10-01

    Full Text Available A computer application that automatically identifies or verifies a person from a digital image or a video frame can do so by comparing selected facial features from the image against a facial database. Content based image retrieval (CBIR) is a technique for retrieving images on the basis of automatically derived features. This paper focuses on a low-dimensional feature based indexing technique for achieving efficient and effective retrieval performance. An appearance based face recognition method using singular value decomposition (SVD) is proposed in this paper. It differs from principal component analysis (PCA), which effectively considers only the Euclidean structure of face space for analysis and leads to poor classification performance in the case of great facial variations such as expression, lighting and occlusion, because the image gray value matrices on which it operates are very sensitive to these facial variations. We use the fact that every image matrix has the well known singular value decomposition (SVD) and can be regarded as a composition of a set of base images generated by the SVD, and we further point out that the base images are sensitive to the composition of the face image. Our experimental results show that SVD provides a better representation and achieves lower error rates in face recognition, but it has the disadvantage of dragging down the performance evaluation. In order to overcome this, we conducted experiments introducing a controlling parameter 'α', which ranges from 0 to 1, and achieved better results for α=0.4 than for the other values of 'α'. Keywords: Singular value decomposition (SVD), Euclidean distance, original gray value matrix (OGVM).
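
    A minimal sketch of an SVD-based face signature with the controlling parameter α applied to the singular values; how α actually enters the paper's method is not stated in the abstract, so damping the singular values by s**α is an assumption of this example.

      import numpy as np

      def svd_signature(gray, k=20, alpha=0.4):
          """Top-k singular values, damped by alpha, as a compact face descriptor."""
          s = np.linalg.svd(np.asarray(gray, dtype=np.float64), compute_uv=False)
          sig = s[:k] ** alpha
          return sig / (np.linalg.norm(sig) + 1e-12)

      def nearest_face(query_sig, gallery_sigs):
          """Index of the gallery face closest to the query in Euclidean distance."""
          d = np.linalg.norm(np.asarray(gallery_sigs) - query_sig, axis=1)
          return int(np.argmin(d))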

  1. Weighted feature fusion for content-based image retrieval

    Science.gov (United States)

    Soysal, Omurhan A.; Sumer, Emre

    2016-07-01

    The feature descriptors SIFT (Scale Invariant Feature Transform), SURF (Speeded-Up Robust Features) and ORB (Oriented FAST and Rotated BRIEF) are among the most commonly used solutions for content-based image retrieval problems. In this paper, a novel approach called "Weighted Feature Fusion" is proposed as a generic solution instead of applying problem-specific descriptors alone. Experiments were performed on two basic data sets from Inria in order to improve the precision of retrieval results. It was found that in cases where the descriptors were used alone, the proposed approach yielded 10-30% more accurate results than ORB alone. It also yielded 9-22% and 12-29% fewer false positives compared to SIFT alone and SURF alone, respectively.
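
    A hedged OpenCV sketch of the ORB baseline and a generic weighted combination of per-descriptor scores; the match threshold and fusion weights are illustrative, not the paper's actual "Weighted Feature Fusion" parameters.

      import cv2
      import numpy as np

      def orb_match_score(img_a, img_b, n_features=500):
          """Number of good ORB matches between two grayscale images (higher = more similar)."""
          orb = cv2.ORB_create(nfeatures=n_features)
          _, des_a = orb.detectAndCompute(img_a, None)
          _, des_b = orb.detectAndCompute(img_b, None)
          if des_a is None or des_b is None:
              return 0
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = matcher.match(des_a, des_b)
          return sum(1 for m in matches if m.distance < 50)   # threshold is illustrative

      def fused_score(scores, weights):
          """Weighted combination of per-descriptor similarity scores."""
          return float(np.dot(scores, weights) / np.sum(weights))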

  2. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Israël, Menno; Broek, van den Egon L.; Putten, van der Peter; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  3. Content-based image hashing using wave atoms

    Institute of Scientific and Technical Information of China (English)

    Liu Fang; Leung Hon-Yin; Cheng Lee-Ming; Ji Xiao-Yong

    2012-01-01

    It is well known that robustness, fragility, and security are three important criteria for image hashing; however, how to build a system that strongly meets all three criteria is still a challenge. In this paper, a content-based image hashing scheme using wave atoms is proposed which satisfies the above criteria. Compared with traditional transforms like the wavelet transform and the discrete cosine transform (DCT), the wave atom transform is adopted for its sparser expansion and better texture feature extraction, which shows better performance in both robustness and fragility. In addition, multi-frequency detection is presented to provide an application-defined trade-off. To ensure the security of the proposed approach and its resistance to a chosen-plaintext attack, a randomized pixel modulation based on the Rényi chaotic map is employed, combined with the nonlinear wave atom transform. The experimental results reveal that the proposed scheme is robust against content-preserving manipulations and has a good discriminative capability for malicious tampering.

  4. Global Descriptor Attributes Based Content Based Image Retrieval of Query Images

    Directory of Open Access Journals (Sweden)

    Jaykrishna Joshi

    2015-02-01

    Full Text Available The need for efficient content-based image retrieval systems has increased hugely. Efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content based image retrieval (CBIR) is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. In this proposed system we investigate a method for describing the contents of images which characterizes images by global descriptor attributes, where global features are extracted to make the system more efficient, using color features, namely color expectancy, color variance and skewness, and the texture feature correlation.

  5. Segmentation and Content-Based Watermarking for Color Image and Image Region Indexing and Retrieval

    Directory of Open Access Journals (Sweden)

    Nikolaos V. Boulgouris

    2002-04-01

    Full Text Available In this paper, an entirely novel approach to image indexing is presented using content-based watermarking. The proposed system uses color image segmentation and watermarking in order to facilitate content-based indexing, retrieval and manipulation of digital images and image regions. A novel segmentation algorithm is applied on reduced images and the resulting segmentation mask is embedded in the image using watermarking techniques. In each region of the image, indexing information is additionally embedded. In this way, the proposed system is endowed with content-based access and indexing capabilities which can be easily exploited via a simple watermark detection process. Several experiments have shown the potential of this approach.

  6. Content based no-reference image quality metrics

    OpenAIRE

    Marini, A.C.

    2012-01-01

    Images are playing a more and more important role in sharing, expressing, mining and exchanging information in our daily lives. Now we can all easily capture and share images anywhere and anytime. Since digital images are subject to a wide variety of distortions during acquisition, processing, compression, storage, transmission and reproduction; it becomes necessary to assess the Image Quality. In this thesis, starting from an organized overview of available Image Quality Assessment methods, ...

  7. CONTENT BASED IMAGE RETRIEVAL USING DOMINANT COLOR, TEXTURE AND SHAPE

    Directory of Open Access Journals (Sweden)

    M.BABU RAO,

    2011-04-01

    Full Text Available These days people are interested in using digital images, so the size of image databases is increasing enormously and much interest is paid to finding images in a database. There is a great need to develop an efficient technique for finding images. In order to find an image, the image has to be represented with certain features; color, texture and shape are three important visual features of an image. In this paper we propose an efficient image retrieval technique which uses dynamic dominant color, texture and shape features of an image. An image is first uniformly divided into 8 coarse partitions; after this coarse partition, the centroid of each partition (the "color bin" in MPEG-7) is selected as its dominant color. Texture of an image is obtained by using the Gray Level Co-occurrence Matrix (GLCM). Color and texture features are normalized. Shape information is captured in terms of edge images computed using Gradient Vector Flow fields; invariant moments are then used to record the shape features. The combination of the color and texture features of an image in conjunction with the shape features provides a robust feature set for image retrieval. The weighted Euclidean distance of color, texture and shape features is used in retrieving similar images. The efficiency of the method is demonstrated with the results.

  8. Retrieving biomedical images through content-based learning from examples using fine granularity

    Science.gov (United States)

    Jiang, Hao; Xu, Songhua; Lau, Francis C. M.

    2012-02-01

    Traditional content-based image retrieval methods based on learning from examples analyze and attempt to understand high-level semantics of an image as a whole. They typically apply certain case-based reasoning technique to interpret and retrieve images through measuring the semantic similarity or relatedness between example images and search candidate images. The drawback of such a traditional content-based image retrieval paradigm is that the summation of imagery contents in an image tends to lead to tremendous variation from image to image. Hence, semantically related images may only exhibit a small pocket of common elements, if at all. Such variability in image visual composition poses great challenges to content-based image retrieval methods that operate at the granularity of entire images. In this study, we explore a new content-based image retrieval algorithm that mines visual patterns of finer granularities inside a whole image to identify visual instances which can more reliably and generically represent a given search concept. We performed preliminary experiments to validate our new idea for content-based image retrieval and obtained very encouraging results.

  9. Application of fuzzy logic in content-based image retrieval

    Institute of Scientific and Technical Information of China (English)

    WANG Xiao-ling; XIE Kang-lin

    2008-01-01

    We propose a fuzzy logic-based image retrieval system in which image similarity can be inferred in a nonlinear manner, as in human thinking. In the fuzzy inference process, weight assignments for multiple image features are resolved implicitly. Each fuzzy rule embeds the subjectivity of human perception of image contents. A color histogram called the average area histogram is proposed to represent the color features. Experimental results show the efficiency and feasibility of the proposed algorithms.

  10. Fuzzy Content-Based Retrieval in Image Databases.

    Science.gov (United States)

    Wu, Jian Kang; Narasimhalu, A. Desai

    1998-01-01

    Proposes a fuzzy-image database model and a concept of fuzzy space; describes fuzzy-query processing in fuzzy space and fuzzy indexing on complete fuzzy vectors; and uses an example image database, the computer-aided facial-image inference and retrieval system (CAFIIR), for explanation throughout. (Author/LRW)

  11. Image Mining using Content Based Image Retrieval System

    OpenAIRE

    Rajshree S. Dubey; Niket Bhargava; Rajnish Choubey

    2010-01-01

    The image depends on human perception and is also based on the machine vision system. The image retrieval is based on the color histogram and texture. The human perception of an image is based on the human neurons, which hold on the order of 10^12 units of information; the human brain continuously learns through sensory organs such as the eye, which transmits the image to the brain, which in turn interprets it. The research challenge is how the brain processes the information in a semantic manner, which is a hot...

  12. Content Based Image Retrieval Using Local Color Histogram

    Directory of Open Access Journals (Sweden)

    Metty Mustikasari, Eri Prasetyo, Suryadi Harmanto

    2014-01-01

    Full Text Available This paper proposes a technique to retrieve images based on color features using local histograms. The image is divided into nine sub-blocks of equal size. The color of each sub-block is extracted by quantizing the HSV color space into a 12x6x6 histogram. In this retrieval system, Euclidean distance and City block distance are used to measure the similarity of images. The algorithm is tested using the Corel image database. The performance of the retrieval system is measured in terms of its recall and precision. The effectiveness of the retrieval system is also measured based on AVRR (Average Rank of Relevant Images) and IAVRR (Ideal Average Rank of Relevant Images), as proposed by Faloutsos. The experimental results show that the retrieval system has good performance and that City block distance achieves higher retrieval performance than Euclidean distance.
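
    As a rough illustration of the local-histogram matching described above, the sketch below quantizes an HSV image into a 12x6x6 histogram per sub-block of a 3x3 grid and compares two images with both Euclidean and City block (L1) distances. The bin counts and block layout follow the abstract; everything else (input scaling, normalization) is an assumption.

```python
import numpy as np

def local_hsv_histograms(hsv, blocks=3, bins=(12, 6, 6)):
    """Concatenate one 12x6x6 HSV histogram per sub-block.

    hsv : H x W x 3 float array with all channels scaled to [0, 1).
    """
    h, w, _ = hsv.shape
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            block = hsv[i * h // blocks:(i + 1) * h // blocks,
                        j * w // blocks:(j + 1) * w // blocks]
            hist, _ = np.histogramdd(block.reshape(-1, 3), bins=bins,
                                     range=((0, 1), (0, 1), (0, 1)))
            hist = hist.ravel()
            feats.append(hist / max(hist.sum(), 1))   # normalize per block
    return np.concatenate(feats)

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def city_block(a, b):
    return np.sum(np.abs(a - b))

rng = np.random.default_rng(1)
img_a, img_b = rng.random((64, 64, 3)), rng.random((64, 64, 3))
fa, fb = local_hsv_histograms(img_a), local_hsv_histograms(img_b)
print(euclidean(fa, fb), city_block(fa, fb))
```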

  13. A NEW CONTENT BASED IMAGE RETRIEVAL SYSTEM USING GMM AND RELEVANCE FEEDBACK

    Directory of Open Access Journals (Sweden)

    N. Shanmugapriya

    2014-01-01

    Full Text Available Content-Based Image Retrieval (CBIR), also known as Query By Image Content (QBIC), is the application of computer vision techniques to the image retrieval problem, i.e., searching for digital images in large databases. The need for a versatile and general-purpose CBIR system for very large image databases has attracted the focus of many researchers in information-technology giants and leading academic institutions. Because of the development of network and multimedia technologies, users are no longer satisfied with traditional information retrieval techniques, and CBIR is becoming a source of exact and fast retrieval. Texture and color are important features of content based image retrieval systems. In the proposed method, images can be retrieved using color-based, texture-based, or combined color and texture features: the auto color correlogram and correlation are used to extract color features, and Gaussian mixture models are used to extract texture features. In this study, query point movement is used as the relevance feedback technique. The proposed method thus achieves better performance and accuracy in retrieving images.
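
    The record above uses query point movement as its relevance-feedback strategy. A minimal Rocchio-style sketch of that idea follows; the weighting constants are illustrative assumptions.

```python
import numpy as np

def move_query_point(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Shift the query feature vector towards images marked relevant and
    away from those marked non-relevant (classic query point movement)."""
    new_query = alpha * query
    if len(relevant):
        new_query += beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        new_query -= gamma * np.mean(non_relevant, axis=0)
    return new_query

rng = np.random.default_rng(2)
query = rng.random(32)                 # e.g. a combined color + texture feature vector
relevant = rng.random((5, 32))         # features of images the user accepted
non_relevant = rng.random((3, 32))     # features of images the user rejected
refined = move_query_point(query, relevant, non_relevant)
```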

  14. Content-based Image Retrieval Using Color Histogram

    Institute of Scientific and Technical Information of China (English)

    HUANG Wen-bei; HE Liang; GU Jun-zhong

    2006-01-01

    This paper introduces the principles of using color histograms to match images in CBIR, and a prototype CBIR system is designed with a color matching function. A new method using a 2-dimensional color histogram based on hue and saturation to extract and represent the color information of an image is presented. We also improve the Euclidean-distance algorithm by adding a Center of Color term to it. The experiments show that the modifications made to the Euclidean distance significantly improve the quality and efficiency of retrieval.

  15. Indexing, learning and content-based retrieval for special purpose image databases

    NARCIS (Netherlands)

    Huiskes, M.J.; Pauwels, E.J.

    2004-01-01

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current state-of-the art by t

  16. Indexing, learning and content-based retrieval for special purpose image databases

    NARCIS (Netherlands)

    Huiskes, M.J.; Pauwels, E.J.

    2005-01-01

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current state-of-the art by t

  17. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

    Full Text Available A content based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used for color based retrieval of histopathological images are the color co-occurrence matrix (CCM) and a histogram with meta features. For texture based retrieval, the gray level co-occurrence matrix (GLCM) and local binary patterns (LBP) were used. For shape based retrieval, Canny edge detection and Otsu's method with a multivariable threshold were used. Texture and shape based retrieval were implemented using MRI (magnetic resonance) images. The most remarkable characteristic of the article is its content based approach for each medical imaging modality. Our efforts were focused on the initial visual search. In our experiments, the histogram with meta features for color based retrieval of histopathological images shows a precision of 60% and a recall of 30%, whereas GLCM for texture based retrieval of MRI images shows a precision of 70% and a recall of 20%. Shape based retrieval of MRI images shows a precision of 50% and a recall of 25%. The retrieval results show that this simple approach is successful.

  18. Image Content Based Retrieval System using Cosine Similarity for Skin Disease Images

    Directory of Open Access Journals (Sweden)

    Sukhdeep Kaur

    2013-09-01

    Full Text Available A content based image retrieval (CBIR) system is proposed to assist the dermatologist in the diagnosis of skin diseases. First, after collecting various skin disease images and their text information (disease name, symptoms, cure, etc.), a test database (for query images) and a training database of approximately 460 images (for image matching) are prepared. Second, features are extracted by calculating descriptive statistics. Third, similarity matching using cosine similarity and Euclidean distance based on the extracted features is discussed. Fourth, for better results, the first four images are selected during indexing and their related text information is shown in a text file. Last, the results are compared according to the doctor's description and according to image content, in terms of precision and recall and also in terms of a self-developed scoring system.
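
    The similarity matching step above relies on cosine similarity over descriptive statistics. A small sketch of that measure and the resulting ranking follows; the particular statistics used here (mean, standard deviation and skewness per channel) are an assumption for illustration.

```python
import numpy as np

def descriptive_stats(img):
    """Mean, standard deviation and skewness per color channel (H x W x 3)."""
    flat = img.reshape(-1, 3).astype(float)
    mean = flat.mean(axis=0)
    std = flat.std(axis=0) + 1e-12
    skew = ((flat - mean) ** 3).mean(axis=0) / std ** 3
    return np.concatenate([mean, std, skew])

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(3)
query = descriptive_stats(rng.random((64, 64, 3)))
db = [descriptive_stats(rng.random((64, 64, 3))) for _ in range(50)]

# Rank by decreasing cosine similarity; Euclidean distance would rank ascending.
order = np.argsort([-cosine_similarity(query, f) for f in db])
print(order[:4])     # the "first four images" shown to the user
```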

  19. BI-LEVEL CLASSIFICATION OF COLOR INDEXED IMAGE HISTOGRAMS FOR CONTENT BASED IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Karpagam Vilvanathan

    2013-01-01

    Full Text Available This dissertation proposes content based image classification and retrieval with Classification and Regression Tree (CART. A simple CBIR system (WH is designed and proved to be efficient even in the presence of distorted and noisy images. WH exhibits good performance in terms of precision, without using any intensive image processing feature extraction techniques. Unique indexed color histogram and wavelet decomposition based horizontal, vertical and diagonal image attributes have been chosen as the primary attributes in the design of the retrieval system. The output feature vectors of the WH method serve as input to the proposed decision tree based image classification and retrieval system. The performance of the proposed content based image classification and retrieval system is evaluated with the standard SIMPLIcity dataset which has been used in several previous works. The performance of the system is measured with precision as the metric. Holdout validation and k-fold cross validation are used to validate the results. The proposed system performs obviously better than SIMPLIcity and all the other compared methods.

  20. Content Based Image Retrieval using Hierarchical and K-Means Clustering Techniques

    Directory of Open Access Journals (Sweden)

    V.S.V.S. Murthy

    2010-03-01

    Full Text Available In this paper we present an image retrieval system that takes an image as the input query and retrieves images based on image content. Content Based Image Retrieval is an approach for retrieving semantically relevant images from an image database based on automatically derived image features. The unique aspect of the system is the utilization of hierarchical and k-means clustering techniques. The proposed procedure consists of two stages: first, most of the images are filtered out by hierarchical clustering, and then the remaining clustered images are passed to k-means, so that better ranked image results are obtained.
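
    A compact sketch of the two-stage idea described above: prune the database with hierarchical (agglomerative) clustering, then refine the surviving cluster with k-means, here using scikit-learn. The cluster counts and feature dimensions are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

rng = np.random.default_rng(4)
features = rng.random((500, 64))        # one feature vector per database image
query = rng.random(64)

# Stage 1: coarse hierarchical clustering filters out most of the database.
coarse = AgglomerativeClustering(n_clusters=10).fit_predict(features)
centroids = np.array([features[coarse == c].mean(axis=0) for c in range(10)])
best = int(np.argmin(np.linalg.norm(centroids - query, axis=1)))
candidates = features[coarse == best]

# Stage 2: k-means on the surviving candidates gives the final, finer grouping.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(candidates)
target = int(km.predict(query[None])[0])
retrieved = candidates[km.labels_ == target]
print(len(candidates), len(retrieved))
```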

  1. Comparison of color representations for content-based image retrieval in dermatology

    NARCIS (Netherlands)

    Bosman, Hedde H.W.J.; Petkov, Nicolai; Jonkman, Marcel F.

    2010-01-01

    Background/purpose: We compare the effectiveness of 10 different color representations in a content-based image retrieval task for dermatology. Methods: As features, we use the average colors of healthy and lesion skin in an image. The extracted features are used to retrieve similar images from a da

  2. From Content-Based Image Retrieval by Shape to Image Annotation

    Directory of Open Access Journals (Sweden)

    MOCANU, I.

    2010-11-01

    Full Text Available In many areas such as commerce, medical investigations, and others, large collections of digital images are being created. Search operations inside these collections of images are usually based on low-level features of objects contained in an image: color, shape, texture. Although such techniques of content-based image retrieval are useful, they are strongly limited by their inability to consider the meaning of images. Moreover, specifying a query in terms of low level features may not be very simple. Image annotation, in which images are associated with keywords describing their semantics, is a more effective way of image retrieval and queries can be naturally specified by the user. The paper presents a combined set of methods for image retrieval, in which both low level features and semantic properties are taken into account when retrieving images. First, it describes some methods for image representation and retrieval based on shape, and proposes a new such method, which overcomes some of the existing limitations. Then, it describes a new method for image semantic annotation based on a genetic algorithm, which is further improved from two points of view: the obtained solution value - using an anticipatory genetic algorithm, and the execution time - using a parallel genetic algorithm.

  3. Content Based Image Retrieval using Color Boosted Salient Points and Shape features of an image.

    Directory of Open Access Journals (Sweden)

    Hiremath P. S

    2008-02-01

    Full Text Available Salient points are locations in an image where there is a significant variation with respect to a chosen image feature. Since the set of salient points in an image captures important local characteristics of that image, they can form the basis of a good image representation for content-based image retrieval (CBIR). Salient features are generally determined from the local differential structure of images. They focus on the shape saliency of the local neighborhood. Most of these detectors are luminance based, which has the disadvantage that the distinctiveness of the local color information is completely ignored in determining salient image features. To fully exploit the possibilities of salient point detection in color images, color distinctiveness should be taken into account in addition to shape distinctiveness. This paper presents a method for salient point determination based on color saliency. The color and texture information around these points of interest serve as the local descriptors of the image. In addition, the shape information is captured in terms of edge images computed using Gradient Vector Flow fields. Invariant moments are then used to record the shape features. The combination of the local color, texture and the global shape features provides a robust feature set for image retrieval. The experimental results demonstrate the efficacy of the method.

  4. Learning effective color features for content based image retrieval in dermatology

    NARCIS (Netherlands)

    Bunte, Kerstin; Biehl, Michael; Jonkman, Marcel F.; Petkov, Nicolai

    2011-01-01

    We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn favorabl

  5. Comparison of color representations for content-based image retrieval in dermatology

    OpenAIRE

    Bosman, Hedde H.W.J.; Petkov, Nicolai; Jonkman, Marcel F.

    2010-01-01

    Background/purpose: We compare the effectiveness of 10 different color representations in a content-based image retrieval task for dermatology. Methods: As features, we use the average colors of healthy and lesion skin in an image. The extracted features are used to retrieve similar images from a database using a k-nearest-neighbor search and Euclidean distance. The images in the database are divided into four different color categories. We measure the effectiveness of retrieval by the averag...

  6. Adapting content-based image retrieval techniques for the semantic annotation of medical images.

    Science.gov (United States)

    Kumar, Ashnil; Dyer, Shane; Kim, Jinman; Li, Changyang; Leong, Philip H W; Fulham, Michael; Feng, Dagan

    2016-04-01

    The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty in discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using retrieval via weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images.
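
    The annotation-by-retrieval approach above transfers labels from visually similar labelled images via weighted nearest-neighbour retrieval. A minimal sketch of that voting step follows; the inverse-distance weighting and feature dimensions are assumptions, and the paper also evaluates a multi-class classification variant.

```python
import numpy as np

def annotate_by_retrieval(query_feat, db_feats, db_labels, k=5):
    """Return the label with the largest inverse-distance-weighted vote
    among the k nearest labelled images."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = {}
    for idx in nearest:
        weight = 1.0 / (dists[idx] + 1e-12)
        votes[db_labels[idx]] = votes.get(db_labels[idx], 0.0) + weight
    return max(votes, key=votes.get)

rng = np.random.default_rng(5)
db_feats = rng.random((200, 48))                     # low-level CBIR features
db_labels = rng.choice(["calcification", "vessel obstruction", "normal"], 200)
print(annotate_by_retrieval(rng.random(48), db_feats, db_labels))
```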

  7. Content-Based Image Retrieval Based on Electromagnetism-Like Mechanism

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2013-01-01

    Full Text Available Recently, many researchers in the field of automatic content-based image retrieval have devoted a remarkable amount of research looking for methods to retrieve the best relevant images to the query image. This paper presents a novel algorithm for increasing the precision in content-based image retrieval based on electromagnetism optimization technique. The electromagnetism optimization is a nature-inspired technique that follows the collective attraction-repulsion mechanism by considering each image as an electrical charge. The algorithm is composed of two phases: fitness function measurement and electromagnetism optimization technique. It is implemented on a database with 8,000 images spread across 80 classes with 100 images in each class. Eight thousand queries are fired on the database, and the overall average precision is computed. Experimental results of the proposed approach have shown significant improvement in the retrieval performance in regard to precision.

  8. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly, and various techniques are being developed to retrieve or search the information contained in images. Traditional text based image retrieval is not sufficient: it is time consuming because it requires manual image annotation, and the annotation differs from person to person. An alternative is a Content Based Image Retrieval (CBIR) system, which retrieves or searches for images using their content rather than text or keywords. A great deal of research has been carried out in the area of CBIR with various feature extraction techniques. Shape is a significant image feature as it reflects human perception; moreover, shape is quite simple for the user to employ when defining an object in an image, compared to other features such as color or texture. However, no descriptor gives fruitful results when applied alone; by combining a descriptor with an improved classifier, one can use the positive features of both. Therefore, an attempt is made to establish an algorithm for accurate shape feature extraction in Content Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state-of-the-art techniques.

  9. Content-Based Image Retrieval Using Support Vector Machine in digital image processing techniques

    Directory of Open Access Journals (Sweden)

    G.V.Hari Prasad

    2012-04-01

    Full Text Available The rapid growth of computer technologies and the advent of the World Wide Web have increased the amount and the complexity of multimedia information. A content-based image retrieval (CBIR) system has been developed as an efficient image retrieval tool, whereby the user can provide a query to the system to retrieve the desired image from the image database. However, the traditional relevance feedback of CBIR has some limitations that decrease the performance of the CBIR system, such as the imbalanced training-set problem, the classification problem, the limited-information-from-user problem, and the insufficient training-set problem. Therefore, in this study, we propose an enhanced relevance-feedback method to support the user query based on representative image selection and weight ranking of the retrieved images. The support vector machine (SVM) is used to support the learning process and reduce the semantic gap between the user and the CBIR system. In the experiments, the proposed learning method enabled users to improve their search results, and the experiments also showed that solving the imbalanced training-set issue improves the performance of CBIR.
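
    The abstract above couples relevance feedback with an SVM to separate relevant from non-relevant images. Below is a small scikit-learn sketch of re-ranking the database by the SVM decision value after one feedback round; the feature sizes, kernel choice and random feedback labels are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
db_feats = rng.random((300, 64))             # CBIR feature vectors for the database

# One round of user feedback on a handful of results: 1 = relevant, 0 = not relevant.
fb_idx = rng.choice(300, 20, replace=False)
fb_labels = rng.integers(0, 2, size=20)

svm = SVC(kernel="rbf", C=1.0).fit(db_feats[fb_idx], fb_labels)

# Re-rank all database images by their signed distance to the decision boundary:
# the larger the score, the more "relevant-like" the image.
scores = svm.decision_function(db_feats)
reranked = np.argsort(-scores)
print(reranked[:10])
```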

  10. A Content-Based Parallel Image Retrieval System on Cluster Architectures

    Institute of Scientific and Technical Information of China (English)

    ZHOU Bing; SHEN Jun-yi; PENG Qin-ke

    2004-01-01

    We propose a content-based parallel image retrieval system to achieve a high response capability. Our system is developed on cluster architectures. It has several retrieval servers to supply the service of content-based image retrieval, and it adopts the Browser/Server (B/S) mode, so users can visit the system through web pages. It uses symmetrical color-spatial features (SCSF) to represent the content of an image. The SCSF is effective and efficient for image matching because it is independent of image distortions such as rotation and flipping, and it increases the matching accuracy. The SCSF is organized by an M-tree, which speeds up the searching procedure. Our experiments show that image matching is quick and efficient with the use of SCSF, and with the support of several retrieval servers the system can respond to many users at the same time.

  11. Design of Content-Based Retrieval System in Remote Sensing Image Database

    Institute of Scientific and Technical Information of China (English)

    LI Feng; ZENG Zhiming; HU Yanfeng; FU Kun

    2006-01-01

    To retrieve object regions efficiently from a massive remote sensing image database, a model for content-based retrieval of remote sensing images is first given according to the characteristics of remote sensing image applications; then the algorithms adopted by this model for feature extraction, multidimensional indexing, and relevance feedback are analyzed in detail. Finally, the topics that remain to be researched in this model are proposed.

  12. Content-Based Image Retrieval Using Texture Color Shape and Region

    Directory of Open Access Journals (Sweden)

    Syed Hamad Shirazi

    2016-01-01

    Full Text Available Interest in accurately retrieving required images from databases of digital images is growing day by day. Images are represented by certain features to facilitate accurate retrieval of the required images. These features include texture, color, shape and region. This is an active research area, and researchers have developed many techniques to use these features for accurate retrieval of required images from databases. In this paper we present a literature survey of Content Based Image Retrieval (CBIR) techniques based on texture, color, shape and region. We also review some of the state-of-the-art tools developed for CBIR.

  13. Performance Evaluation of Content Based Image Retrieval on Feature Optimization and Selection Using Swarm Intelligence

    Directory of Open Access Journals (Sweden)

    Kirti Jain

    2016-03-01

    Full Text Available The diversity and applicability of swarm intelligence are increasing every day in the fields of science and engineering. Swarm intelligence provides a dynamic feature-optimization concept. We have used swarm intelligence for feature optimization and feature selection in content-based image retrieval. The performance of content-based image retrieval is judged by precision and recall, whose values depend on the retrieval capacity of the image features. The basic raw image content has visual features such as color, texture, shape and size. The partial feature extraction technique is based on a geometric invariant function. Three swarm intelligence algorithms were used for the optimization of features: ant colony optimization, particle swarm optimization (PSO), and the glowworm optimization algorithm. The Corel image dataset and MATLAB were used for evaluating performance.

  14. ImageGrouper: a group-oriented user interface for content-based image retrieval and digital image arrangement

    NARCIS (Netherlands)

    Nakazato, Munehiro; Manola, Ljubomir; Huang, Thomas S.

    2003-01-01

    In content-based image retrieval (CBIR), experimental (trial-and-error) query with relevance feedback is essential for successful retrieval. Unfortunately, the traditional user interfaces are not suitable for trying different combinations of query examples. This is because first, these systems assum

  15. A picture is worth a thousand words : content-based image retrieval techniques

    NARCIS (Netherlands)

    Thomée, Bart

    2010-01-01

    In my dissertation I investigate techniques for improving the state of the art in content-based image retrieval. To place my work into context, I highlight the current trends and challenges in my field by analyzing over 200 recent articles. Next, I propose a novel paradigm called ‘artificial imagina

  16. Design Approach for Content-based Image Retrieval using Gabor-Zernike features

    Directory of Open Access Journals (Sweden)

    Abhinav Deshpande

    2012-04-01

    Full Text Available The process of extracting different features from an image is known as content-based image retrieval. Color, texture and shape are the major features of an image and play a vital role in its representation. In this paper, a novel method is proposed to extract the region of interest (ROI) from an image prior to the extraction of its salient features. The image is subjected to normalization so that noise components due to Gaussian or other types of noise present in the image are eliminated and the successful extraction of various features of the image can be accomplished. Gabor filters are used to extract the texture feature from an image, whereas Zernike moments can be used to extract the shape feature. The Gabor and Zernike features can be combined to extract Gabor-Zernike features from an image.
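
    The texture half of the feature set above comes from Gabor filtering. Below is a minimal numpy/scipy sketch that builds real Gabor kernels at a few orientations, filters a grayscale image, and keeps the mean and standard deviation of each response as texture features; all kernel parameters are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5):
    """Real part of a Gabor kernel with orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

def gabor_features(gray, orientations=4):
    """Mean and std of the filter response at each orientation."""
    feats = []
    for k in range(orientations):
        kernel = gabor_kernel(theta=k * np.pi / orientations)
        response = fftconvolve(gray, kernel, mode="same")
        feats += [response.mean(), response.std()]
    return np.array(feats)

rng = np.random.default_rng(7)
print(gabor_features(rng.random((128, 128))))
```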

  17. Content based image retrieval using local binary pattern operator and data mining techniques.

    Science.gov (United States)

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from images. These feature vectors globally define the visual content present in an image, defined by e.g., texture, colour, shape, and spatial relations between vectors. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then subsequently used to build an ultrasound image database, and a database with images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
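
    The feature vectors above are built from the Local Binary Pattern operator. A compact numpy sketch of the basic 8-neighbour LBP and its 256-bin histogram follows; this is the plain LBP variant only, whereas the study compares several LBP variants before choosing one.

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 3x3 (8-neighbour) LBP: each interior pixel is encoded by an 8-bit
    code recording which neighbours are >= the centre; returns a normalized
    256-bin histogram of those codes."""
    g = gray.astype(float)
    c = g[1:-1, 1:-1]
    # Neighbours listed clockwise from the top-left, each shifted into place.
    neighbours = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
                  g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        codes |= ((nb >= c).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(8)
print(lbp_histogram(rng.integers(0, 256, size=(64, 64))).shape)   # (256,)
```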

  18. Creating a large-scale content-based airphoto image digital library.

    Science.gov (United States)

    Zhu, B; Ramsey, M; Chen, H

    2000-01-01

    This paper describes a content-based image retrieval digital library that supports geographical image retrieval over a testbed of 800 aerial photographs, each 25 megabytes in size. In addition, this paper also introduces a methodology to evaluate the performance of the algorithms in the prototype system. There are two major contributions: we suggest an approach that incorporates various image processing techniques including Gabor filters, image enhancement and image compression, as well as information analysis techniques such as the self-organizing map (SOM) into an effective large-scale geographical image retrieval system. We present two experiments that evaluate the performance of the Gabor-filter-extracted features along with the corresponding similarity measure against that of human perception, addressing the lack of studies in assessing the consistency between an image representation algorithm or an image categorization method and human mental model.

  19. Automating Shallow Seismic Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Steeples, Don W.

    2004-12-09

    This seven-year, shallow-seismic reflection research project had the aim of improving geophysical imaging of possible contaminant flow paths. Thousands of chemically contaminated sites exist in the United States, including at least 3,700 at Department of Energy (DOE) facilities. Imaging technologies such as shallow seismic reflection (SSR) and ground-penetrating radar (GPR) sometimes are capable of identifying geologic conditions that might indicate preferential contaminant-flow paths. Historically, SSR has been used very little at depths shallower than 30 m, and even more rarely at depths of 10 m or less. Conversely, GPR is rarely useful at depths greater than 10 m, especially in areas where clay or other electrically conductive materials are present near the surface. Efforts to image the cone of depression around a pumping well using seismic methods were only partially successful (for complete references of all research results, see the full Final Technical Report, DOE/ER/14826-F), but peripheral results included development of SSR methods for depths shallower than one meter, a depth range that had not been achieved before. Imaging at such shallow depths, however, requires geophone intervals of the order of 10 cm or less, which makes such surveys very expensive in terms of human time and effort. We also showed that SSR and GPR could be used in a complementary fashion to image the same volume of earth at very shallow depths. The primary research focus of the second three-year period of funding was to develop and demonstrate an automated method of conducting two-dimensional (2D) shallow-seismic surveys with the goal of saving time, effort, and money. Tests involving the second generation of the hydraulic geophone-planting device dubbed the ''Autojuggie'' showed that large numbers of geophones can be placed quickly and automatically and can acquire high-quality data, although not under rough topographic conditions. In some easy

  20. Texture based feature extraction methods for content based medical image retrieval systems.

    Science.gov (United States)

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content based image retrieval (CBIR) systems used for image archiving continues and remains one of the important research topics. Although some studies have addressed general image archiving, the proposed CBIR systems for archiving medical images are not very efficient. In the presented study, the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems is examined. The algorithms investigated in this study are the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and the Gabor wavelet, accepted as spatial methods. In the experiments, a database is built including hundreds of medical images such as brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from the GLCM are satisfactory; however, it is observed that the Gabor wavelet is the most effective and accurate method.
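
    Of the three spatial methods compared above, the GLCM is the simplest to sketch. The snippet below builds a grey-level co-occurrence matrix for a single pixel offset and derives the usual contrast, energy and homogeneity statistics; the offset, number of grey levels and choice of statistics are assumptions for illustration.

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Normalized co-occurrence matrix for pixel pairs separated by (dy, dx)."""
    q = (gray.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)                  # count co-occurring grey-level pairs
    return m / m.sum()

def glcm_stats(p):
    """Contrast, energy and homogeneity of a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

rng = np.random.default_rng(9)
print(glcm_stats(glcm(rng.integers(0, 256, size=(128, 128)))))
```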

  1. AN EFFICIENT/ENHANCED CONTENT BASED IMAGE RETRIEVAL FOR A COMPUTATIONAL ENGINE

    Directory of Open Access Journals (Sweden)

    K. V. Shriram

    2014-01-01

    Full Text Available A picture or image is worth a thousand words, which is very pertinent to the field of image processing. In recent years, advances in VLSI technology have made powerful processors abundantly available in the market. With the price of RAM having come down, databases can be used to store information about art works, medical images like CT scans, satellite images, nature photography, album images, and images of convicts (criminals) for security purposes, giving rise to massive and diverse image collections. This leads us to the problem of retrieving relevant images from a huge database with a diverse image collection. Web search engines are expected to deliver flawless results in a short span of time, with both accuracy and speed, and an image search engine comes under the same roof: the results of an image search should match the best available images in the database. Content Based Image Retrieval (CBIR) has been proposed to enable such image search engines to return impeccable results. In CBIR technology, using only color and texture as parameters for zeroing in on an image may not help in fetching the best result, and most existing systems use keyword based search, which can yield inappropriate results. All the above mentioned drawbacks in CBIR have been addressed in this research. A complete analysis of CBIR, including a combination of features, has been carried out, implemented and tested.

  2. Content-Based Image Retrieval using Color Moment and Gabor Texture Feature

    Directory of Open Access Journals (Sweden)

    K. Hemachandran

    2012-09-01

    Full Text Available Content based image retrieval (CBIR has become one of the most active research areas in the past few years. Many indexing techniques are based on global feature distributions. However, these global distributions have limited discriminating power because they are unable to capture local image information. In this paper, we propose a content-based image retrieval method which combines color and texture features. To improve the discriminating power of color indexing techniques, we encode a minimal amount of spatial information in the color index. As its color features, an image is divided horizontally into three equal non-overlapping regions. From each region in the image, we extract the first three moments of the color distribution, from each color channel and store them in the index i.e., for a HSV color space, we store 27 floating point numbers per image. As its texture feature, Gabor texture descriptors are adopted. We assign weights to each feature respectively and calculate the similarity with combined features of color and texture using Canberra distance as similarity measure. Experimental results show that the proposed method has higher retrieval accuracy than other conventional methods combining color moments and texture features based on global features approach.
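
    The color part of the index above stores the first three moments of each channel in three horizontal regions, and similarity is measured with the Canberra distance. A short numpy sketch of exactly those two pieces follows; the weighting between color and Gabor texture features described in the abstract is omitted.

```python
import numpy as np

def color_moments(hsv):
    """First three moments (mean, std, skewness) per channel, computed in each
    of three equal horizontal regions: 3 regions x 3 channels x 3 moments = 27."""
    h = hsv.shape[0]
    feats = []
    for r in range(3):
        region = hsv[r * h // 3:(r + 1) * h // 3].reshape(-1, 3).astype(float)
        mean = region.mean(axis=0)
        std = region.std(axis=0) + 1e-12
        skew = np.cbrt(((region - mean) ** 3).mean(axis=0))
        feats += [mean, std, skew]
    return np.concatenate(feats)

def canberra(a, b):
    denom = np.abs(a) + np.abs(b)
    denom[denom == 0] = 1.0                  # avoid division by zero
    return np.sum(np.abs(a - b) / denom)

rng = np.random.default_rng(10)
fa = color_moments(rng.random((90, 120, 3)))
fb = color_moments(rng.random((90, 120, 3)))
print(fa.shape, canberra(fa, fb))            # (27,) and a scalar distance
```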

  3. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Manyu Xiao

    2013-01-01

    Full Text Available Content-based image retrieval is nowadays one of the possible and promising solutions to manage image databases effectively. However, with the large number of images, there still exists a great discrepancy between the users’ expectations (accuracy and efficiency and the real performance in image retrieval. In this work, new optimization strategies are proposed on vocabulary tree building, retrieval, and matching methods. More precisely, a new clustering strategy combining classification and conventional K-Means method is firstly redefined. Then a new matching technique is built to eliminate the error caused by large-scaled scale-invariant feature transform (SIFT. Additionally, a new unit mechanism is proposed to reduce the cost of indexing time. Finally, the numerical results show that excellent performances are obtained in both accuracy and efficiency based on the proposed improvements for image retrieval.

  4. Efficient content-based low-altitude images correlated network and strips reconstruction

    Science.gov (United States)

    He, Haiqing; You, Qi; Chen, Xiaoyong

    2017-01-01

    The manual intervention method is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry, but manual intervention is clearly not the route to fully automatic photogrammetric data processing. In this paper, we explore a content-based approach for strip reconstruction that requires no manual intervention or external information. Feature descriptors of local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded with the TF-IDF numerical statistical algorithm to generate a new representation for each low-altitude image. An image correlation network is then reconstructed using similarity measures, image matching and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and gradually growing adjacent images. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide rough relative orientation for further aerial triangulation.

  5. Content Based Image Retrieval Using Exact Legendre Moments and Support Vector Machine

    CERN Document Server

    Rao, Ch Srinivasa; Mohan, B Chandra; 10.5121/ijma.2010.2206

    2010-01-01

    Content Based Image Retrieval (CBIR) systems based on shape using invariant image moments, viz., Moment Invariants (MI) and Zernike Moments (ZM) are available in the literature. MI and ZM are good at representing the shape features of an image. However, non-orthogonality of MI and poor reconstruction of ZM restrict their application in CBIR. Therefore, an efficient and orthogonal moment based CBIR system is needed. Legendre Moments (LM) are orthogonal, computationally faster, and can represent image shape features compactly. CBIR system using Exact Legendre Moments (ELM) for gray scale images is proposed in this work. Superiority of the proposed CBIR system is observed over other moment based methods, viz., MI and ZM in terms of retrieval efficiency and retrieval time. Further, the classification efficiency is improved by employing Support Vector Machine (SVM) classifier. Improved retrieval results are obtained over existing CBIR algorithm based on Stacked Euler Vector (SERVE) combined with Modified Moment In...

  6. A COMPARATIVE STUDY OF DIMENSION REDUCTION TECHNIQUES FOR CONTENT-BASED IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    G. Sasikala

    2010-08-01

    Full Text Available Efficient and effective retrieval techniques of images are desired because of the explosive growth of digital images. Content-based image retrieval is a promising approach because of its automatic indexing and retrieval based on their semantic features and visual appearance. This paper discusses the method for dimensionality reduction called Maximum Margin Projection (MMP. MMP aims at maximizing the margin between positive and negative sample at each neighborhood. It is designed for discovering the local manifold structure. Therefore, MMP is likely to be more suitable for image retrieval systems, where nearest neighbor search is usually involved. The performance of these approaches is measured by a user evaluation. It is found that the MMP based technique provides more functionalities and capabilities to support the features of information seeking behavior and produces better performance in searching images.

  7. Exploring access to scientific literature using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Antani, Sameer; Long, Rodney

    2007-03-01

    The number of articles published in the scientific medical literature is continuously increasing, and Web access to the journals is becoming common. Databases such as SPIE Digital Library, IEEE Xplore, indices such as PubMed, and search engines such as Google provide the user with sophisticated full-text search capabilities. However, information in images and graphs within these articles is entirely disregarded. In this paper, we quantify the potential impact of using content-based image retrieval (CBIR) to access this non-text data. Based on the Journal Citations Report (JCR), the journal Radiology was selected for this study. In 2005, 734 articles were published electronically in this journal. This included 2,587 figures, which yields a rate of 3.52 figures per article. Furthermore, 56.4% of these figures are composed of several individual panels, i.e. the figure combines different images and/or graphs. According to the Image Cross-Language Evaluation Forum (ImageCLEF), the error rate of automatic identification of medical images is about 15%. Therefore, it is expected that, by applying ImageCLEF-like techniques, already 95.5% of articles could be retrieved by means of CBIR. The challenge for CBIR in scientific literature, however, is the use of local texture properties to analyze individual image panels in composite illustrations. Using local features for content-based image representation, 8.81 images per article are available, and the predicted correctness rate may increase to 98.3%. From this study, we conclude that CBIR may have a high impact in medical literature research and suggest that additional research in this area is warranted.

  8. Content Based Image Retrieval using Novel Gaussian Fuzzy Feed Forward-Neural Network

    Directory of Open Access Journals (Sweden)

    C. R.B. Durai

    2011-01-01

    Full Text Available Problem statement: With the extensive digitization of images, diagrams and paintings, traditional keyword based search has been found to be inefficient for retrieval of the required data. A Content-Based Image Retrieval (CBIR) system responds to image queries as input and relies on image content, using techniques from computer vision and image processing to interpret and understand it, while using techniques from information retrieval and databases to rapidly locate and retrieve images suiting an input query. CBIR finds extensive applications in the field of medicine as it assists a doctor to make better decisions by referring to the CBIR system and gaining confidence. Approach: Various methods have been proposed for CBIR using low level image features like histograms, color layout, texture, and analysis of the image in the frequency domain. Similarly, various classification algorithms like the Naive Bayes classifier, Support Vector Machine, decision tree induction algorithms and neural network based classifiers have been studied extensively. We propose to extract features from an image using the Discrete Cosine Transform, select relevant features using information gain, and classify with a Gaussian Fuzzy Feed Forward Neural Network algorithm. Results and Conclusion: We apply the proposed procedure to 180 brain MRI images, of which 72 images were used for testing and the remaining for training. The classification accuracy obtained was 95.83% for a three-class problem. This research focused on a narrow search; further investigation is needed to evaluate larger classes.
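
    The feature-extraction step above takes Discrete Cosine Transform coefficients before feature selection and classification. A small scipy sketch of extracting the low-frequency corner of the 2-D DCT as a feature vector follows; the number of coefficients kept is an assumption, and the information-gain selection and fuzzy neural network stages are not shown.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """Orthonormal 2-D DCT-II, applied along rows and then columns."""
    return dct(dct(block, type=2, norm="ortho", axis=0), type=2, norm="ortho", axis=1)

def dct_features(gray, keep=8):
    """Keep the top-left keep x keep low-frequency coefficients as features."""
    coeffs = dct2(gray.astype(float))
    return coeffs[:keep, :keep].ravel()

rng = np.random.default_rng(11)
mri = rng.random((128, 128))            # stands in for a grayscale brain MRI slice
print(dct_features(mri).shape)          # (64,)
```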

  9. Semantic query processing and annotation generation for content-based retrieval of histological images

    Science.gov (United States)

    Tang, Lilian H.; Hanka, Rudolf; Ip, Horace H. S.; Cheung, Kent K. T.; Lam, Ringo

    2000-05-01

    In this paper we present a semantic content representation scheme and the associated techniques for supporting (1) query by image examples or by natural language in a histological image database and (2) automatic annotation generation for images through image semantic analysis. In this research, various types of query are analyzed by either a semantic analyzer or a natural language analyzer to extract high level concepts and histological information, which are subsequently converted into an internal semantic content representation structure code-named 'Papillon.' Papillon serves not only as an intermediate representation scheme but also stores the semantic content of the image that will be used to match against the semantic index structure within the image database during query processing. During the image database population phase, all images that are going to be put into the database will go through the same processing so that every image would have its semantic content represented by a Papillon structure. Since the Papillon structure for an image contains high level semantic information of the image, it forms the basis of the technique that automatically generates textual annotation for the input images. Papillon bridges the gap between different media in the database, allows complicated intelligent browsing to be carried out efficiently, and also provides a well- defined semantic content representation scheme for different content processing engines developed for content-based retrieval.

  10. A Content based CT Lung Image Retrieval by DCT Matrix and Feature Vector Technique

    Directory of Open Access Journals (Sweden)

    J.Bridget Nirmala

    2012-03-01

    Full Text Available Most image retrieval systems are still incapable of providing retrieval results with high retrieval accuracy and low computational complexity. This paper presents an image retrieval technique to retrieve similar and relevant Computed Tomography (CT) images of the lung from a large database of images. During retrieval, a query image which contains the affected or abnormal region is given as input, and similar images containing an affected or abnormal region are retrieved from the database. The DCT matrix (DCTM) is a commonly used feature representation in image retrieval. This paper describes a content based image retrieval (CBIR) approach that represents each image in the database by a vector of feature values called the DCT vector matrix (8x8). The row and column feature vector values of the DCTM for the query image are compared with the existing database to cull out the most similar and relevant images. The experimental results show that 97% of images can be retrieved correctly using this technique.

  11. Natural Language Processing Versus Content-Based Image Analysis for Medical Document Retrieval.

    Science.gov (United States)

    Névéol, Aurélie; Deserno, Thomas M; Darmoni, Stéfan J; Güld, Mark Oliver; Aronson, Alan R

    2008-09-18

    One of the most significant recent advances in health information systems has been the shift from paper to electronic documents. While research on automatic text and image processing has taken separate paths, there is a growing need for joint efforts, particularly for electronic health records and biomedical literature databases. This work aims at comparing text-based versus image-based access to multimodal medical documents using state-of-the-art methods of processing text and image components. A collection of 180 medical documents containing an image accompanied by a short text describing it was divided into training and test sets. Content-based image analysis and natural language processing techniques are applied individually and combined for multimodal document analysis. The evaluation consists of an indexing task and a retrieval task based on the "gold standard" codes manually assigned to corpus documents. The performance of text-based and image-based access, as well as combined document features, is compared. Image analysis proves more adequate for both the indexing and retrieval of the images. In the indexing task, multimodal analysis outperforms both independent image and text analysis. This experiment shows that text describing images can be usefully analyzed in the framework of a hybrid text/image retrieval system.

  12. Feature Extraction with Ordered Mean Values for Content Based Image Classification

    Directory of Open Access Journals (Sweden)

    Sudeep Thepade

    2014-01-01

    Full Text Available Categorization of images into meaningful classes by efficient extraction of feature vectors from image datasets has been dependent on feature selection techniques. Traditionally, feature vector extraction has been carried out using different methods of image binarization done with selection of global, local, or mean threshold. This paper has proposed a novel technique for feature extraction based on ordered mean values. The proposed technique was combined with feature extraction using discrete sine transform (DST for better classification results using multitechnique fusion. The novel methodology was compared to the traditional techniques used for feature extraction for content based image classification. Three benchmark datasets, namely, Wang dataset, Oliva and Torralba (OT-Scene dataset, and Caltech dataset, were used for evaluation purpose. Performance measure after evaluation has evidently revealed the superiority of the proposed fusion technique with ordered mean values and discrete sine transform over the popular approaches of single view feature extraction methodologies for classification.

  13. User-Based Interaction for Content-Based Image Retrieval by Mining User Navigation Patterns.

    Directory of Open Access Journals (Sweden)

    A. Srinagesh

    2013-09-01

    Full Text Available In Internet, multimedia, and image databases, image searching is a necessity. Content-Based Image Retrieval (CBIR) is an approach for image retrieval. When user interaction is included in CBIR through Relevance Feedback (RF) techniques, obtaining results by issuing many iterative feedback rounds over large databases is not efficient for real-time applications. So, we propose a new approach which converges rapidly and can aptly be called Navigation Pattern-Based Relevance Feedback (NPRF) with a user-based interaction mode. We combined NPRF with three RF techniques, viz., Query Re-weighting (QR), Query Expansion (QEX) and Query Point Movement (QPM). By using these three techniques, efficient results are obtained with a small number of feedback rounds. The efficiency of the proposed method is demonstrated by calculating precision, recall and evaluation measures.

  14. A Survey On: Content Based Image Retrieval Systems Using Clustering Techniques For Large Data sets

    Directory of Open Access Journals (Sweden)

    Monika Jain

    2011-12-01

    Full Text Available Content-based image retrieval (CBIR) is a relatively new but widely adopted method for finding images in vast and unannotated image databases. As networks and multimedia technologies become more popular, users are not satisfied with traditional information retrieval techniques, so nowadays content based image retrieval is becoming a source of exact and fast retrieval. In recent years, a variety of techniques have been developed to improve the performance of CBIR. Data clustering is an unsupervised method for extracting hidden patterns from huge data sets. With large data sets there is a possibility of high dimensionality, and achieving both accuracy and efficiency for high dimensional data sets with an enormous number of samples is a challenging arena. In this paper clustering techniques are discussed and analysed. We also propose a method, HDK, that uses more than one clustering technique to improve the performance of CBIR. This method makes use of hierarchical and divide-and-conquer K-Means clustering, together with equivalency and compatible relation concepts, to improve the performance of K-Means for use on high dimensional datasets. It also introduces features like color, texture and shape for an accurate and effective retrieval system.

  15. A novel evolutionary approach for optimizing content-based image indexing algorithms.

    Science.gov (United States)

    Saadatmand-Tarzjan, Mahdi; Moghaddam, Hamid Abrishami

    2007-02-01

    Optimization of content-based image indexing and retrieval (CBIR) algorithms is a complicated and time-consuming task since each time a parameter of the indexing algorithm is changed, all images in the database should be indexed again. In this paper, a novel evolutionary method called evolutionary group algorithm (EGA) is proposed for complicated time-consuming optimization problems such as finding optimal parameters of content-based image indexing algorithms. In the new evolutionary algorithm, the image database is partitioned into several smaller subsets, and each subset is used by an updating process as training patterns for each chromosome during evolution. This is in contrast to genetic algorithms that use the whole database as training patterns for evolution. Additionally, for each chromosome, a parameter called age is defined that implies the progress of the updating process. Similarly, the genes of the proposed chromosomes are divided into two categories: evolutionary genes that participate to evolution and history genes that save previous states of the updating process. Furthermore, a new fitness function is defined which evaluates the fitness of the chromosomes of the current population with different ages in each generation. We used EGA to optimize the quantization thresholds of the wavelet-correlogram algorithm for CBIR. The optimal quantization thresholds computed by EGA improved significantly all the evaluation measures including average precision, average weighted precision, average recall, and average rank for the wavelet-correlogram method.

  16. Content Based Radiographic Images Indexing and Retrieval Using Pattern Orientation Histogram

    Directory of Open Access Journals (Sweden)

    Abolfazl Lakdashti

    2008-06-01

    Full Text Available Introduction: Content Based Image Retrieval (CBIR) is a method of image searching and retrieval in a database. In medical applications, CBIR is a tool used by physicians to compare the previous and current medical images associated with patients' pathological conditions. As the volume of pictorial information stored in medical image databases grows, efficient image indexing and retrieval is increasingly becoming a necessity. Materials and Methods: This paper presents a new content based radiographic image retrieval approach based on a histogram of pattern orientations, namely the pattern orientation histogram (POH). POH represents the spatial distribution of five different pattern orientations: vertical, horizontal, diagonal down/left, diagonal down/right and non-orientation. In this method, a given image is first divided into image-blocks and the frequency of each type of pattern is determined in each image-block. Then, local pattern histograms for each of these image-blocks are computed. Results: The method was compared to two well known texture-based image retrieval methods: Tamura and the Edge Histogram Descriptor (EHD) in the MPEG-7 standard. Experimental results based on the 10000-image IRMA radiography dataset demonstrate that POH provides better precision and recall rates than Tamura and EHD. For some images, the recall and precision rates obtained by POH are, respectively, 48% and 18% better than the best of the two above mentioned methods. Discussion and Conclusion: Since we exploit the absolute location of the pattern in the image as well as its global composition, the proposed matching method can retrieve semantically similar medical images.
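
    The pattern orientation histogram above counts five pattern types (vertical, horizontal, two diagonals, non-oriented) inside each image block. The sketch below only approximates that idea with a simple gradient-based orientation classification per block: the block grid, magnitude threshold and use of raw gradients instead of local patterns are all assumptions and differ from the original method.

```python
import numpy as np

def orientation_block_histogram(gray, blocks=4, mag_thresh=10.0):
    """Five-bin orientation histogram (four 45-degree orientation bins plus a
    'non-oriented' bin for weak gradients) per block, concatenated over a
    blocks x blocks grid."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # orientation in [0, pi)
    labels = np.floor((ang + np.pi / 8) / (np.pi / 4)).astype(int) % 4
    labels[mag < mag_thresh] = 4                       # weak gradient -> non-oriented
    h, w = gray.shape
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            blk = labels[i * h // blocks:(i + 1) * h // blocks,
                         j * w // blocks:(j + 1) * w // blocks]
            hist = np.bincount(blk.ravel(), minlength=5).astype(float)
            feats.append(hist / hist.sum())
    return np.concatenate(feats)

rng = np.random.default_rng(12)
print(orientation_block_histogram(rng.integers(0, 256, (128, 128))).shape)  # (80,)
```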

  17. Content-based high-resolution remote sensing image retrieval with local binary patterns

    Science.gov (United States)

    Wang, A. P.; Wang, S. G.

    2006-10-01

    Texture is a very important feature in image analysis, including content-based image retrieval (CBIR). A common way of retrieving images is to calculate the similarity of features between a sample image and the other images in a database. This paper applies a novel texture analysis approach, the local binary pattern (LBP) operator, to the retrieval of 1 m Ikonos images and presents an improved LBP histogram, the spatially enhanced LBP (SEL) histogram, which adds spatial information by dividing the LBP-labeled image into k*k regions. First, different neighborhood sizes P and scale factors R were chosen to scan the whole images, and their labeled LBP and local variance (VAR) images were calculated, from which the LBP, LBP/VAR, VAR and SEL histograms were obtained. The histograms were used as the features for CBIR, and a non-parametric statistical test, the G-statistic, was used as the similarity measure. The results showed that LBP/VAR based features achieved a very high retrieval rate for certain values of P and R, and that SEL features, which are more robust to illumination changes than LBP/VAR, also obtained a higher retrieval rate than plain LBP histograms. The comparison to Gabor filters confirmed the effectiveness of the presented approach for CBIR.

  18. Automated Orientation of Aerial Images

    DEFF Research Database (Denmark)

    Høhle, Joachim

    2002-01-01

    Methods for automated orientation of aerial images are presented. They are based on the use of templates, which are derived from existing databases, and area-based matching. The characteristics of available database information and the accuracy requirements for map compilation and orthoimage...

  19. An Extended Image Hashing Concept: Content-Based Fingerprinting Using FJLT

    Directory of Open Access Journals (Sweden)

    Xudong Lv

    2009-01-01

    Full Text Available Dimension reduction techniques, such as singular value decomposition (SVD) and nonnegative matrix factorization (NMF), have been successfully applied in image hashing by retaining the essential features of the original image matrix. However, a concern of great importance in image hashing is that no single solution is optimal and robust against all types of attacks. The contribution of this paper is threefold. First, we introduce a recently proposed dimension reduction technique, referred to as the Fast Johnson-Lindenstrauss Transform (FJLT), and propose the use of FJLT for image hashing. FJLT shares the low distortion characteristics of a random projection, but requires much lower computational complexity. Secondly, we incorporate the Fourier-Mellin transform into FJLT hashing to improve its performance under rotation attacks. Thirdly, we propose a new concept, namely, the content-based fingerprint, as an extension of image hashing by combining different hashes. Such a combined approach is capable of tackling all types of attacks and thus can yield a better overall performance in multimedia identification. To demonstrate the superior performance of the proposed schemes, receiver operating characteristics analysis over a large image database and a large class of distortions is performed and compared with the state-of-the-art image hashing using NMF.
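
    The dimension-reduction step at the heart of FJLT hashing can be illustrated with a plain random projection. The sketch below is a simplified stand-in (random sign flips plus a sparse Gaussian projection) rather than the full FJLT with the Walsh-Hadamard transform, and the hash length and density are assumed parameters.

        import numpy as np

        def random_projection_hash(feature_vec, out_dim=64, density=0.1, seed=0):
            """Simplified FJLT-style hash: random sign diagonal followed by a sparse Gaussian projection, then binarized."""
            rng = np.random.default_rng(seed)
            d = feature_vec.size
            signs = rng.choice([-1.0, 1.0], size=d)               # random diagonal of +/-1 (the "D" step)
            mask = rng.random((out_dim, d)) < density             # sparse support (the "P" step)
            proj = np.where(mask, rng.standard_normal((out_dim, d)), 0.0)
            reduced = proj @ (signs * feature_vec)
            return (reduced > np.median(reduced)).astype(np.uint8)

        # Usage idea: hash two perceptually similar feature vectors and compare their Hamming distance.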

  20. Prospective Study for Semantic Inter-Media Fusion in Content-Based Medical Image Retrieval

    CERN Document Server

    Teodorescu, Roxana; Leow, Wee-Kheng; Cretu, Vladimir

    2008-01-01

    One important challenge in modern Content-Based Medical Image Retrieval (CBMIR) approaches is represented by the semantic gap, related to the complexity of medical knowledge. Among the methods that are able to close this gap in CBMIR, the use of medical thesauri/ontologies offers interesting perspectives, due to the possibility of accessing continuously updated, relevant on-line web services and extracting structured medical semantic information in real time. The CBMIR approach proposed in this paper uses the Unified Medical Language System's (UMLS) Metathesaurus to perform a semantic indexing and fusion of medical media. This fusion operates before the query processing (retrieval) and works at a UMLS-compliant conceptual indexing level. Our purpose is to study various techniques related to semantic data alignment, preprocessing, fusion, clustering and retrieval, by evaluating the various techniques and highlighting future research directions. The alignment and the preprocessing are based on partial text/image retrieval feedb...

  1. Content-Based Image Retrieval using Local Features Descriptors and Bag-of-Visual Words

    Directory of Open Access Journals (Sweden)

    Mohammed Alkhawlani

    2015-09-01

    Full Text Available Image retrieval is still an active research topic in the computer vision field. Several techniques exist to retrieve visual data from large databases. Bag-of-Visual-Words (BoVW) is a visual feature descriptor that can be used successfully in Content-Based Image Retrieval (CBIR) applications. In this paper, we present an image retrieval system that uses local feature descriptors and the BoVW model to efficiently and accurately retrieve similar images from standard databases. The proposed system uses the SIFT and SURF techniques as local descriptors to produce image signatures that are invariant to rotation and scale. In addition, it uses K-Means as a clustering algorithm to build a visual vocabulary from the feature descriptors obtained by the local descriptor techniques. To efficiently retrieve more images relevant to the query, an SVM classifier is used. The performance of the proposed system is evaluated by calculating both precision and recall. The experimental results reveal that this system performs well on two different standard datasets.
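
    A compact sketch of the BoVW pipeline described above, assuming OpenCV (with SIFT available, OpenCV 4.4+) and scikit-learn; the vocabulary size and image paths are placeholders. The resulting word histograms could then be fed to a classifier such as an SVM, as the paper does.

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        def bovw_signatures(image_paths, n_words=200, seed=0):
            """Extract SIFT descriptors, build a visual vocabulary with K-Means, return per-image word histograms."""
            sift = cv2.SIFT_create()
            per_image, all_desc = [], []
            for path in image_paths:
                gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
                _, desc = sift.detectAndCompute(gray, None)
                desc = desc if desc is not None else np.empty((0, 128), np.float32)
                per_image.append(desc)
                all_desc.append(desc)
            vocab = KMeans(n_clusters=n_words, random_state=seed).fit(np.vstack(all_desc))
            hists = []
            for desc in per_image:
                words = vocab.predict(desc) if len(desc) else np.empty(0, dtype=int)
                hist = np.bincount(words, minlength=n_words).astype(float)
                hists.append(hist / max(hist.sum(), 1.0))          # normalized visual-word histogram
            return np.array(hists), vocab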

  2. Local texton XOR patterns: A new feature descriptor for content-based image retrieval

    Directory of Open Access Journals (Sweden)

    Anu Bala

    2016-03-01

    Full Text Available In this paper, a novel feature descriptor, local texton XOR patterns (LTxXORP), is proposed for content-based image retrieval. The proposed method collects the texton XOR pattern, which captures the structure of the query image or database image. First, the RGB (red, green, blue) color image is converted into the HSV (hue, saturation, value) color space. Second, the V component is divided into overlapping subblocks of size 2 × 2 and textons are collected based on their shape. Then, an exclusive OR (XOR) operation is performed on the texton image between the center pixel and its surrounding neighbors. Finally, the feature vector is constructed based on the LTxXORPs and HSV histograms. The performance of the proposed method is evaluated by testing on the benchmark databases Corel-1K, Corel-5K and Corel-10K in terms of precision, recall, average retrieval precision (ARP) and average retrieval rate (ARR). The results after investigation show a significant improvement as compared to the state-of-the-art features for image retrieval.
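
    The XOR step between a center pixel and its neighbors can be sketched as below. This is a simplified, LBP-like illustration of the idea on a thresholded V channel, not the authors' full texton construction; the thresholding rule is an assumption.

        import numpy as np

        def xor_pattern_map(v_channel):
            """Toy XOR coding: binarize V against its mean, XOR the center bit with its 8 neighbors into an 8-bit code, and histogram the codes."""
            b = (v_channel > v_channel.mean()).astype(np.uint8)
            c = b[1:-1, 1:-1]
            neighbours = [b[0:-2, 0:-2], b[0:-2, 1:-1], b[0:-2, 2:], b[1:-1, 2:],
                          b[2:, 2:], b[2:, 1:-1], b[2:, 0:-2], b[1:-1, 0:-2]]
            code = np.zeros_like(c, dtype=np.uint16)
            for bit, n in enumerate(neighbours):
                code |= (np.bitwise_xor(c, n).astype(np.uint16) << bit)
            hist = np.bincount(code.ravel(), minlength=256).astype(float)
            return hist / hist.sum()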

  3. A web-accessible content-based cervicographic image retrieval system

    Science.gov (United States)

    Xue, Zhiyun; Long, L. Rodney; Antani, Sameer; Jeronimo, Jose; Thoma, George R.

    2008-03-01

    Content-based image retrieval (CBIR) is the process of retrieving images by directly using image visual characteristics. In this paper, we present a prototype system implemented for CBIR for a uterine cervix image (cervigram) database. This cervigram database is a part of data collected in a multi-year longitudinal effort by the National Cancer Institute (NCI), and archived by the National Library of Medicine (NLM), for the study of the origins of, and factors related to, cervical precancer/cancer. Users may access the system with any Web browser. The system is built with a distributed architecture which is modular and expandable; the user interface is decoupled from the core indexing and retrieving algorithms, and uses open communication standards and open source software. The system tries to bridge the gap between a user's semantic understanding and image feature representation, by incorporating the user's knowledge. Given a user-specified query region, the system returns the most similar regions from the database, with respect to attributes of color, texture, and size. Experimental evaluation of the retrieval performance of the system on "groundtruth" test data illustrates its feasibility to serve as a possible research tool to aid the study of the visual characteristics of cervical neoplasia.

  4. Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching

    Directory of Open Access Journals (Sweden)

    Tianyang Cao

    2017-01-01

    Full Text Available Self-localization and mapping are important for indoor mobile robots. We report a robust algorithm for map building and subsequent localization, especially suited for indoor floor-cleaning robots. Common methods such as SLAM can easily be thrown off by collisions (the kidnapped-robot situation) or disturbed by similar objects. Therefore, a keyframes global map establishing method for robot localization in multiple rooms and corridors is needed. Content-based image matching is the core of this method. It is designed for this situation by establishing keyframes containing both floor and distorted wall images. Image distortion, caused by the robot's view angle and movement, is analyzed and derived, and an image matching solution is presented, consisting of extraction of the overlap regions of keyframes and rebuilding of the overlap regions through subblock matching. To improve accuracy, ceiling-point detection and mismatched-subblock checking are incorporated. This matching method can process environment video effectively. In experiments, less than 5% of frames are extracted as keyframes to build the global map; these keyframes span large spatial distances and overlap each other. Through this method, the robot can localize itself by matching its real-time vision frames with the keyframe map. Even with many similar objects/backgrounds in the environment, or when the robot is kidnapped, localization is achieved with a position RMSE <0.5 m.

  5. Content-based image retrieval for interstitial lung diseases using classification confidence

    Science.gov (United States)

    Dash, Jatindra Kumar; Mukhopadhyay, Sudipta; Prabhakar, Nidhi; Garg, Mandeep; Khandelwal, Niranjan

    2013-02-01

    A Content-Based Image Retrieval (CBIR) system could exploit the wealth of High-Resolution Computed Tomography (HRCT) data stored in the archive by finding similar images, assisting radiologists in self-learning and in the differential diagnosis of Interstitial Lung Diseases (ILDs). HRCT findings of ILDs are classified into several categories (e.g. consolidation, emphysema, ground glass, nodular etc.) based on their texture-like appearances. Therefore, analysis of ILDs is considered a texture analysis problem. Many approaches have been proposed for CBIR of lung images using texture as the primitive visual content. This paper presents a new approach to CBIR for ILDs. The proposed approach makes use of a trained neural network (NN) to find the output class label of the query image. The degree of confidence of the NN classifier is analyzed using a Naive Bayes classifier that dynamically decides the size of the search space to be used for retrieval. The proposed approach is compared with three simple distance-based and one classifier-based texture retrieval approaches. Experimental results show that the proposed technique achieved the highest average precision of 92.60% with the lowest standard deviation of 20.82%.

  6. Content-based image retrieval of digitized histopathology in boosted spectrally embedded spaces

    Science.gov (United States)

    Sridhar, Akshay; Doyle, Scott; Madabhushi, Anant

    2015-01-01

    Context: Content-based image retrieval (CBIR) systems allow for retrieval of images from within a database that are similar in visual content to a query image. This is useful for digital pathology, where text-based descriptors alone might be inadequate to accurately describe image content. By representing images via a set of quantitative image descriptors, the similarity between a query image with respect to archived, annotated images in a database can be computed and the most similar images retrieved. Recently, non-linear dimensionality reduction methods have become popular for embedding high-dimensional data into a reduced-dimensional space while preserving local object adjacencies, thereby allowing for object similarity to be determined more accurately in the reduced-dimensional space. However, most dimensionality reduction methods implicitly assume, in computing the reduced-dimensional representation, that all features are equally important. Aims: In this paper we present boosted spectral embedding(BoSE), which utilizes a boosted distance metric to selectively weight individual features (based on training data) to subsequently map the data into a reduced-dimensional space. Settings and Design: BoSE is evaluated against spectral embedding (SE) (which employs equal feature weighting) in the context of CBIR of digitized prostate and breast cancer histopathology images. Materials and Methods: The following datasets, which were comprised of a total of 154 hematoxylin and eosin stained histopathology images, were used: (1) Prostate cancer histopathology (benign vs. malignant), (2) estrogen receptor (ER) + breast cancer histopathology (low vs. high grade), and (3) HER2+ breast cancer histopathology (low vs. high levels of lymphocytic infiltration). Statistical Analysis Used: We plotted and calculated the area under precision-recall curves (AUPRC) and calculated classification accuracy using the Random Forest classifier. Results: BoSE outperformed SE both in terms of

  7. Content-based image retrieval of digitized histopathology in boosted spectrally embedded spaces

    Directory of Open Access Journals (Sweden)

    Akshay Sridhar

    2015-01-01

    Full Text Available Context : Content-based image retrieval (CBIR) systems allow for retrieval of images from within a database that are similar in visual content to a query image. This is useful for digital pathology, where text-based descriptors alone might be inadequate to accurately describe image content. By representing images via a set of quantitative image descriptors, the similarity between a query image with respect to archived, annotated images in a database can be computed and the most similar images retrieved. Recently, non-linear dimensionality reduction methods have become popular for embedding high-dimensional data into a reduced-dimensional space while preserving local object adjacencies, thereby allowing for object similarity to be determined more accurately in the reduced-dimensional space. However, most dimensionality reduction methods implicitly assume, in computing the reduced-dimensional representation, that all features are equally important. Aims : In this paper we present boosted spectral embedding (BoSE), which utilizes a boosted distance metric to selectively weight individual features (based on training data) to subsequently map the data into a reduced-dimensional space. Settings and Design : BoSE is evaluated against spectral embedding (SE) (which employs equal feature weighting) in the context of CBIR of digitized prostate and breast cancer histopathology images. Materials and Methods : The following datasets, which were comprised of a total of 154 hematoxylin and eosin stained histopathology images, were used: (1) Prostate cancer histopathology (benign vs. malignant), (2) estrogen receptor (ER)+ breast cancer histopathology (low vs. high grade), and (3) HER2+ breast cancer histopathology (low vs. high levels of lymphocytic infiltration). Statistical Analysis Used : We plotted and calculated the area under precision-recall curves (AUPRC) and calculated classification accuracy using the Random Forest classifier. Results : BoSE outperformed SE both

  8. Local tetra patterns: a new feature descriptor for content-based image retrieval.

    Science.gov (United States)

    Murala, Subrahmanyam; Maheshwari, R P; Balasubramanian, R

    2012-05-01

    In this paper, we propose a novel image indexing and retrieval algorithm using local tetra patterns (LTrPs) for content-based image retrieval (CBIR). The standard local binary pattern (LBP) and local ternary pattern (LTP) encode the relationship between the referenced pixel and its surrounding neighbors by computing gray-level difference. The proposed method encodes the relationship between the referenced pixel and its neighbors, based on the directions that are calculated using the first-order derivatives in vertical and horizontal directions. In addition, we propose a generic strategy to compute nth-order LTrP using (n - 1)th-order horizontal and vertical derivatives for efficient CBIR and analyze the effectiveness of our proposed algorithm by combining it with the Gabor transform. The performance of the proposed method is compared with the LBP, the local derivative patterns, and the LTP based on the results obtained using benchmark image databases viz., Corel 1000 database (DB1), Brodatz texture database (DB2), and MIT VisTex database (DB3). Performance analysis shows that the proposed method improves the retrieval result from 70.34%/44.9% to 75.9%/48.7% in terms of average precision/average recall on database DB1, and from 79.97% to 85.30% and 82.23% to 90.02% in terms of average retrieval rate on databases DB2 and DB3, respectively, as compared with the standard LBP.
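
    The first-order direction coding that LTrPs build on can be sketched as follows. The sketch only shows how four directions are derived from the horizontal and vertical first-order derivatives, under assumed sign conventions, and omits the higher-order tetra pattern construction described in the paper.

        import numpy as np

        def first_order_directions(gray):
            """Quantize each pixel into one of four directions from the signs of its horizontal and vertical first-order derivatives."""
            g = gray.astype(float)
            dh = np.zeros_like(g); dv = np.zeros_like(g)
            dh[:, :-1] = g[:, 1:] - g[:, :-1]        # horizontal derivative (right neighbour minus centre)
            dv[:-1, :] = g[1:, :] - g[:-1, :]        # vertical derivative (bottom neighbour minus centre)
            directions = np.empty_like(g, dtype=np.uint8)
            directions[(dh >= 0) & (dv >= 0)] = 1    # quadrant-style coding into directions 1..4
            directions[(dh <  0) & (dv >= 0)] = 2
            directions[(dh <  0) & (dv <  0)] = 3
            directions[(dh >= 0) & (dv <  0)] = 4
            return directions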

  9. Indexing of Content-Based Image Retrieval System with Image Understanding Approach

    Institute of Scientific and Technical Information of China (English)

    李学龙; 刘政凯; 俞能海

    2003-01-01

    This paper presents a novel efficient semantic image classification algorithm for high-level feature indexing of a high-dimension image database. Experiments show that the algorithm performs well. The sizes of the training set and the test set are 7,537 and 5,000, respectively. Based on this theory, another ground truth is built with 12,000 images, divided into three classes: city, landscape and person; the overall classification accuracy is 88.92%. Meanwhile, some preliminary results are presented for image understanding based on semantic image classification and low-level features. The ground truth for the experiments is built with images from the Corel database, photos and some well-known face databases.

  10. AN EFFICIENT CONTENT BASED IMAGE RETRIEVAL USING COLOR AND TEXTURE OF IMAGE SUBBLOCKS

    Directory of Open Access Journals (Sweden)

    CH.KAVITHA,

    2011-02-01

    Full Text Available Image retrieval is an active research area in image processing, pattern recognition, and computer vision. For the purpose of effectively retrieving more similar images from digital image databases, this paper uses local HSV color and gray-level co-occurrence matrix (GLCM) texture features. The image is divided into sub-blocks of equal size. Then the color and texture features of each sub-block are computed. The color of each sub-block is extracted by quantizing the HSV color space into non-equal intervals, and the color feature is represented by a cumulative color histogram. The texture of each sub-block is obtained by using the gray-level co-occurrence matrix. An integrated matching scheme based on the Most Similar Highest Priority (MSHP) principle is used to compare the query and target images. The adjacency matrix of a bipartite graph is formed using the sub-blocks of the query and target images; this matrix is used for matching the images. The Euclidean distance measure is used in retrieving similar images. As the experimental results indicate, the proposed technique indeed outperforms other retrieval schemes in terms of average precision.
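
    A minimal sketch of the per-sub-block feature extraction, assuming OpenCV for the HSV conversion and scikit-image (0.19+, where the function is named graycomatrix) for the GLCM; the quantization levels and block grid are illustrative rather than the paper's settings.

        import cv2
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def subblock_features(bgr_image, grid=4):
            """For each sub-block: a coarse HSV colour histogram plus GLCM contrast/energy/homogeneity."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            h, w = gray.shape
            feats = []
            for i in range(grid):
                for j in range(grid):
                    ys, xs = slice(i*h//grid, (i+1)*h//grid), slice(j*w//grid, (j+1)*w//grid)
                    block = np.ascontiguousarray(hsv[ys, xs])
                    hist = cv2.calcHist([block], [0, 1, 2], None, [8, 3, 3],
                                        [0, 180, 0, 256, 0, 256]).flatten()
                    hist /= hist.sum() + 1e-9
                    glcm = graycomatrix(gray[ys, xs], distances=[1], angles=[0],
                                        levels=256, symmetric=True, normed=True)
                    texture = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "energy", "homogeneity")]
                    feats.append(np.concatenate([hist, texture]))
            return np.concatenate(feats)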

  11. Color Histogram and DBC Co-Occurrence Matrix for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    K. Prasanthi Jasmine

    2014-12-01

    Full Text Available This paper presents the integration of a color histogram and a DBC co-occurrence matrix for content-based image retrieval. The existing DBC collects the directional edges, which are calculated by applying first-order derivatives in the 0°, 45°, 90° and 135° directions. The feature vector length of DBC for a particular direction is 512, which is too large for image retrieval. To avoid this problem, we collect the directional edges by excluding the center pixel and further apply the rotation-invariant property. We then calculate the co-occurrence matrix to form the feature vector. Finally, the HSV color histogram and the DBC co-occurrence matrix are integrated to form the feature database. The retrieval results of the proposed method have been tested by conducting three experiments on the Brodatz and MIT VisTex texture databases and the Corel-1000 natural-image database. The results after being investigated show a significant improvement in terms of the evaluation measures as compared to LBP, DBC and other transform-domain features.

  12. Computer-aided diagnostics of screening mammography using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Soiron, Michael; de Oliveira, Júlia E. E.; de A. Araújo, Arnaldo

    2012-03-01

    Breast cancer is one of the main causes of death among women in occidental countries. In recent years, screening mammography has been established worldwide for early detection of breast cancer, and computer-aided diagnostics (CAD) is being developed to assist physicians reading mammograms. A promising method for CAD is content-based image retrieval (CBIR). Recently, we have developed a classification scheme for suspicious tissue patterns based on the support vector machine (SVM). In this paper, we continue moving towards automatic CAD of screening mammography. The experiments are based on a total of 10,509 radiographs that have been collected from different sources. Of these, 3,375 images are provided with one and 430 radiographs with more than one chain-code annotation of cancerous regions. In different experiments, this data is divided into 12 and 20 classes, distinguishing between four categories of tissue density, three categories of pathology and, in the 20-class problem, two categories of lesion types. Balancing the number of images in each class yields 233 and 45 images remaining in each of the 12 and 20 classes, respectively. Using a two-dimensional principal component analysis, features are extracted from small patches of 128 x 128 pixels and classified by means of an SVM. Overall, the accuracy of the raw classification was 61.6% and 52.1% for the 12- and 20-class problems, respectively. The confusion matrices are assessed for detailed analysis. Furthermore, an implementation of an SVM-based CBIR system for CADx in screening mammography is presented. In conclusion, with smarter patch extraction, the CBIR approach might reach precision rates that are helpful for physicians. This, however, needs more comprehensive evaluation on clinical data.
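
    A sketch of the patch-classification step under assumed data shapes: 128 x 128 patches are flattened, reduced with PCA (standing in for the two-dimensional PCA used in the paper), and classified with an SVM via scikit-learn. Component count and kernel are placeholders.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def patch_classifier_accuracy(patches, labels, n_components=50):
            """patches: array of shape (n_samples, 128, 128); labels: class index per patch."""
            X = patches.reshape(len(patches), -1).astype(float)
            model = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
            return cross_val_score(model, X, labels, cv=5).mean()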

  13. Regional content-based image retrieval for solar images: Traditional versus modern methods

    Science.gov (United States)

    Banda, J. M.; Angryk, R. A.

    2015-11-01

    This work presents an extensive evaluation of conventional (distance-based) versus modern (search-engine) information retrieval techniques in the context of finding similar solar image regions within the Solar Dynamics Observatory (SDO) mission image repository. We compare pre-computed image descriptors (image features) extracted from the SDO mission images in two very different ways: (1) similarity retrieval using multiple distance-based metrics and (2) retrieval using Lucene, a general-purpose scalable retrieval engine. By transforming image descriptors into histogram-like signatures and into Lucene-compatible text strings, we are able to effectively evaluate the retrieval capabilities of both methodologies. Using the image descriptors alongside a labeled image dataset, we present an extensive evaluation, under the criteria of performance, scalability and retrieval precision, of experimental retrieval systems in order to determine which implementation would be ideal for a production-level system. In our analysis we performed key transformations to our sample datasets to properly evaluate rotation invariance and scalability. At the end of this work, after an extensive experimental evaluation, we conclude which technique is the most robust and would yield the best-performing system; we also point out the strengths and weaknesses of each approach and theorize on potential improvements.
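
    The key transformation for the search-engine route is turning a numeric descriptor into text tokens that a system like Lucene can index. The sketch below shows one plausible quantization-to-token scheme; the token format and quantization levels are assumptions, not the authors' exact encoding.

        import numpy as np

        def descriptor_to_tokens(descriptor, levels=8):
            """Quantize each descriptor dimension and emit 'dimIndex_quantizedValue' tokens for a text index."""
            d = np.asarray(descriptor, dtype=float)
            lo, hi = d.min(), d.max()
            q = np.zeros_like(d, dtype=int) if hi == lo else \
                np.clip(((d - lo) / (hi - lo) * levels).astype(int), 0, levels - 1)
            return " ".join(f"f{i}_{v}" for i, v in enumerate(q))

        # e.g. descriptor_to_tokens([0.1, 0.9, 0.5]) -> "f0_0 f1_7 f2_4"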

  14. Using an image-extended relational database to support content-based image retrieval in a PACS.

    Science.gov (United States)

    Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M

    2005-12-01

    This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. The images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed aiming at efficiently answering similarity queries by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. Currently, the implemented system works on features based on the color distribution of the images, through normalized histograms as well as metric histograms. Metric histograms are invariant to scale, translation and rotation of images, and also to brightness transformations. The cbPACS is prepared to integrate new image features, based on the texture and shape of the main objects in the image.
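
    The idea of defining distance functions inside the relational manager can be mimicked in plain Python with SQLite's user-defined functions. This is only an illustration of the concept (SQLite plus a simple L1 histogram distance), not the extended SQL or indexing methods of cbPACS.

        import json
        import sqlite3

        def l1_distance(hist_a_json, hist_b_json):
            """L1 distance between two normalized histograms stored as JSON text."""
            a, b = json.loads(hist_a_json), json.loads(hist_b_json)
            return sum(abs(x - y) for x, y in zip(a, b))

        conn = sqlite3.connect(":memory:")
        conn.create_function("HIST_DIST", 2, l1_distance)
        conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, hist TEXT)")
        conn.executemany("INSERT INTO images (hist) VALUES (?)",
                         [(json.dumps([0.2, 0.8]),), (json.dumps([0.5, 0.5]),), (json.dumps([0.9, 0.1]),)])
        query = json.dumps([0.25, 0.75])
        # k-nearest-neighbor style similarity query expressed in ordinary SQL
        rows = conn.execute(
            "SELECT id, HIST_DIST(hist, ?) AS d FROM images ORDER BY d LIMIT 2", (query,)
        ).fetchall()
        print(rows)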

  15. Computer-aided detection of mammographic masses based on content-based image retrieval

    Science.gov (United States)

    Jin, Renchao; Meng, Bo; Song, Enmin; Xu, Xiangyang; Jiang, Luan

    2007-03-01

    A method for computer-aided detection (CAD) of mammographic masses is proposed and a prototype CAD system is presented. The method is based on content-based image retrieval (CBIR). A mammogram database containing 2,000 mammographic regions is built in our prototype CBIR-CAD system. Every region of interest (ROI) in the database has known pathology. Specifically, there are 583 ROIs depicting biopsy-proven masses, and the remaining 1,417 ROIs are normal. Whenever a suspicious ROI is detected in a mammogram by a radiologist, it can be submitted as a query to this CBIR-CAD system. As the query results, a series of similar ROI images together with their known pathology knowledge will be retrieved from the database and displayed on the screen in descending order of their similarities to the query ROI, to help the radiologist make the diagnosis decision. Furthermore, our CBIR-CAD system will output a decision index (DI) to quantitatively indicate the probability that the query ROI contains a mass. The DI is calculated from the query matches. In the querying process, 24 features are extracted from each ROI to form a 24-dimensional vector. Euclidean distance in the 24-dimensional feature vector space is applied to measure the similarities between ROIs. The prototype CBIR-CAD system is evaluated based on the leave-one-out sampling scheme. The experimental results showed that the system can achieve a receiver operating characteristic (ROC) area index Az = 0.84 for detection of mammographic masses, which is better than the best results achieved by the other known mass CAD systems.
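
    The querying step reduces to a nearest-neighbor search in the 24-dimensional feature space plus a decision index. The sketch below assumes the DI is the fraction of mass-labeled ROIs among the top matches, which is one plausible reading of "calculated from the query matches"; k is an assumed cut-off.

        import numpy as np

        def query_roi(query_vec, db_vecs, db_is_mass, k=10):
            """Return indices of the k most similar ROIs (Euclidean distance) and a decision index in [0, 1]."""
            d = np.linalg.norm(db_vecs - query_vec, axis=1)
            order = np.argsort(d)[:k]
            decision_index = float(np.mean(np.asarray(db_is_mass)[order]))   # fraction of retrieved ROIs that are masses
            return order, decision_index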

  16. Efficient content-based P2P image retrieval using peer content descriptions

    Science.gov (United States)

    Muller, Wolfgang T.; Eisenhardt, Martin; Henrich, Andreas

    2003-12-01

    Peer-to-peer (P2P) networks are overlay networks that connect independent computers (also called nodes or peers). In contrast to client/server solutions, all nodes offer and request services from other peers in a P2P network. P2P networks are very attractive in that they harness the computing power of many common desktop machines and necessitate little administrative overhead. While the resulting computing power is impressive, efficiently looking up data is still the major challenge in P2P networks. Current work comprises fast lookup of one-dimensional values (Distributed Hash Tables, DHT) and retrieval of texts using a few keywords. However, the lookup of multimedia data in P2P networks is still tackled by very few groups. In this paper, we present experiments with efficient Content-Based Image Retrieval in a P2P environment, thus a P2P-CBIR system. The challenge in such systems is to limit the number of messages sent, and to maximize the usefulness of each peer contacted in the query process. We achieve this by distributing peer data summaries over the network. Obviously, the data summaries have to be compact in order to limit the communication overhead. We propose a CBIR scheme based on a compact peer data summary. This peer data summary relies on cluster frequencies. To obtain the compact representation of a peer's collection, a global clustering of the data is efficiently calculated in a distributed manner. After that, each peer publishes how many of its images fall into each cluster. These cluster frequencies are then used by the querying peer to contact only those peers that have the largest number of images present in the cluster given by the query. In our paper we further detail the various challenges that have to be met by the designers of such a P2P-CBIR, and we present experiments with varying degrees of data replication (duplicates of images), as well as quality of clustering within the network.
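
    A toy version of the peer-summary idea: every peer publishes how many of its images fall into each global cluster, and the querying peer contacts only the peers with the most images in the query's cluster. Cluster assignments are assumed to come from a globally agreed vocabulary (here, a pre-fitted scikit-learn k-means model passed in as a parameter); the actual distributed clustering is not shown.

        import numpy as np

        def peer_summaries(peer_features, kmeans, n_clusters):
            """peer_features: dict peer_id -> (n_images, dim) array. Returns per-peer cluster frequency vectors."""
            return {peer: np.bincount(kmeans.predict(feats), minlength=n_clusters)
                    for peer, feats in peer_features.items()}

        def peers_to_contact(query_vec, summaries, kmeans, max_peers=3):
            """Rank peers by how many of their images fall into the query's cluster."""
            c = int(kmeans.predict(query_vec.reshape(1, -1))[0])
            ranked = sorted(summaries, key=lambda p: summaries[p][c], reverse=True)
            return ranked[:max_peers]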

  17. Towards case-based medical learning in radiological decision making using content-based image retrieval

    Directory of Open Access Journals (Sweden)

    Günther Rolf W

    2011-10-01

    Full Text Available Abstract Background Radiologists' training is based on intensive practice and can be improved with the use of diagnostic training systems. However, existing systems typically require laboriously prepared training cases and lack integration into the clinical environment with a proper learning scenario. Consequently, diagnostic training systems advancing decision-making skills are not well established in radiological education. Methods We investigated didactic concepts and appraised methods appropriate to the radiology domain, as follows: (i) Adult learning theories stress the importance of work-related practice gained in a team of problem-solvers; (ii) Case-based reasoning (CBR) parallels the human problem-solving process; (iii) Content-based image retrieval (CBIR) can be useful for computer-aided diagnosis (CAD). To overcome the known drawbacks of existing learning systems, we developed the concept of image-based case retrieval for radiological education (IBCR-RE). The IBCR-RE diagnostic training is embedded into a didactic framework based on the Seven Jump approach, which is well established in problem-based learning (PBL). In order to provide a learning environment that is as similar as possible to radiological practice, we have analysed the radiological workflow and environment. Results We mapped the IBCR-RE diagnostic training approach into the Image Retrieval in Medical Applications (IRMA) framework, resulting in the proposed concept of the IRMAdiag training application. IRMAdiag makes use of the modular structure of IRMA and comprises (i) the IRMA core, i.e., the IRMA CBIR engine; and (ii) the IRMAcon viewer. We propose embedding IRMAdiag into hospital information technology (IT) infrastructure using the standard protocols Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7). Furthermore, we present a case description and a scheme of planned evaluations to comprehensively assess the system. Conclusions The IBCR-RE paradigm

  18. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Directory of Open Access Journals (Sweden)

    Meiyan Huang

    Full Text Available This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called the Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases beyond 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.

  19. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Science.gov (United States)

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Gao, Yang; Chen, Yang; Feng, Qianjin; Chen, Wufan; Lu, Zhentai

    2014-01-01

    This study aims to develop content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called the Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases beyond 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR Images.

  20. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    S P Vimal; P K Thiruvikraman

    2012-12-01

    We propose a scheme for automating power law transformations which are used for image enhancement. The scheme we propose does not require the user to choose the exponent in the power law transformation. This method works well for images having poor contrast, especially those images in which the peaks corresponding to the background and the foreground are not widely separated.
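
    For reference, the power-law (gamma) transformation itself is a one-liner; the exponent below is just a placeholder, since the point of the paper is choosing it automatically rather than by hand.

        import numpy as np

        def power_law_transform(gray, gamma):
            """Apply s = 255 * (r / 255) ** gamma to an 8-bit grayscale image."""
            r = gray.astype(float) / 255.0
            return np.clip(255.0 * np.power(r, gamma), 0, 255).astype(np.uint8)

        # gamma < 1 brightens dark, low-contrast images; gamma > 1 darkens washed-out ones.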

  1. Content-based histopathology image retrieval using a kernel-based semantic annotation framework.

    Science.gov (United States)

    Caicedo, Juan C; González, Fabio A; Romero, Eduardo

    2011-08-01

    Large amounts of histology images are captured and archived in pathology departments due to the ever-expanding use of digital microscopy. The ability to manage and access these collections of digital images is regarded as a key component of next generation medical imaging systems. This paper addresses the problem of retrieving histopathology images from a large collection using an example image as the query. The proposed approach automatically annotates the images in the collection, as well as the query images, with high-level semantic concepts. This semantic representation delivers an improved retrieval performance, providing more meaningful results. We model the problem of automatic image annotation using kernel methods, resulting in a unified framework that includes: (1) multiple features for image representation, (2) a feature integration and selection mechanism, and (3) an automatic semantic image annotation strategy. An extensive experimental evaluation demonstrated the effectiveness of the proposed framework to build meaningful image representations for learning and useful semantic annotations for image retrieval.

  2. Automated imaging system for single molecules

    Science.gov (United States)

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  3. No-reference multiscale blur detection tool for content based image retrieval

    Science.gov (United States)

    Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark

    2014-06-01

    In recent years, digital cameras have been widely used for image capturing. These devices are built into cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is required as a reference image. In that case, Root Mean Square Error and Peak Signal to Noise Ratio can be used to measure the quality of the images. However, these methods are not possible if there is no reference image. In our approach, a discrete wavelet transform is applied to the blurred image, which decomposes it into the approximation image and three detail sub-images, namely the horizontal, vertical, and diagonal images. We then focus on measuring noise in the detail images and blur in the approximation image to assess the image quality. We compute the noise mean and noise ratio from the detail images, and the blur mean and blur ratio from the approximation image. The Multi-scale Blur Detection (MBD) metric provides both an assessment of the noise and blur content. These values are weighted based on a linear regression against full-reference values. From these statistics, we can assess image quality without needing a reference image. We then test the validity of the obtained weights by R² analysis, as well as by using them to estimate the quality of an image with a known quality measure. The result shows that our method provides acceptable results for images containing low to mid noise levels and blur content.
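
    A minimal sketch of the decomposition step, assuming PyWavelets: a single-level 2-D DWT yields the approximation image and the horizontal/vertical/diagonal detail sub-images, from which simple noise and blur statistics can be taken. The specific statistics below are placeholders for the MBD measures, not the paper's exact formulas.

        import numpy as np
        import pywt

        def dwt_quality_stats(gray):
            """Single-level 2-D DWT; noise statistics from the detail bands, a blur statistic from the approximation band."""
            cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
            details = np.concatenate([cH.ravel(), cV.ravel(), cD.ravel()])
            noise_mean = float(np.mean(np.abs(details)))                    # high-frequency energy ~ noise
            noise_ratio = float(np.count_nonzero(np.abs(details) > 3 * details.std()) / details.size)
            blur_mean = float(np.mean(np.abs(np.gradient(cA)[0])))          # weak gradients in cA ~ blur
            return {"noise_mean": noise_mean, "noise_ratio": noise_ratio, "blur_mean": blur_mean}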

  4. Automated image analysis techniques for cardiovascular magnetic resonance imaging

    NARCIS (Netherlands)

    Geest, Robertus Jacobus van der

    2011-01-01

    The introductory chapter provides an overview of various aspects related to quantitative analysis of cardiovascular MR (CMR) imaging studies. Subsequently, the thesis describes several automated methods for quantitative assessment of left ventricular function from CMR imaging studies. Several novel

  5. Content-based image retrieval using a signature graph and a self-organizing map

    Directory of Open Access Journals (Sweden)

    Van Thanh The

    2016-06-01

    Full Text Available In order to effectively retrieve images from a large database, a method of creating a content-based image retrieval (CBIR) system is applied, based on a binary index which aims to describe the features of an image object of interest. This index is called the binary signature and forms the input data for the problem of matching similar images. To extract the object of interest, we propose an image segmentation method on the basis of low-level visual features including the color and texture of the image. These features are extracted at each block of the image by the discrete wavelet frame transform and the appropriate color space. On the basis of a segmented image, we create a binary signature to describe the location, color and shape of the objects of interest. In order to match similar images, we provide a similarity measure between the images based on binary signatures. Then, we present a CBIR model which combines a signature graph and a self-organizing map to cluster and store similar images. To illustrate the proposed method, experiments on image databases are reported, including COREL, Wang and MSRDI.

  6. Integrating Color and Spatial Feature for Content-Based Image Retrieval

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, we present a novel and efficient scheme for extracting, indexing and retrieving color images. Our motivation was to reduce the space overhead of partition-based approaches by taking advantage of the fact that only a relatively low number of distinct values of a particular visual feature is present in most images. To extract the color feature and build indices into our image database, we take into consideration factors such as human color perception and perceptual range, and the image is partitioned into a set of regions by using a simple classifying scheme. The compact color feature vector and the spatial color histogram, which are extracted from the segmented image region, are used for representing the color and spatial information in the image. We have also developed region-based distance measures to compare the similarity of two images. Extensive tests on a large image collection were conducted to demonstrate the effectiveness of the proposed approach.

  7. Content-based image retrieval in the World Wide Web: a web agent for fetching portraits

    Science.gov (United States)

    Muenkelt, Olaf; Kaufmann, Oliver; Eckstein, Wolfgang

    1997-01-01

    This article proposes a way to automatically retrieve images from the World Wide Web using a semantic description for images and an agent concept for the retrieval of images. The system represents images in a textual way, e.g., "look for a portrait of a specific person" or "fetch an image showing a countryside in Southern California". These textual descriptions are fed into search engines, e.g., Yahoo or AltaVista. The resulting HTML documents are searched for links. The next step processes each link in turn by fetching the document over the net, converting it to an ASCII representation, and performing a full-text search using the image description. This leads to starting points for images, which are then retrieved via a web agent. The image descriptions are decomposed into a set of parts containing image operations which are further processed; e.g., a set representing the background of a portrait tries to find a homogeneous region in the image, because this is what one is likely to find in a portrait. Additional operations are performed on the foreground, i.e., the image region which contains, for example, the face of a person. The system is realized using two C++ libraries: one for building web agents, LIWA++, and one for processing images, HORUS.

  8. A New Content-Based Image Retrieval Using the Multidimensional Generalization of Wald-Wolfowitz Runs Test

    Science.gov (United States)

    Leauhatong, Thurdsak; Hamamoto, Kazuhiko; Atsuta, Kiyoaki; Kondo, Shozo

    This paper proposes two new similarity measures for the content-based image retrieval (CBIR) systems. The similarity measures are based on the k-means clustering algorithm and the multidimensional generalization of the Wald-Wolfowitz (MWW) runs test. The performance comparisons between the proposed similarity measures and a current CBIR similarity measure based on the MWW runs test were performed, and it can be seen that the proposed similarity measures outperform the current similarity measure with respect to the precision and the computational time.

  9. Analyzing and mining automated imaging experiments.

    Science.gov (United States)

    Berlage, Thomas

    2007-04-01

    Image mining is the application of computer-based techniques that extract and exploit information from large image sets to support human users in generating knowledge from these sources. This review focuses on biomedical applications of this technique, in particular automated imaging at the cellular level. Due to increasing automation and the availability of integrated instruments, biomedical users are becoming increasingly confronted with the problem of analyzing such data. Image database applications need to combine data management, image analysis and visual data mining. The main point of such a system is a software layer that represents objects within an image and the ability to use a large spectrum of quantitative and symbolic object features. Image analysis needs to be adapted to each particular experiment; therefore, 'end user programming' will be desired to make the technology more widely applicable.

  10. Classifying content-based Images using Self Organizing Map Neural Networks Based on Nonlinear Features

    Directory of Open Access Journals (Sweden)

    Ebrahim Parcham

    2014-07-01

    Full Text Available Classifying similar images is one of the most interesting and essential image processing operations. Existing methods have disadvantages such as low accuracy in the analysis step and low speed in the feature extraction process. In this paper, a new method for image classification is proposed in which the similarity weight is revised by means of information in related and unrelated images. Most real-world similarity measurement systems are nonlinear, and traditional linear methods are therefore not capable of recognizing the nonlinear relationships and correlations in such systems. Self-Organizing Map neural networks are among the strongest networks for data mining and nonlinear analysis of complex spaces. In our proposed method, we obtain the most similar images by extracting features of the target image and comparing them with the features of other images. We take advantage of the NLPCA algorithm for feature extraction, a nonlinear algorithm that can recognize even the smallest variations in noisy images. Finally, we compare the run time and efficiency of our proposed method with previously proposed methods.

  11. HSV Color Histogram and Directional Binary Wavelet Patterns for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    P.Vijaya Bhaskar Reddy

    2012-08-01

    Full Text Available This paper presents a new image indexing and retrieval algorithm that integrates color (HSV color histogram) and texture (directional binary wavelet pattern, DBWP) features. For the color feature, first the RGB image is converted to an HSV image, and then histograms are constructed from the HSV spaces. For the texture feature, an 8-bit grayscale image is divided into eight binary bit-planes, and then the binary wavelet transform (BWT) is applied on each bit-plane to extract the multi-resolution binary images. The local binary pattern (LBP) features are extracted from the resultant BWT sub-bands. Two experiments have been carried out to prove the worth of our algorithm. The databases considered for the experiments are the Corel 1000 database (DB1) and the MIT VisTex database (DB2). The results after being investigated show a significant improvement in terms of the evaluation measures as compared to the HSV histogram and DBWP alone.

  12. Content-Based High-Resolution Remote Sensing Image Retrieval via Unsupervised Feature Learning and Collaborative Affinity Metric Fusion

    Directory of Open Access Journals (Sweden)

    Yansheng Li

    2016-08-01

    Full Text Available With the urgent demand for automatic management of large numbers of high-resolution remote sensing images, content-based high-resolution remote sensing image retrieval (CB-HRRS-IR) has attracted much research interest. Accordingly, this paper proposes a novel high-resolution remote sensing image retrieval approach via multiple feature representation and collaborative affinity metric fusion (IRMFRCAMF). In IRMFRCAMF, we design four unsupervised convolutional neural networks with different layers to generate four types of unsupervised features, from the fine level to the coarse level. In addition to these four types of unsupervised features, we also implement four traditional feature descriptors, including the local binary pattern (LBP), gray-level co-occurrence matrix (GLCM), maximal response 8 (MR8), and scale-invariant feature transform (SIFT). In order to fully incorporate the complementary information among multiple features of one image and the mutual information across auxiliary images in the image dataset, this paper advocates collaborative affinity metric fusion to measure the similarity between images. The performance evaluation of high-resolution remote sensing image retrieval is implemented on two public datasets, the UC Merced (UCM) dataset and the Wuhan University (WH) dataset. A large number of experiments show that our proposed IRMFRCAMF can significantly outperform the state-of-the-art approaches.

  13. Content-Based Image Retrieval Method using the Relative Location of Multiple ROIs

    Directory of Open Access Journals (Sweden)

    LEE, J.

    2011-08-01

    Full Text Available Recently, image retrieval methods based on specifying multiple regions of interest (ROIs) have been suggested. However, they measure the similarity of the images without proper consideration of the spatial layout of the ROIs and thus fail to accurately reflect the intent of the user. In this paper, we propose a new similarity measurement using the relative layouts of the ROIs. The proposed method divides images into blocks of a certain size and extracts MPEG-7 dominant colors from the blocks overlapping with the user-designated ROIs to measure their similarity with the target images. Similarity is weighted when the relative location of the ROIs in the query image and the target image is the same. The relative location is calculated using four directions (i.e., up, down, left and right) with respect to the base ROI. Experiments using MPEG-7 XM show that the performance of the proposed method is higher than that of the global image retrieval method or the retrieval method that does not consider the relative location of the ROIs.

  14. Content-Based Digital Image Retrieval based on Multi-Feature Amalgamation

    Directory of Open Access Journals (Sweden)

    Linhao Li

    2013-12-01

    Full Text Available In practice, digital image retrieval faces various problems, and difficulties remain in the measures and methods used in applications. Currently there is no unambiguous algorithm that can directly capture the salient features of image content and simultaneously satisfy color, scale and rotation invariance of the features. We therefore analyze the related technology of content-based image retrieval. The research focuses on global features such as the seven Hu invariant moments, the edge direction histogram and eccentricity. A method for blocked images is also discussed. During image matching, the extracted image features are treated as points in a vector space. The similarity of two images is measured by the closeness between the two points, calculated using the Euclidean distance and the histogram intersection distance. A novel method based on multi-feature amalgamation is then proposed to address the problems of retrieval methods that use only global or only local features. It extracts the eccentricity, the seven Hu invariant moments and the edge direction histogram, computes the similarity distance for each feature of the images, and then normalizes them. Within the global features, a weighted feature distance is adopted to form the similarity measurement function for retrieval. The features of blocked images are extracted with a partitioning method based on polar coordinates. Finally, following the idea of hierarchical retrieval between global and local features, results are first obtained using global features such as the invariant moments; these results are then taken as the input of the local feature matching in a second-layer retrieval, which can effectively improve retrieval accuracy.
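
    A sketch of the global-feature part, assuming OpenCV: the seven Hu invariant moments plus an edge-direction histogram are concatenated and compared with a weighted Euclidean distance. The weights, edge threshold and histogram bins are illustrative; the paper's exact normalization and weighting are not reproduced.

        import cv2
        import numpy as np

        def global_features(gray):
            """Seven Hu invariant moments (log-scaled) plus an 8-bin edge-direction histogram."""
            hu = cv2.HuMoments(cv2.moments(gray)).flatten()
            hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)        # usual log scaling of Hu moments
            gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
            gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
            ang = np.degrees(np.arctan2(gy, gx)) % 360
            edge_hist, _ = np.histogram(ang[np.hypot(gx, gy) > 50], bins=8, range=(0, 360))
            edge_hist = edge_hist / max(edge_hist.sum(), 1)
            return np.concatenate([hu, edge_hist])

        def weighted_distance(f1, f2, weights):
            """Weighted Euclidean distance between two feature vectors."""
            return float(np.sqrt(np.sum(weights * (f1 - f2) ** 2)))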

  15. Content-based image retrieval system for solid waste bin level detection and performance evaluation.

    Science.gov (United States)

    Hannan, M A; Arebey, M; Begum, R A; Basri, Hassan; Al Mamun, Md Abdulla

    2016-04-01

    This paper presents a CBIR system to investigate the use of image retrieval, with texture extracted from bin images, to detect the bin level. Various similarity distances such as the Euclidean, Bhattacharyya, Chi-squared, Cosine, and EMD distances are used with the CBIR system for calculating and comparing the distance between a query image and the images in a database to obtain the highest performance. In this study, the performance metrics are based on two quantitative evaluation criteria. The first is the average retrieval rate based on the precision-recall graph, and the second is the F1 measure, the weighted harmonic mean of precision and recall. For feature extraction, texture is used as the image feature for the bin level detection system. Various experiments are conducted with different feature extraction techniques, such as the Gabor wavelet filter, gray-level co-occurrence matrix (GLCM), and gray-level aura matrix (GLAM), to identify the level of the bin and its surrounding area. Intensive tests are conducted on 250 bin images to assess the accuracy of the proposed feature extraction techniques. The average retrieval rate is used to evaluate the performance of the retrieval system. The results show that the EMD distance achieved high accuracy and provides better performance than the other distances.
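
    One way to compute the listed distances between two normalized histograms is sketched below, assuming SciPy; the 1-D EMD here treats the histogram bins as points on a line, which is a common simplification rather than the paper's exact EMD setup.

        import numpy as np
        from scipy.stats import wasserstein_distance
        from scipy.spatial.distance import cosine

        def histogram_distances(p, q):
            """Distances between two normalized 1-D histograms p and q."""
            p, q = np.asarray(p, float), np.asarray(q, float)
            return {
                "euclidean": float(np.linalg.norm(p - q)),
                "bhattacharyya": float(-np.log(np.sum(np.sqrt(p * q)) + 1e-12)),
                "chi_squared": float(0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))),
                "cosine": float(cosine(p, q)),
                "emd_1d": float(wasserstein_distance(np.arange(p.size), np.arange(q.size), p, q)),
            }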

  16. A rapid automatic analyzer and its methodology for effective bentonite content based on image recognition technology

    Directory of Open Access Journals (Sweden)

    Wei Long

    2016-09-01

    Full Text Available Fast and accurate determination of the effective bentonite content in used clay-bonded sand is very important for selecting the correct mixing ratio and mixing process to obtain high-performance molding sand. Currently, the effective bentonite content is determined by testing the methylene blue absorbed by the used clay-bonded sand, which is usually a manual operation with several disadvantages, including a complicated process, long testing time and low accuracy. A rapid automatic analyzer of the effective bentonite content in used clay-bonded sand was developed based on image recognition technology. The instrument consists of auto-stirring, auto liquid removal, auto-titration, step-rotation and image acquisition components, and a processor. The principle of the image recognition method is first to decompose the color images into three single-channel gray images, exploiting the different sensitivity of the light blue and dark blue regions in the red, green and blue channels; then to perform gray-value subtraction and gray-level transformation on the gray images; and finally to extract the outer light-blue halo and the inner blue spot and calculate their area ratio. The titration process can be judged to have reached the end-point when the area ratio exceeds the set value.
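
    The area-ratio computation at the core of the end-point detection can be sketched in NumPy, assuming the light-blue halo and dark-blue spot can be separated by thresholding a channel-difference image; the channel choice and thresholds are placeholders, not the analyzer's calibrated values.

        import numpy as np

        def halo_spot_area_ratio(bgr, halo_thresh=40, spot_thresh=120):
            """Approximate the outer light-blue halo vs. inner dark-blue spot area ratio in a titration spot image."""
            b = bgr[:, :, 0].astype(np.int16)         # OpenCV-style BGR channel order assumed
            r = bgr[:, :, 2].astype(np.int16)
            blueness = np.clip(b - r, 0, 255)         # blue-minus-red difference image
            spot = blueness >= spot_thresh            # inner dark-blue spot: strong blue excess
            halo = (blueness >= halo_thresh) & ~spot  # outer light-blue halo around it
            return float(np.count_nonzero(halo)) / max(np.count_nonzero(spot), 1)

        # The end-point would be declared once this ratio rises above a preset value.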

  17. Automated Image Retrieval of Chest CT Images Based on Local Grey Scale Invariant Features.

    Science.gov (United States)

    Arrais Porto, Marcelo; Cordeiro d'Ornellas, Marcos

    2015-01-01

    Text-based tools are regularly employed to retrieve medical images for reading and interpretation in current Picture Archiving and Communication Systems (PACS), but they have some drawbacks. All-purpose content-based image retrieval (CBIR) systems are limited when dealing with medical images and do not fit well into PACS workflow and clinical practice. This paper presents an automated image retrieval approach for chest CT images based on local grey-scale invariant features, using a local database. Performance was measured in terms of precision and recall, average retrieval precision (ARP), and average retrieval rate (ARR). Preliminary results have shown the effectiveness of the proposed approach. The prototype is also a useful tool for radiology research and education, providing valuable information to the medical and broader healthcare community.
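
    The evaluation measures mentioned (precision, recall and their averages over queries) can be computed as below; the relevance flags are assumed to mark database items that share the query's class, and the cut-off k is an assumed parameter.

        import numpy as np

        def precision_recall_at_k(ranked_relevant, k=10):
            """ranked_relevant: boolean array over the ranked retrieval list for one query."""
            rel = np.asarray(ranked_relevant, bool)
            retrieved = rel[:k]
            precision = retrieved.sum() / k
            recall = retrieved.sum() / max(rel.sum(), 1)
            return precision, recall

        def average_retrieval_precision(all_queries, k=10):
            """Mean precision at k over a list of per-query relevance arrays (ARP)."""
            return float(np.mean([precision_recall_at_k(q, k)[0] for q in all_queries]))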

  18. Spatial Color Indexing: An Efficient and Robust Technique for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Rachid Alaoui

    2009-01-01

    Full Text Available Problem statement: The color histogram is widely accepted as a useful feature representation because it is a statistical result and possesses the merits of simplicity, robustness and efficiency. However, the main problem with color histogram indexing is that it does not take spatial information into account. Previous research has shown that the effectiveness of image retrieval increases when the spatial distribution of colors is included. Approach: This study examined a computational geometry-based spatial color indexing methodology with two major contributions: (1) Color Spatial Entropy (CSE), which introduces entropy to describe the spatial information of colors, and (2) Color Hybrid Entropy (CHE), which introduces a spatial description on multiresolution images. Results: The experimental results showed that CSE and CHE give better performance, efficiency and more relevant results than traditional CBIR methods based on local histograms. Conclusion: The new system strengthens retrieval efficacy and remains more stable under geometric transformations, with CHE quantitatively characterizing the compactness of the multiresolution images.
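    The exact CSE/CHE formulations are not given in this record; the following toy sketch only illustrates the general idea of attaching an entropy value to the spatial spread of each quantized color. The grid size and the 8-color quantization are assumptions, not the paper's parameters:

```python
import numpy as np

def spatial_color_entropy(labels, n_colors, grid=4):
    """For each quantized color, compute the entropy of its distribution over a
    grid x grid partition of the image; compact colors get low entropy, scattered
    colors get high entropy."""
    h, w = labels.shape
    entropies = np.zeros(n_colors)
    for c in range(n_colors):
        ys, xs = np.nonzero(labels == c)
        if ys.size == 0:
            continue
        cells = (ys * grid // h) * grid + (xs * grid // w)   # cell index of each pixel
        p = np.bincount(cells, minlength=grid * grid).astype(float)
        p /= p.sum()
        p = p[p > 0]
        entropies[c] = -np.sum(p * np.log2(p))
    return entropies

# Hypothetical 8-color quantized image (a label map standing in for real data).
labels = np.random.randint(0, 8, size=(120, 160))
print(spatial_color_entropy(labels, 8))
```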

  19. Development of content based image retrieval system using wavelet and Gabor transform

    Directory of Open Access Journals (Sweden)

    Manish Sharma

    2013-06-01

    Full Text Available A novel approach to image retrieval using color, texture and spatial information is proposed. The color information of an image is represented by the proposed color hologram, which takes into account both the occurrence of colors of pixels and the colors of their neighboring pixels. The proposed fuzzy color homogeneity, encoded by fuzzy sets, is incorporated in the color hologram computation. The texture information is described by the mean, variance and energy of the wavelet decomposition coefficients in all subbands. The spatial information is characterized by class parameters obtained automatically from a unique unsupervised segmentation algorithm in combination with wavelet decomposition. Multi-stage filtering is applied during query processing to reduce the search range and speed up the query. A color homogram filter, wavelet texture filter, and spatial filter are used in sequence to eliminate images that are dissimilar to the query image in color, texture, and spatial information, respectively, from the search range. The proposed texture distance measure used in the wavelet texture filter considers the relationship between the coefficient value ranges and the decomposition levels, thus improving the retrieval performance.
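    A minimal sketch of the subband statistics described above (mean, variance and energy of wavelet detail coefficients), using PyWavelets; the wavelet family ("db2") and decomposition level are assumptions, not necessarily the paper's settings:

```python
import numpy as np
import pywt

def wavelet_texture_features(gray, wavelet="db2", level=2):
    """Mean, variance and energy of every detail subband of a 2-D wavelet
    decomposition, concatenated into one texture feature vector."""
    coeffs = pywt.wavedec2(gray, wavelet=wavelet, level=level)
    features = []
    for detail in coeffs[1:]:                 # skip the approximation band
        for band in detail:                   # (horizontal, vertical, diagonal)
            band = np.asarray(band)
            features += [band.mean(), band.var(), np.sum(band ** 2)]
    return np.array(features)

gray = np.random.rand(128, 128)               # stand-in for a grayscale image
print(wavelet_texture_features(gray).shape)   # 2 levels x 3 bands x 3 statistics
```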

  20. COMPARATIVE STUDY OF DIMENSIONALITY REDUCTION TECHNIQUES USING PCA AND LDA FOR CONTENT BASED IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Shereena V. B

    2015-04-01

    Full Text Available The aim of this paper is to present a comparative study of two linear dimension reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high dimensional input space onto the feature space in which the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes that give the best class separability. The proposed methods are evaluated over a general image database using Matlab. The performance of these systems has been evaluated by precision and recall measures. Experimental results show that the PCA-based dimension reduction method gives better performance, with higher precision and recall values and lower computational complexity, than the LDA-based method.
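    For illustration only, a short scikit-learn sketch contrasting the two reductions on a hypothetical feature matrix (the paper used Matlab; the dimensions and class counts below are made up):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical image feature matrix: 200 images x 64-dim descriptors, 5 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 5, size=200)

# PCA: unsupervised, keeps the directions of maximal variance.
X_pca = PCA(n_components=10).fit_transform(X)

# LDA: supervised, keeps at most (n_classes - 1) discriminative directions.
X_lda = LinearDiscriminantAnalysis(n_components=4).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)   # (200, 10) (200, 4)
```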

  1. Stochastic Optimized Relevance Feedback Particle Swarm Optimization for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    Full Text Available One of the major challenges for CBIR is to bridge the gap between low level features and high level semantics according to the needs of the user. To overcome this gap, relevance feedback (RF) coupled with support vector machines (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM based RF with particle swarm optimization (PSO). The aims of this proposed technique are to enhance the performance of SVM based RF and also to minimize the user interaction with the system by reducing the number of RF rounds. PSO-SVM-RF was tested on the Corel photo gallery containing 10,908 images. The results obtained from the experiments showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that the PSO-SVM-RF technique achieves a high accuracy rate with a small number of iterations.
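    A simplified sketch of a single SVM-based relevance-feedback round with scikit-learn; the PSO tuning of the SVM parameters, which is the paper's actual contribution, is replaced here by fixed parameters, and the feature matrix and labelled indices are hypothetical:

```python
import numpy as np
from sklearn.svm import SVC

def rf_iteration(features, relevant_idx, irrelevant_idx, top_k=10):
    """One relevance-feedback round: fit an SVM on the user-labelled images and
    re-rank the whole database by decision value (higher = more likely relevant)."""
    X = np.vstack([features[relevant_idx], features[irrelevant_idx]])
    y = np.r_[np.ones(len(relevant_idx)), np.zeros(len(irrelevant_idx))]
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)  # PSO would tune C/gamma
    scores = clf.decision_function(features)
    return np.argsort(-scores)[:top_k]

features = np.random.rand(500, 32)             # hypothetical database descriptors
print(rf_iteration(features, [3, 17, 42], [5, 9, 88]))
```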

  2. Computer-Aided Diagnosis in Mammography Using Content-Based Image Retrieval Approaches: Current Status and Future Perspectives

    Directory of Open Access Journals (Sweden)

    Bin Zheng

    2009-06-01

    Full Text Available With the rapid advance of digital imaging technologies, content-based image retrieval (CBIR) has become one of the most active research areas in computer vision. In the last several years, developing computer-aided detection and/or diagnosis (CAD) schemes that use CBIR to search for clinically relevant and visually similar medical images (or regions) depicting suspicious lesions has also been attracting research interest. CBIR-based CAD schemes have the potential to provide radiologists with a “visual aid” and increase their confidence in accepting CAD-cued results in decision making. CAD performance and reliability depend on a number of factors, including the optimization of lesion segmentation, feature selection, reference database size, computational efficiency, and the relationship between the clinical relevance and visual similarity of the CAD results. By presenting and comparing a number of approaches commonly used in previous studies, this article identifies and discusses the optimal approaches for developing CBIR-based CAD schemes and assessing their performance. Although preliminary studies have suggested that using CBIR-based CAD schemes might improve radiologists' performance and/or increase their confidence in decision making, this technology is still in an early development stage. Much research work is needed before CBIR-based CAD schemes can be accepted in clinical practice.

  3. Automated Image Data Exploitation Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, C; Poland, D; Sengupta, S K; Futterman, J H

    2004-01-26

    The automated production of maps of human settlement from recent satellite images is essential to detailed studies of urbanization, population movement, and the like. Commercial satellite imagery is becoming available with sufficient spectral and spatial resolution to apply computer vision techniques previously considered only for laboratory (high resolution, low noise) images. In this project, we extracted the boundaries of human settlements from IKONOS 4-band and panchromatic images using spectral segmentation together with a form of generalized second-order statistics and detection of edges and corners.

  4. Plenoptic Imager for Automated Surface Navigation

    Science.gov (United States)

    Zollar, Byron; Milder, Andrew; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprising a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.

  5. Automated spectral imaging for clinical diagnostics

    Science.gov (United States)

    Breneman, John; Heffelfinger, David M.; Pettipiece, Ken; Tsai, Chris; Eden, Peter; Greene, Richard A.; Sorensen, Karen J.; Stubblebine, Will; Witney, Frank

    1998-04-01

    Bio-Rad Laboratories supplies imaging equipment for many applications in the life sciences. As part of our effort to offer more flexibility to the investigator, we are developing a microscope-based imaging spectrometer for the automated detection and analysis of either conventionally or fluorescently labeled samples. Immediate applications will include the use of fluorescence in situ hybridization (FISH) technology. The field of cytogenetics has benefited greatly from the increased sensitivity of FISH, which simplifies the analysis of complex chromosomal rearrangements. FISH-based identification lends itself to automation more easily than G-banding, the current cytogenetics industry standard; however, the two methods are complementary. Several technologies have been demonstrated successfully for analyzing the signals from labeled samples, including filter exchanging and interferometry. The detection system lends itself to other fluorescent applications, including the display of labeled tissue sections, DNA chips, capillary electrophoresis or any other system using color as an event marker. Enhanced displays of conventionally stained specimens will also be possible.

  6. Multimedia human brain database system for surgical candidacy determination in temporal lobe epilepsy with content-based image retrieval

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-01-01

    This paper presents the development of a human brain multimedia database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1- and T2-weighted MRI and FLAIR MRI and ictal and interictal SPECT modalities with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between an attribute X of an entity Y and the outcome of temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgery assessment, such as memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high signal intensity average on FLAIR images within the hippocampus. This indication is largely consistent with the surgeons' expectations and observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may reveal partially hidden correlations between the contents of different data modalities and the outcome of the surgery.

  7. Content Based Video Retrieval

    Directory of Open Access Journals (Sweden)

    B. V. Patel

    2012-10-01

    Full Text Available Content based video retrieval is an approach for facilitating the searching and browsing of large video collections over the World Wide Web. In this approach, video analysis is conducted on low level visual properties extracted from video frames. We believed that in order to create an effective video retrieval system, visual perception must be taken into account. We conjectured that a technique which employs multiple features for indexing and retrieval would be more effective in the discrimination and search tasks of videos. In order to validate this claim, content based indexing and retrieval systems were implemented using color histograms, various texture features and other approaches. Videos were stored in an Oracle 9i database and a user study measured the correctness of the responses.

  8. Out-of-Sample Extrapolation utilizing Semi-Supervised Manifold Learning (OSE-SSL): Content Based Image Retrieval for Histopathology Images.

    Science.gov (United States)

    Sparks, Rachel; Madabhushi, Anant

    2016-06-06

    Content-based image retrieval (CBIR) retrieves database images most similar to the query image by (1) extracting quantitative image descriptors and (2) calculating the similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low dimensional representation of the high dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, in the form of partial class labels, into a ML scheme such that the low dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score, which enables discrimination between degrees of prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision-recall curve (AUPRC) of 0.53 ± 0.03, compared with an AUPRC of 0.44 ± 0.01 for CBIR using Principal Component Analysis (PCA) to learn the low dimensional space.

  9. Automated landmark-guided deformable image registration

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small-volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the subsequent Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultrafast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and on data from six head-and-neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity-corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.
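    For orientation only, a plain Demons registration sketch with SimpleITK; it omits the landmark guidance and GPU acceleration that define LDIR, and the file names are placeholders:

```python
import SimpleITK as sitk

# Plain Demons registration of a daily CBCT onto the planning CT (file names are
# placeholders); the landmark-guided, GPU-accelerated LDIR variant is not shown.
fixed = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("daily_cbct.nii.gz", sitk.sitkFloat32)

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)              # Gaussian smoothing of the field
displacement_field = demons.Execute(fixed, moving)

transform = sitk.DisplacementFieldTransform(displacement_field)
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0,
                       moving.GetPixelID())
sitk.WriteImage(warped, "cbct_deformed_to_ct.nii.gz")
```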

  10. Automated Quality Assurance Applied to Mammographic Imaging

    Directory of Open Access Journals (Sweden)

    Anne Davis

    2002-07-01

    Full Text Available Quality control in mammography is based upon subjective interpretation of the image quality of a test phantom. In order to suppress subjectivity due to the human observer, automated computer analysis of the Leeds TOR(MAM test phantom is investigated. Texture analysis via grey-level co-occurrence matrices is used to detect structures in the test object. Scoring of the substructures in the phantom is based on grey-level differences between regions and information from grey-level co-occurrence matrices. The results from scoring groups of particles within the phantom are presented.
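    A generic sketch of GLCM-based texture features with scikit-image, shown only to illustrate the kind of measurement involved; it is not the TOR(MAM)-specific scoring, and the distances and angles are assumptions:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(region, distances=(1, 2), angles=(0, np.pi / 2)):
    """Contrast, homogeneity and energy derived from the grey-level
    co-occurrence matrix of an 8-bit image region."""
    glcm = graycomatrix(region, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy")}

region = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in for a phantom ROI
print(glcm_features(region))
```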

  11. Bridging the integration gap between imaging and information systems: a uniform data concept for content-based image retrieval in computer-aided diagnosis.

    Science.gov (United States)

    Welter, Petra; Riesmeier, Jörg; Fischer, Benedikt; Grouls, Christoph; Kuhl, Christiane; Deserno, Thomas M

    2011-01-01

    It is widely accepted that content-based image retrieval (CBIR) can be extremely useful for computer-aided diagnosis (CAD). However, CBIR has not been established in clinical practice yet. As a widely unattended gap of integration, a unified data concept for CBIR-based CAD results and reporting is lacking. Picture archiving and communication systems and the workflow of radiologists must be considered for successful data integration to be achieved. We suggest that CBIR systems applied to CAD should integrate their results in a picture archiving and communication systems environment such as Digital Imaging and Communications in Medicine (DICOM) structured reporting documents. A sample DICOM structured reporting template adaptable to CBIR and an appropriate integration scheme is presented. The proposed CBIR data concept may foster the promulgation of CBIR systems in clinical environments and, thereby, improve the diagnostic process.

  12. Automated vertebra identification in CT images

    Science.gov (United States)

    Ehm, Matthias; Klinder, Tobias; Kneser, Reinhard; Lorenz, Cristian

    2009-02-01

    In this paper, we describe and compare methods for automatically identifying individual vertebrae in arbitrary CT images. The identification is an essential precondition for a subsequent model-based segmentation, which is used in a wide range of orthopedic, neurological, and oncological applications, e.g., spinal biopsies or the insertion of pedicle screws. Since adjacent vertebrae show similar characteristics, automated labeling of the spine column is a very challenging task, especially if no surrounding reference structures can be taken into account. Furthermore, vertebra identification is complicated by the fact that many images are restricted to a very limited field of view and may contain only a few vertebrae. We propose and evaluate two methods for automatically labeling the spine column by evaluating similarities between given models and vertebral objects. In one method, object boundary information is taken into account by applying a Generalized Hough Transform (GHT) for each vertebral object. In the other method, appearance models containing mean gray value information are registered to each vertebral object using cross correlation and local correlation as similarity measures for the optimization function. The GHT is advantageous in terms of computational performance but falls short in terms of identification rate. A correct labeling of the vertebral column was successfully performed on 93% of the test set, consisting of 63 disparate input images, using rigid image registration with local correlation as the similarity measure.

  13. Content-based image retrieval for brain MRI: An image-searching engine and population-based analysis to utilize past clinical data for future diagnosis

    Directory of Open Access Journals (Sweden)

    Andreia V. Faria

    2015-01-01

    Full Text Available Radiological diagnosis is based on subjective judgment by radiologists. The reasoning behind this process is difficult to document and share, which is a major obstacle in adopting evidence-based medicine in radiology. We report our attempt to use a comprehensive brain parcellation tool to systematically capture image features and use them to record, search, and evaluate anatomical phenotypes. Anatomical images (T1-weighted MRI) were converted to a standardized index by using a high-dimensional image transformation method followed by atlas-based parcellation of the entire brain. We investigated how the indexed anatomical data captured the anatomical features of healthy controls and a population with Primary Progressive Aphasia (PPA). PPA was chosen because patients have apparent atrophy at different degrees and locations, thus the automated quantitative results can be compared with trained clinicians' qualitative evaluations. We explored and tested the power of individual classifications and of performing a search for images with similar anatomical features in a database using partial least squares-discriminant analysis (PLS-DA) and principal component analysis (PCA). The agreement between the automated z-score and the averaged visual scores for atrophy (r = 0.8) was virtually the same as the inter-evaluator agreement. The PCA plot distribution correlated with the anatomical phenotypes and the PLS-DA resulted in a model with an accuracy of 88% for distinguishing PPA variants. The quantitative indices captured the main anatomical features. The indexing of image data has a potential to be an effective, comprehensive, and easily translatable tool for clinical practice, providing new opportunities to mine clinical databases for medical decision support.

  14. Content-based image retrieval for brain MRI: an image-searching engine and population-based analysis to utilize past clinical data for future diagnosis.

    Science.gov (United States)

    Faria, Andreia V; Oishi, Kenichi; Yoshida, Shoko; Hillis, Argye; Miller, Michael I; Mori, Susumu

    2015-01-01

    Radiological diagnosis is based on subjective judgment by radiologists. The reasoning behind this process is difficult to document and share, which is a major obstacle in adopting evidence-based medicine in radiology. We report our attempt to use a comprehensive brain parcellation tool to systematically capture image features and use them to record, search, and evaluate anatomical phenotypes. Anatomical images (T1-weighted MRI) were converted to a standardized index by using a high-dimensional image transformation method followed by atlas-based parcellation of the entire brain. We investigated how the indexed anatomical data captured the anatomical features of healthy controls and a population with Primary Progressive Aphasia (PPA). PPA was chosen because patients have apparent atrophy at different degrees and locations, thus the automated quantitative results can be compared with trained clinicians' qualitative evaluations. We explored and tested the power of individual classifications and of performing a search for images with similar anatomical features in a database using partial least squares-discriminant analysis (PLS-DA) and principal component analysis (PCA). The agreement between the automated z-score and the averaged visual scores for atrophy (r = 0.8) was virtually the same as the inter-evaluator agreement. The PCA plot distribution correlated with the anatomical phenotypes and the PLS-DA resulted in a model with an accuracy of 88% for distinguishing PPA variants. The quantitative indices captured the main anatomical features. The indexing of image data has a potential to be an effective, comprehensive, and easily translatable tool for clinical practice, providing new opportunities to mine clinical databases for medical decision support.

  15. WAVELET BASED CONTENT BASED IMAGE RETRIEVAL USING COLOR AND TEXTURE FEATURE EXTRACTION BY GRAY LEVEL COOCURENCE MATRIX AND COLOR COOCURENCE MATRIX

    Directory of Open Access Journals (Sweden)

    Jeyanthi Prabhu

    2014-01-01

    Full Text Available In this study we propose an effective content-based image retrieval method using color and texture features based on wavelet coefficients to achieve good retrieval efficiency. Color feature extraction is performed with a color histogram. Texture feature extraction is performed with the Gray Level Co-occurrence Matrix (GLCM) or Color Co-occurrence Matrix (CCM). This study provides better results for image retrieval through integrated features. Feature extraction by color histogram, texture by GLCM and texture by CCM are compared in terms of the precision performance measure.

  16. A parallel architecture of content based retrieval for lunar images

    Institute of Scientific and Technical Information of China (English)

    陈慧中; 陈永光; 景宁; 陈荦; 刘义

    2013-01-01

    Content-based lunar image retrieval provides a convenient and efficient way of accessing relevant lunar exploration images through their visual content. To increase efficiency, the process of content-based lunar image retrieval was analyzed and modeled using Petri nets, and a parallel mechanism was designed based on the model. A parallel architecture was then proposed for content-based retrieval of lunar exploration images. An experimental system was implemented according to this architecture. Experiments on real datasets, including Chang'e lunar exploration images, confirm that the proposed parallel architecture can effectively improve construction and retrieval efficiency.

  17. Toward Automated Feature Detection in UAVSAR Images

    Science.gov (United States)

    Parker, J. W.; Donnellan, A.; Glasscoe, M. T.

    2014-12-01

    Edge detection identifies seismic or aseismic fault motion, as demonstrated in repeat-pass interferograms obtained by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) program. But this identification is not robust at present: it requires a flattened background image, interpolation into missing data (holes) and outliers, and background noise that is either sufficiently small or roughly white Gaussian. Identification and mitigation of non-Gaussian background image noise is essential to creating a robust, automated system to search for such features. Clearly a robust method is needed for machine scanning of the thousands of UAVSAR repeat-pass interferograms for evidence of fault slip, landslides, and other local features. Empirical examination of detrended noise based on 20 km east-west profiles through desert terrain with little tectonic deformation for a suite of flight interferograms shows non-Gaussian characteristics. Statistical measurement of curvature with varying length scale (Allan variance) shows nearly white behavior (Allan variance slope with spatial distance from roughly -1.76 to -2) from 25 to 400 meters; deviations from -2 suggest that short-range differences (such as those used in detecting edges) are often freer of noise than longer-range differences. At distances longer than 400 m the Allan variance flattens out without consistency from one interferogram to another. We attribute this additional noise afflicting difference estimates at longer distances to atmospheric water vapor and uncompensated aircraft motion. Paradoxically, California interferograms made with increasing time intervals before and after the El Mayor-Cucapah earthquake (2010, M7.2, Mexico) show visually stronger and more interesting edges, but edge detection methods developed for the first year do not produce reliable results over the first two years, because longer time spans suffer reduced coherence in the interferogram. The changes over time are reflecting fault slip and block

  18. Image analysis and platform development for automated phenotyping in cytomics

    NARCIS (Netherlands)

    Yan, Kuan

    2013-01-01

    This thesis is dedicated to the empirical study of image analysis in HT/HC screening studies. Often an HT/HC screen produces extensive amounts of data that cannot be analyzed manually. Thus, an automated image analysis solution is a prerequisite for an objective understanding of the raw image data. Compared to general a

  19. Content-Based Instruction

    Science.gov (United States)

    DelliCarpini, M.; Alonso, O.

    2013-01-01

    DelliCarpini and Alonso's book "Content-Based Instruction" explores different approaches to teaching content-based instruction (CBI) in the English language classroom. They provide a comprehensive overview of how to teach CBI in an easy-to-follow guide that language teachers will find very practical for their own contexts. Topics…

  20. A similarity study between the query mass and retrieved masses using decision tree content-based image retrieval (DTCBIR) CADx system for characterization of ultrasound breast mass images

    Science.gov (United States)

    Cho, Hyun-Chong; Hadjiiski, Lubomir; Chan, Heang-Ping; Sahiner, Berkman; Helvie, Mark; Paramagul, Chintana; Nees, Alexis V.

    2012-03-01

    We are developing a Decision Tree Content-Based Image Retrieval (DTCBIR) CADx scheme to assist radiologists in the characterization of breast masses on ultrasound (US) images. Three DTCBIR configurations, including decision tree with boosting (DTb), decision tree with full leaf features (DTL), and decision tree with selected leaf features (DTLs), were compared. For DTb, the features of a query mass were first combined into a merged feature score and then masses with similar scores were retrieved. For DTL and DTLs, similar masses were retrieved based on the Euclidean distance between the feature vector of the query and those of the selected references. For each DTCBIR configuration, we investigated the use of the full feature set and the subset of features selected by the stepwise linear discriminant analysis (LDA) and simplex optimization method, resulting in six retrieval methods. Among the six methods, we selected five, DTb-lda, DTL-lda, DTb-full, DTL-full and DTLs-full, for the observer study. For a query mass, the three most similar masses were retrieved with each method and were presented to the radiologists in random order. Three MQSA radiologists rated the similarity between the query mass and the computer-retrieved masses using a nine-point similarity scale (1 = very dissimilar, 9 = very similar). For DTb-lda, DTL-lda, DTb-full, DTL-full and DTLs-full, the average Az values were 0.90+/-0.03, 0.85+/-0.04, 0.87+/-0.04, 0.79+/-0.05 and 0.71+/-0.06, respectively, and the average similarity ratings were 5.00, 5.41, 4.96, 5.33 and 5.13, respectively. Although the DTb measures had the best classification performance among the DTCBIRs studied, and DTLs had the worst performance, DTLs-full obtained higher similarity ratings than the DTb measures.

  1. Content Based Medical Image Retrieval with Texture Content Using Gray Level Co-occurrence Matrix and K-Means Clustering Algorithms

    Directory of Open Access Journals (Sweden)

    K. R. Chandran

    2012-01-01

    Full Text Available Problem statement: Recently, there has been huge progress in the collection of varied image databases in digital form. Most users find it difficult to search and retrieve the required images in large collections. The system has been implemented in order to provide an effective and efficient search tool. Image retrieval systems do not retrieve images from databases directly; instead, various visual features of the images are used indirectly. In this system, the texture feature is extracted from the images, and only these featured images are considered in the retrieval process, in order to retrieve exactly the desired images from the databases. Approach: The aim of this study is to construct an efficient image retrieval tool, namely “Content Based Medical Image Retrieval with Texture Content using Gray Level Co-occurrence Matrix (GLCM) and k-Means Clustering algorithms”. This image retrieval tool is capable of retrieving images based on the texture feature of the image and takes into account pre-processing, feature extraction, classification and retrieval steps in order to construct an efficient retrieval tool. The main features of this tool are the use of the GLCM for extracting the texture pattern of the image and the k-means clustering algorithm for image classification in order to improve retrieval efficiency. The proposed image retrieval system consists of three stages, i.e., segmentation, texture feature extraction and clustering. In the segmentation process, a preprocessing step segments the image into blocks. A reduction of the image region to be processed is carried out in the texture feature extraction process and, finally, the extracted image is clustered using the k-means algorithm. The proposed system is employed for domain
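    A hedged sketch of the block-wise GLCM-plus-k-means pipeline outlined above, using scikit-image and scikit-learn; the block size, GLCM parameters and number of clusters are assumptions, not the paper's settings:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def block_texture_features(gray, block=32):
    """Split an 8-bit image into blocks and describe each block by GLCM
    contrast, energy, homogeneity and correlation."""
    feats = []
    for i in range(0, gray.shape[0] - block + 1, block):
        for j in range(0, gray.shape[1] - block + 1, block):
            glcm = graycomatrix(gray[i:i + block, j:j + block], [1], [0],
                                levels=256, symmetric=True, normed=True)
            feats.append([graycoprops(glcm, p)[0, 0]
                          for p in ("contrast", "energy", "homogeneity", "correlation")])
    return np.array(feats)

gray = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in medical image
features = block_texture_features(gray)
clusters = KMeans(n_clusters=4, n_init=10).fit_predict(features)
print(features.shape, np.bincount(clusters))
```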

  2. Automated Segmentation of Cardiac Magnetic Resonance Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Nilsson, Jens Chr.; Grønning, Bjørn A.

    2001-01-01

    is based on determination of the left-ventricular endocardial and epicardial borders. Since manual border detection is laborious, automated segmentation is highly desirable as a fast, objective and reproducible alternative. Automated segmentation will thus enhance comparability between and within cardiac...... studies and increase accuracy by allowing acquisition of thinner MRI-slices. This abstract demonstrates that statistical models of shape and appearance, namely the deformable models: Active Appearance Models, can successfully segment cardiac MRIs....

  3. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation.

    Directory of Open Access Journals (Sweden)

    Oscar Beijbom

    Full Text Available Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey-images captured at four Pacific coral reefs. Inter- and intra- annotator variability among six human experts was quantified and compared to semi- and fully- automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys.

  4. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation

    Science.gov (United States)

    Beijbom, Oscar; Edmunds, Peter J.; Roelfsema, Chris; Smith, Jennifer; Kline, David I.; Neal, Benjamin P.; Dunlap, Matthew J.; Moriarty, Vincent; Fan, Tung-Yung; Tan, Chih-Jui; Chan, Stephen; Treibitz, Tali; Gamst, Anthony; Mitchell, B. Greg; Kriegman, David

    2015-01-01

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey-images captured at four Pacific coral reefs. Inter- and intra- annotator variability among six human experts was quantified and compared to semi- and fully- automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys. PMID:26154157

  5. Facilitating medical information search using Google Glass connected to a content-based medical image retrieval system.

    Science.gov (United States)

    Widmer, Antoine; Schaer, Roger; Markonis, Dimitrios; Muller, Henning

    2014-01-01

    Wearable computing devices are starting to change the way users interact with computers and the Internet. Among them, Google Glass includes a small screen located in front of the right eye, a camera filming in front of the user and a small computing unit. Google Glass has the advantage of providing online services while allowing the user to perform tasks with his/her hands. These augmented glasses uncover many useful applications, also in the medical domain. For example, Google Glass can easily provide video conferencing between medical doctors to discuss a live case. Using these glasses can also facilitate medical information search by allowing access to a large number of annotated medical cases during a consultation, in a non-disruptive fashion for medical staff. In this paper, we developed a Google Glass application able to take a photo and send it to a medical image retrieval system along with keywords in order to retrieve similar cases. As a preliminary assessment of the usability of the application, we tested it under three conditions (images of the skin; printed CT scans and MRI images; and CT and MRI images acquired directly from an LCD screen) to explore whether using Google Glass affects the accuracy of the results returned by the medical image retrieval system. The preliminary results show that, despite minor problems due to the limited stability of the Google Glass, images can be sent to and processed by the medical image retrieval system and similar images are returned to the user, potentially helping in the decision-making process.

  6. Comparison of automated and manual segmentation of hippocampus MR images

    Science.gov (United States)

    Haller, John W.; Christensen, Gary E.; Miller, Michael I.; Joshi, Sarang C.; Gado, Mokhtar; Csernansky, John G.; Vannier, Michael W.

    1995-05-01

    The precision and accuracy of area estimates from magnetic resonance (MR) brain images and using manual and automated segmentation methods are determined. Areas of the human hippocampus were measured to compare a new automatic method of segmentation with regions of interest drawn by an expert. MR images of nine normal subjects and nine schizophrenic patients were acquired with a 1.5-T unit (Siemens Medical Systems, Inc., Iselin, New Jersey). From each individual MPRAGE 3D volume image a single comparable 2-D slice (matrix equals 256 X 256) was chosen which corresponds to the same coronal slice of the hippocampus. The hippocampus was first manually segmented, then segmented using high dimensional transformations of a digital brain atlas to individual brain MR images. The repeatability of a trained rater was assessed by comparing two measurements from each individual subject. Variability was also compared within and between subject groups of schizophrenics and normal subjects. Finally, the precision and accuracy of automated segmentation of hippocampal areas were determined by comparing automated measurements to manual segmentation measurements made by the trained rater on MR and brain slice images. The results demonstrate the high repeatability of area measurement from MR images of the human hippocampus. Automated segmentation using high dimensional transformations from a digital brain atlas provides repeatability superior to that of manual segmentation. Furthermore, the validity of automated measurements was demonstrated by a high correlation with manual segmentation measurements made by a trained rater. Quantitative morphometry of brain substructures (e.g. hippocampus) is feasible by use of a high dimensional transformation of a digital brain atlas to an individual MR image. This method automates the search for neuromorphological correlates of schizophrenia by a new mathematically robust method with unprecedented sensitivity to small local and regional differences.

  7. Automated Real-Time Conjunctival Microvasculature Image Stabilization.

    Science.gov (United States)

    Felder, Anthony E; Mercurio, Cesare; Wanek, Justin; Ansari, Rashid; Shahidi, Mahnaz

    2016-07-01

    The bulbar conjunctiva is a thin, vascularized membrane covering the sclera of the eye. Non-invasive imaging techniques have been utilized to assess the conjunctival vasculature as a means of studying microcirculatory hemodynamics. However, eye motion often confounds quantification of these hemodynamic properties. In the current study, we present a novel optical imaging system for automated stabilization of conjunctival microvasculature images by real-time eye motion tracking and realignment of the optical path. The ability of the system to stabilize conjunctival images acquired over time by reducing image displacements and maintaining the imaging area was demonstrated.

  8. Automation of Cassini Support Imaging Uplink Command Development

    Science.gov (United States)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  9. Estimation of lunar titanium content: Based on absorption features of Chang’E-1 interference imaging spectrometer (IIM)

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Two linear regression models based on absorption features extracted from CE-1 IIM image data are presented to discuss the relationship between absorption features and titanium content. We computed five absorption parameters (Full Width at Half Maximum (FWHM), absorption position, absorption area, absorption depth and absorption asymmetry) of the spectra collected at the Apollo 17 landing site to build two regression models, one with FWHM and the other without FWHM, owing to the low correlation coefficient between FWHM and Ti content. Finally, Ti content measured from Apollo 17 and Apollo 16 samples was used to test the accuracy. The results show that the model with FWHM produces many singular predicted values, whereas the model without FWHM is more stable. The two models are relatively accurate for high-Ti regions, but appear inaccurate and unusable for low-Ti regions.
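    A minimal sketch of fitting such a linear regression on absorption parameters with scikit-learn; all numbers below are invented placeholders, not Apollo sample data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training table: rows are sampling sites, columns are the four
# absorption parameters retained by the second model (position, area, depth,
# asymmetry); y is the laboratory Ti content. Every value here is made up.
X = np.array([[0.95, 12.3, 0.08, 0.41],
              [0.96, 15.1, 0.11, 0.38],
              [0.94, 10.2, 0.06, 0.45],
              [0.97, 18.4, 0.13, 0.36]])
y = np.array([3.1, 4.8, 2.2, 6.0])

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted Ti content:", model.predict([[0.95, 13.0, 0.09, 0.40]]))
```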

  10. Content Based Image Retrieval (CBIR) System Based on Dominant Color Matching

    Institute of Scientific and Technical Information of China (English)

    袁昕; 朱淼良

    2000-01-01

    Content-based image retrieval (CBIR) is an important component of multimedia information management systems. After a brief introduction to CBIR technology, this paper proposes a structural model for a CBIR system, a new method for extracting image content features, and an image matching method based on those features: Dominant Color Extracting (DCE) and Dominant Color Matching (DCM). On this basis, the web-published SPRING experimental system was designed and implemented, and satisfactory experimental results were obtained.
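    The DCE/DCM algorithms themselves are not described in this record; as a generic stand-in, the sketch below extracts dominant colors by clustering pixels and compares two images with a crude fraction-weighted matching distance (cluster count and the distance definition are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(rgb, k=4):
    """Return k dominant colors and their pixel fractions by clustering pixels."""
    pixels = rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10).fit(pixels)
    fractions = np.bincount(km.labels_, minlength=k) / len(pixels)
    return km.cluster_centers_, fractions

def dominant_color_distance(colors_a, frac_a, colors_b, frac_b):
    """Greedy matching of dominant colors weighted by their fractions
    (a crude stand-in for a dominant-color matching measure)."""
    d = 0.0
    for ca, fa in zip(colors_a, frac_a):
        dists = np.linalg.norm(colors_b - ca, axis=1)
        j = np.argmin(dists)
        d += min(fa, frac_b[j]) * dists[j]
    return d

img1 = (np.random.rand(100, 100, 3) * 255).astype(np.uint8)   # stand-in images
img2 = (np.random.rand(100, 100, 3) * 255).astype(np.uint8)
c1, f1 = dominant_colors(img1)
c2, f2 = dominant_colors(img2)
print(dominant_color_distance(c1, f1, c2, f2))
```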

  11. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  12. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    Full Text Available With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  13. Fuzzy emotional semantic analysis and automated annotation of scene images.

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  14. Automated Localization of Optic Disc in Retinal Images

    Directory of Open Access Journals (Sweden)

    Deepali A.Godse

    2013-03-01

    Full Text Available An efficient detection of the optic disc (OD) in colour retinal images is a significant task in an automated retinal image analysis system. Most of the algorithms developed for OD detection are applicable mainly to normal, healthy retinal images. It is a challenging task to detect the OD in all types of retinal images, that is, in normal, healthy images as well as in abnormal images affected by disease. This paper presents an automated system to locate the OD and its centre in all types of retinal images. The ensemble of steps based on different criteria produces more accurate results. The proposed algorithm gives excellent results and avoids false OD detection. The technique is developed and tested on standard databases provided for researchers on the internet, Diaretdb0 (130 images), Diaretdb1 (89 images), Drive (40 images) and a local database (194 images). The local database images were collected from ophthalmic clinics. The system is able to locate the OD and its centre in 98.45% of all tested cases. The results achieved by different algorithms can be compared when the algorithms are applied to the same standard databases. This comparison is also discussed in this paper and shows that the proposed algorithm is more efficient.

  15. Automated image registration for FDOPA PET studies

    Science.gov (United States)

    Lin, Kang-Ping; Huang, Sung-Cheng; Yu, Dan-Chu; Melega, William; Barrio, Jorge R.; Phelps, Michael E.

    1996-12-01

    In this study, various image registration methods are investigated for their suitability for the registration of L-6-[18F]-fluoro-DOPA (FDOPA) PET images. Five different optimization criteria, including the sum of absolute differences (SAD), mean square difference (MSD), cross-correlation coefficient (CC), standard deviation of the pixel ratio (SDPR), and stochastic sign change (SSC), were implemented, and Powell's algorithm was used to optimize the criteria. The optimization criteria were calculated either unidirectionally (i.e., only evaluating the criteria for comparing the resliced image 1 with the original image 2) or bidirectionally (i.e., averaging the criteria for comparing the resliced image 1 with the original image 2 and those for comparing the resliced image 2 with the original image 1). Monkey FDOPA images taken at various known orientations were used to evaluate the accuracy of the different methods. A set of human dynamic FDOPA images was used to investigate the ability of the methods to correct for subject movement. It was found that a large improvement in performance resulted when bidirectional rather than unidirectional criteria were used. Overall, the SAD, MSD and SDPR methods were found to be comparable in performance and were suitable for registering FDOPA images. The MSD method gave more adequate results for frame-to-frame image registration for correcting subject movement during a dynamic FDOPA study. The utility of the registration method is further demonstrated by registering FDOPA images in monkeys before and after amphetamine injection to reveal more clearly the changes in the spatial distribution of FDOPA due to the drug intervention.
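    A small NumPy sketch of three of the optimization criteria compared in this study (SAD, MSD and the cross-correlation coefficient), including the bidirectional averaging idea; the image frames below are random stand-ins:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences."""
    return np.abs(a - b).sum()

def msd(a, b):
    """Mean square difference."""
    return np.mean((a - b) ** 2)

def cc(a, b):
    """Cross-correlation coefficient."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def bidirectional(criterion, resliced_1, original_2, resliced_2, original_1):
    """Average the criterion over both reslicing directions, as in the study."""
    return 0.5 * (criterion(resliced_1, original_2) + criterion(resliced_2, original_1))

frame_a = np.random.rand(64, 64)   # stand-ins for two FDOPA frames
frame_b = np.random.rand(64, 64)
print(sad(frame_a, frame_b), msd(frame_a, frame_b), cc(frame_a, frame_b))
```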

  16. Similarity Measure of Content Based Image Retrieval

    Institute of Scientific and Technical Information of China (English)

    左玉龙

    2012-01-01

    The similarity measure is a key problem in content-based image retrieval. An ideal image similarity measure should match human visual characteristics, so that visually similar images have a small distance between them; that is, the greater the similarity between two images, the smaller their distance. Clearly, the choice of similarity measure strongly influences the retrieval results, and its quality directly affects retrieval performance. This paper therefore analyzes the commonly used similarity measures and suggests directions for future research on similarity measurement.

  17. Benchmarking, Research, Development, and Support for ORNL Automated Image and Signature Retrieval (AIR/ASR) Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, K.W.

    2004-06-01

    This report describes the results of a Cooperative Research and Development Agreement (CRADA) with Applied Materials, Inc. (AMAT) of Santa Clara, California. This project encompassed the continued development and integration of the ORNL Automated Image Retrieval (AIR) technology, and an extension of the technology denoted Automated Signature Retrieval (ASR), and other related technologies with the Defect Source Identification (DSI) software system that was under development by AMAT at the time this work was performed. In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train yield management engineers, and examine historical data for trends. Image management in semiconductor data systems is a growing cause of concern in the industry as fabricators are now collecting up to 20,000 images each week. In response to this concern, researchers at the Oak Ridge National Laboratory (ORNL) developed a semiconductor-specific content-based image retrieval method and system, also known as AIR. The system uses an image-based query-by-example method to locate and retrieve similar imagery from a database of digital imagery using visual image characteristics. The query method is based on a unique architecture that takes advantage of the statistical, morphological, and structural characteristics of image data, generated by inspection equipment in industrial applications. The system improves the manufacturing process by allowing rapid access to historical records of similar events so that errant process equipment can be isolated and corrective actions can be quickly taken to improve yield. The combined ORNL and AMAT technology is referred to hereafter as DSI-AIR and DSI-ASR.

  18. Automated model-based calibration of imaging spectrographs

    Science.gov (United States)

    Kosec, Matjaž; Bürmen, Miran; Tomaževič, Dejan; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Hyper-spectral imaging has gained recognition as an important non-invasive research tool in the field of biomedicine. Among the variety of available hyperspectral imaging systems, systems comprising an imaging spectrograph, lens, wideband illumination source and a corresponding camera stand out for the short acquisition time and good signal to noise ratio. The individual images acquired by imaging spectrograph-based systems contain full spectral information along one spatial dimension. Due to the imperfections in the camera lens and in particular the optical components of the imaging spectrograph, the acquired images are subjected to spatial and spectral distortions, resulting in scene dependent nonlinear spectral degradations and spatial misalignments which need to be corrected. However, the existing correction methods require complex calibration setups and a tedious manual involvement, therefore, the correction of the distortions is often neglected. Such simplified approach can lead to significant errors in the analysis of the acquired hyperspectral images. In this paper, we present a novel fully automated method for correction of the geometric and spectral distortions in the acquired images. The method is based on automated non-rigid registration of the reference and acquired images corresponding to the proposed calibration object incorporating standardized spatial and spectral information. The obtained transformation was successfully used for sub-pixel correction of various hyperspectral images, resulting in significant improvement of the spectral and spatial alignment. It was found that the proposed calibration is highly accurate and suitable for routine use in applications involving either diffuse reflectance or transmittance measurement setups.

  19. Automated morphometry of transgenic mouse brains in MR images

    NARCIS (Netherlands)

    Scheenstra, Alize Elske Hiltje

    2011-01-01

    Quantitative and local morphometry of mouse brain MRI is a relatively new field of research, where automated methods can be exploited to rapidly provide accurate and repeatable results. In this thesis we reviewed several existing methods and applications of quantitative morphometry to brain MR image

  20. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Kanstrup, Anne-Marie Fiehn; Kristensson, Martin; Engel, Ulla

    2016-01-01

    PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...

  1. Automated quality assessment in three-dimensional breast ultrasound images.

    Science.gov (United States)

    Schwaab, Julia; Diez, Yago; Oliver, Arnau; Martí, Robert; van Zelst, Jan; Gubern-Mérida, Albert; Mourri, Ahmed Bensouda; Gregori, Johannes; Günther, Matthias

    2016-04-01

    Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. To this end, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour on the image. Image processing and machine learning algorithms are combined to detect these artifacts based on 368 clinical ABUS images that have been rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images that were rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms work fast and reliably, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further image modalities and quality aspects.
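
    As a brief illustration of the evaluation reported above, the following sketch shows how a single binary artifact detector could be summarized with an ROC AUC and an operating point at high specificity. The labels and classifier scores are synthetic placeholders, not data from the study.

```python
# Sketch: summarizing a binary artifact detector with an ROC AUC, as reported
# for the nipple-position, nipple-shadow and contour-shape classifiers above.
# Labels and scores here are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)                   # 1 = low-quality image
scores = y_true * 0.6 + rng.normal(0.0, 0.3, size=200)  # imperfect detector output

auc = roc_auc_score(y_true, scores)
fpr, tpr, _ = roc_curve(y_true, scores)
# Sensitivity achievable at ~0.99 specificity (false-positive rate <= 0.01).
sens_at_high_spec = tpr[fpr <= 0.01].max() if np.any(fpr <= 0.01) else 0.0
print(f"AUC = {auc:.3f}, sensitivity at 99% specificity = {sens_at_high_spec:.2f}")
```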

  2. Automated Archiving of Archaeological Aerial Images

    Directory of Open Access Journals (Sweden)

    Michael Doneus

    2016-03-01

    The main purpose of any aerial photo archive is to allow quick access to images based on content and location. Therefore, next to a description of technical parameters and depicted content, georeferencing of every image is of vital importance. This can be done either by identifying the main photographed object (georeferencing of the image content) or by mapping the center point and/or the outline of the image footprint. The paper proposes a new image archiving workflow. The new pipeline is based on the parameters that are logged by a commercial but cost-effective GNSS/IMU solution and processed with in-house-developed software. Together, these components allow one to automatically geolocate and rectify the (oblique) aerial images (by a simple planar rectification using the exterior orientation parameters) and to retrieve their footprints with reasonable accuracy, which are automatically stored as a vector file. The data of three test flights were used to determine the accuracy of the device, which turned out to be better than 1° for roll and pitch (mean between 0.0 and 0.21, with a standard deviation of 0.17–0.46) and better than 2.5° for yaw angles (mean between 0.0 and −0.14, with a standard deviation of 0.58–0.94). This turned out to be sufficient to enable fast and almost automatic GIS-based archiving of all of the imagery.

  3. Automated image-based tracking and its application in ecology.

    Science.gov (United States)

    Dell, Anthony I; Bender, John A; Branson, Kristin; Couzin, Iain D; de Polavieja, Gonzalo G; Noldus, Lucas P J J; Pérez-Escudero, Alfonso; Perona, Pietro; Straw, Andrew D; Wikelski, Martin; Brose, Ulrich

    2014-07-01

    The behavior of individuals determines the strength and outcome of ecological interactions, which drive population, community, and ecosystem organization. Bio-logging, such as telemetry and animal-borne imaging, provides essential individual viewpoints, tracks, and life histories, but requires capture of individuals and is often impractical to scale. Recent developments in automated image-based tracking offer opportunities to remotely quantify and understand individual behavior at scales and resolutions not previously possible, providing an essential supplement to other tracking methodologies in ecology. Automated image-based tracking should continue to advance the field of ecology by enabling better understanding of the linkages between individual and higher-level ecological processes, via high-throughput quantitative analysis of complex ecological patterns and processes across scales, including analysis of environmental drivers.

  4. Automated Pointing of Cardiac Imaging Catheters.

    Science.gov (United States)

    Loschak, Paul M; Brattain, Laura J; Howe, Robert D

    2013-12-31

    Intracardiac echocardiography (ICE) catheters enable high-quality ultrasound imaging within the heart, but their use in guiding procedures is limited due to the difficulty of manually pointing them at structures of interest. This paper presents the design and testing of a catheter steering model for robotic control of commercial ICE catheters. The four actuated degrees of freedom (4-DOF) are two catheter handle knobs to produce bi-directional bending in combination with rotation and translation of the handle. An extra degree of freedom in the system allows the imaging plane (dependent on orientation) to be directed at an object of interest. A closed form solution for forward and inverse kinematics enables control of the catheter tip position and the imaging plane orientation. The proposed algorithms were validated with a robotic test bed using electromagnetic sensor tracking of the catheter tip. The ability to automatically acquire imaging targets in the heart may improve the efficiency and effectiveness of intracardiac catheter interventions by allowing visualization of soft tissue structures that are not visible using standard fluoroscopic guidance. Although the system has been developed and tested for manipulating ICE catheters, the methods described here are applicable to any long thin tendon-driven tool (with single or bi-directional bending) requiring accurate tip position and orientation control.

  5. Automated vasculature extraction from placenta images

    Science.gov (United States)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.
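
    The sketch below illustrates the general pixel-classification strategy described above: per-pixel local features feed a small neural network trained on a manually highlighted vessel mask. The two features used here (local mean and gradient magnitude) are placeholders for the richer feature set and the modified road-detector response used in the paper.

```python
# Sketch: pixel-wise blood vessel classification from local features with a
# small neural network, assuming a manually highlighted training mask.
# The two features used here are placeholders for the paper's richer set.
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

def pixel_features(image):
    """Stack simple per-pixel features: local mean and gradient magnitude."""
    local_mean = ndimage.uniform_filter(image.astype(float), size=7)
    grad_mag = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=2)
    return np.stack([local_mean.ravel(), grad_mag.ravel()], axis=1)

def train_vessel_classifier(image, vessel_mask):
    """Train on one image whose vessels were highlighted by hand."""
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(pixel_features(image), vessel_mask.ravel().astype(int))
    return clf

def predict_vessels(clf, image):
    """Apply the trained network to a new placenta image."""
    labels = clf.predict(pixel_features(image))
    return labels.reshape(image.shape).astype(bool)
```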

  6. Content-based retrieval of visual information

    NARCIS (Netherlands)

    Oerlemans, Adrianus Antonius Johannes

    2011-01-01

    In this dissertation, I investigate new approaches relevant to content-based image retrieval techniques. First, the MOD paradigm is proposed, a method for detecting salient points in images. These salient points are specifically designed to enhance image retrieval accuracy by maximizing distinctive

  7. Automated thresholding in radiographic image for welded joints

    Science.gov (United States)

    Yazid, Haniza; Arof, Hamzah; Yazid, Hafizal

    2012-03-01

    Automated detection of welding defects in radiographic images becomes non-trivial when uneven illumination, contrast variations and noise are present. In this paper, a new surface thresholding method is introduced to detect defects in radiographic images of welded joints. In the first stage, several image processing techniques, namely fuzzy c-means clustering, region filling, mean filtering, edge detection, Otsu's thresholding and morphological operations, are utilised to locate the areas in which defects might exist. This is followed in the second stage by inverse surface thresholding with a partial differential equation to locate the isolated areas that represent the defects. The proposed method obtained promising results with high precision.
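
    A minimal sketch of a first-stage candidate localization in the spirit of the steps listed above, using scikit-image and SciPy: mean filtering, Otsu's threshold, morphological clean-up and region filling. The fuzzy c-means clustering and the inverse surface thresholding with a partial differential equation are not reproduced here.

```python
# Sketch of a first-stage candidate localization: smooth the radiograph,
# apply Otsu's threshold and clean the mask with morphological operations.
# The fuzzy c-means and inverse surface thresholding stages are omitted.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, binary_closing, disk

def candidate_defect_regions(radiograph):
    smoothed = ndimage.uniform_filter(radiograph.astype(float), size=5)  # mean filter
    mask = smoothed > threshold_otsu(smoothed)                           # Otsu threshold
    mask = binary_closing(binary_opening(mask, disk(2)), disk(2))        # clean-up
    mask = ndimage.binary_fill_holes(mask)                               # region filling
    return mask
```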

  8. SAND: Automated VLBI imaging and analyzing pipeline

    Science.gov (United States)

    Zhang, Ming

    2016-05-01

    The Search And Non-Destroy (SAND) is a VLBI data reduction pipeline composed of a set of Python programs based on the AIPS interface provided by ObitTalk. It is designed for the massive data reduction of multi-epoch VLBI monitoring research. It can automatically investigate calibrated visibility data, search for all the radio emission above a given noise floor and do the model fitting either on the CLEANed image or directly on the uv data. It then digests the model-fitting results, intelligently identifies the multi-epoch jet component correspondence, and recognizes linear or non-linear proper motion patterns. The outputs include a CLEANed image catalogue with polarization maps, an animation cube, proper motion fits and core light curves. For uncalibrated data, a user can easily add inline modules to do the calibration and self-calibration in a batch for a specific array.

  9. Automated delineation of stroke lesions using brain CT images

    Directory of Open Access Journals (Sweden)

    Céline R. Gillebert

    2014-01-01

    Computed tomographic (CT) images are widely used for the identification of abnormal brain tissue following infarct and hemorrhage in stroke. Manual lesion delineation is currently the standard approach, but it is both time-consuming and operator-dependent. To address these issues, we present a method that can automatically delineate infarct and hemorrhage in stroke CT images. The key elements of this method are the accurate normalization of CT images from stroke patients into template space and the subsequent voxelwise comparison with a group of control CT images for defining areas with hypo- or hyper-intense signals. Our validation, using simulated and actual lesions, shows that our approach is effective in reconstructing lesions resulting from both infarct and hemorrhage and yields lesion maps spatially consistent with those produced manually by expert operators. A limitation is that, relative to manual delineation, the automated method has reduced sensitivity in regions close to the ventricles and the brain contours. However, the automated method presents a number of benefits in terms of offering significant time savings and the elimination of the inter-operator differences inherent to manual tracing approaches. These factors are relevant for the creation of large-scale lesion databases for neuropsychological research. The automated delineation of stroke lesions from CT scans may also enable longitudinal studies to quantify changes in damaged tissue in an objective and reproducible manner.
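
    The core voxelwise comparison described above can be sketched as follows, assuming the patient and control CT volumes have already been normalized to the same template space; voxels whose intensity deviates strongly from the control distribution are flagged as hypo- or hyper-intense candidates.

```python
# Sketch of the voxelwise comparison step: flag voxels whose intensity deviates
# strongly from a group of spatially normalized control CT images.
# Spatial normalization to template space is assumed to have been done already.
import numpy as np

def delineate_lesion(patient, controls, z_thresh=3.0):
    """patient: 3-D array; controls: list of 3-D arrays in the same template space."""
    stack = np.stack(controls, axis=0)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0) + 1e-6           # avoid division by zero
    z = (patient - mean) / std
    hypo = z < -z_thresh                     # candidate infarct (hypo-intense)
    hyper = z > z_thresh                     # candidate hemorrhage (hyper-intense)
    return hypo, hyper
```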

  10. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour-intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  11. Practical approach to apply range image sensors in machine automation

    Science.gov (United States)

    Moring, Ilkka; Paakkari, Jussi

    1993-10-01

    In this paper we propose a practical approach to apply range imaging technology in machine automation. The applications we are especially interested in are industrial heavy-duty machines like paper roll manipulators in harbor terminals, harvesters in forests and drilling machines in mines. Characteristic of these applications is that the sensing system has to be fast, mid-ranging, compact, robust, and relatively cheap. On the other hand the sensing system is not required to be generic with respect to the complexity of scenes and objects or number of object classes. The key in our approach is that just a limited range data set or as we call it, a sparse range image is acquired and analyzed. This makes both the range image sensor and the range image analysis process more feasible and attractive. We believe that this is the way in which range imaging technology will enter the large industrial machine automation market. In the paper we analyze as a case example one of the applications mentioned and, based on that, we try to roughly specify the requirements for a range imaging based sensing system. The possibilities to implement the specified system are analyzed based on our own work on range image acquisition and interpretation.

  12. Automated morphological analysis approach for classifying colorectal microscopic images

    Science.gov (United States)

    Marghani, Khaled A.; Dlay, Satnam S.; Sharif, Bayan S.; Sims, Andrew J.

    2003-10-01

    Automated medical image diagnosis using quantitative measurements is extremely helpful for cancer prognosis, as it can reach a high degree of accuracy and thus support reliable decisions. In this paper, six morphological features based on texture analysis were studied in order to categorize normal and cancerous colon mucosa. They were derived after a series of pre-processing steps that generate a set of different shape measurements. Based on shape and size, six features known as Euler Number, Equivalent Diameter, Solidity, Extent, Elongation, and Shape Factor AR were extracted. Mathematical morphology is used first to remove background noise from segmented images and then to obtain different morphological measures describing the shape, size, and texture of colon glands. The proposed automated system was tested by classifying 102 microscopic samples of colorectal tissue, consisting of 44 normal colon mucosa and 58 cancerous samples. The results were first statistically evaluated using the one-way ANOVA method in order to examine the significance of each extracted feature. Significant features were then selected in order to classify the dataset into two categories. Finally, using two discrimination methods, a linear method and k-means clustering, important classification factors were estimated. In brief, this study demonstrates that abnormalities in tissue morphology at low magnification can be distinguished using quantitative image analysis. This investigation shows the potential of an automated vision system in histopathology. Furthermore, it has the advantage of being objective and, more importantly, of providing a valuable diagnostic decision support tool.
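
    A minimal sketch of this kind of pipeline is given below: per-gland shape descriptors measured with scikit-image, ANOVA screening of each feature, and k-means grouping into two categories. Only the descriptors that map directly onto regionprops (Euler number, equivalent diameter, solidity, extent) are included; Elongation and Shape Factor AR, as well as the linear discrimination method, are omitted, and a labelled gland segmentation is assumed.

```python
# Sketch: per-gland shape features (a subset of those listed above), ANOVA
# screening and k-means grouping. A binary gland segmentation is assumed.
import numpy as np
from skimage.measure import label, regionprops
from scipy.stats import f_oneway
from sklearn.cluster import KMeans

def gland_features(binary_glands):
    """One row of shape descriptors per connected gland region."""
    props = regionprops(label(binary_glands))
    # equivalent_diameter is called equivalent_diameter_area in newer scikit-image.
    return np.array([[p.euler_number, p.equivalent_diameter, p.solidity, p.extent]
                     for p in props])

def screen_and_cluster(features_normal, features_cancer):
    # One-way ANOVA per feature between the two groups.
    pvals = [f_oneway(features_normal[:, i], features_cancer[:, i]).pvalue
             for i in range(features_normal.shape[1])]
    keep = [i for i, p in enumerate(pvals) if p < 0.05]
    keep = keep or list(range(features_normal.shape[1]))   # fall back to all features
    data = np.vstack([features_normal, features_cancer])[:, keep]
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
    return labels, pvals
```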

  13. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    Directory of Open Access Journals (Sweden)

    Alfonso Baldi

    2010-03-01

    Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible to the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval (CBIR) systems.

  14. Image mosaicing for automated pipe scanning

    Science.gov (United States)

    Summan, Rahul; Dobie, Gordon; Guarato, Francesco; MacLeod, Charles; Marshall, Stephen; Forrester, Cailean; Pierce, Gareth; Bolton, Gary

    2015-03-01

    Remote visual inspection (RVI) is critical for inspecting the interior condition of pipelines, particularly in the nuclear and oil and gas industries. Conventional RVI equipment produces a video which is analysed online by a trained inspector employing expert knowledge. Due to the potentially disorientating nature of the footage, this is a time-intensive and difficult activity. In this paper a new probe for such visual inspections is presented. The device employs a catadioptric lens coupled with feature-based structure from motion to create a 3D model of the interior surface of a pipeline. Reliance upon the availability of image features is mitigated through orientation and distance estimates from an inertial measurement unit and an encoder, respectively. Such a model affords a global view of the data, thus permitting a greater appreciation of the nature and extent of defects. Furthermore, the technique estimates the 3D position and orientation of the probe, thus providing information to direct remedial action. Results are presented for both synthetic and real pipe sections. The former enables the accuracy of the generated model to be assessed, while the latter demonstrates the efficacy of the technique in practice.

  15. AUTOMATED IMAGE MATCHING WITH CODED POINTS IN STEREOVISION MEASUREMENT

    Institute of Scientific and Technical Information of China (English)

    Dong Mingli; Zhou Xiaogang; Zhu Lianqing; Lü Naiguang; Sun Yunan

    2005-01-01

    A coding-based method to solve the image matching problem in stereovision measurement is presented. The solution is to append an identity (ID) code to each retro-reflective point so that it can be identified efficiently under complicated conditions and is independent of rotation, zooming, and deformation. The design architecture and implementation process are described in detail based on the theory of stereovision measurement. Experiments show that the method is effective in reducing data processing time, improving the accuracy of image matching and increasing the automation of the measuring system.

  16. GPU Accelerated Automated Feature Extraction From Satellite Images

    Directory of Open Access Journals (Sweden)

    K. Phani Tejaswi

    2013-04-01

    The availability of large volumes of remote sensing data calls for a higher degree of automation in feature extraction, making it a need of the hour. Fusing data from multiple sources, such as panchromatic, hyperspectral and LiDAR sensors, enhances the probability of identifying and extracting features such as buildings, vegetation or bodies of water by using a combination of spectral and elevation characteristics. Utilizing the aforementioned features in remote sensing is impracticable in the absence of automation. While efforts are underway to reduce human intervention in data processing, this attempt alone may not suffice. The huge quantum of data that needs to be processed entails accelerated processing. GPUs, which were originally designed to provide efficient visualization, are being massively employed for computation-intensive parallel processing environments. Image processing in general, and hence automated feature extraction, is highly computation intensive, where performance improvements have a direct impact on societal needs. In this context, an algorithm has been formulated for automated feature extraction from a panchromatic or multispectral image based on image processing techniques. Two Laplacian of Gaussian (LoG) masks were applied to the image individually, followed by detection of zero-crossing points and extraction of pixels based on their standard deviation with respect to the surrounding pixels. The two images extracted with the different LoG masks were combined, resulting in an image with the extracted features and edges. Finally, the user is at liberty to apply an image smoothing step depending on the noise content in the extracted image; the image is passed through a hybrid median filter to remove salt-and-pepper noise. This paper discusses the aforesaid algorithm for automated feature extraction, the necessity of deploying GPUs for the same, system-level challenges, and quantifies the benefits of integrating GPUs in such an environment. The
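
    A CPU-side sketch of the filtering steps described above is given below: two Laplacian-of-Gaussian responses, zero-crossing detection, combination of the two maps and a median filter for residual salt-and-pepper noise. The GPU acceleration and the standard-deviation screening of individual zero-crossing pixels are not reproduced.

```python
# CPU sketch of the described pipeline: two Laplacian-of-Gaussian responses,
# zero-crossing detection, combination of the two maps and median filtering.
# The GPU acceleration discussed in the paper is not reproduced here.
import numpy as np
from scipy import ndimage

def zero_crossings(response):
    """Mark pixels where the filter response changes sign against a neighbour."""
    mins = ndimage.minimum_filter(response, size=3)
    maxs = ndimage.maximum_filter(response, size=3)
    return (mins < 0) & (maxs > 0)

def extract_features(image, sigmas=(1.0, 2.5)):
    maps = []
    for sigma in sigmas:
        log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
        maps.append(zero_crossings(log))
    combined = maps[0] | maps[1]
    # Median filtering suppresses residual salt-and-pepper noise.
    return ndimage.median_filter(combined.astype(np.uint8), size=3).astype(bool)
```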

  17. Automated curved planar reformation of 3D spine images

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo [University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2005-10-07

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.

  18. Automated 3D renal segmentation based on image partitioning

    Science.gov (United States)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at a vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
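
    For reference, the overlap measures quoted above can be computed from binary masks as in the following sketch, which compares an automated segmentation against a hand-segmented gold standard.

```python
# Sketch of the volume-overlap measures quoted above for comparing an automated
# kidney segmentation against a hand-segmented gold standard.
import numpy as np

def overlap_measures(auto_mask, gold_mask):
    a, g = auto_mask.astype(bool), gold_mask.astype(bool)
    intersection = np.logical_and(a, g).sum()
    union = np.logical_or(a, g).sum()
    dice = 2.0 * intersection / (a.sum() + g.sum())
    jaccard = intersection / union
    true_positive_volume_fraction = intersection / g.sum()
    relative_volume_difference = (a.sum() - g.sum()) / g.sum()
    return dice, jaccard, true_positive_volume_fraction, relative_volume_difference
```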

  19. Automated localization of vertebra landmarks in MRI images

    Science.gov (United States)

    Pai, Akshay; Narasimhamurthy, Anand; Rao, V. S. Veeravasarapu; Vaidya, Vivek

    2011-03-01

    The identification of key landmark points in an MR spine image is an important step for tasks such as vertebra counting. In this paper, we propose a template-matching-based approach for automatic detection of two key landmark points, namely the second cervical vertebra (C2) and the sacrum, from sagittal MR images. The approach comprises an approximate localization of the vertebral column followed by matching with appropriate templates in order to detect and localize the landmarks. A straightforward extension of the work described here is an automated classification of spine section(s). It also serves as a useful building block for further automatic processing, such as extraction of regions of interest for subsequent image processing, and aids in vertebra counting.
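
    A minimal sketch of the template-matching step is shown below using normalized cross-correlation from scikit-image; the landmark template and the prior localization of the vertebral column are assumed to be available, and this is only an illustration of the matching idea, not the paper's implementation.

```python
# Sketch: locate a landmark (e.g. the C2 vertebra) in a sagittal slice by
# normalized cross-correlation with a pre-defined template image.
# The template and the prior column localization are assumed.
import numpy as np
from skimage.feature import match_template

def locate_landmark(slice_2d, template):
    ncc = match_template(slice_2d.astype(float), template.astype(float), pad_input=True)
    row, col = np.unravel_index(np.argmax(ncc), ncc.shape)
    return (row, col), float(ncc.max())   # best match position and its NCC score
```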

  20. Automated computational aberration correction method for broadband interferometric imaging techniques.

    Science.gov (United States)

    Pande, Paritosh; Liu, Yuan-Zhi; South, Fredrick A; Boppart, Stephen A

    2016-07-15

    Numerical correction of optical aberrations provides an inexpensive and simpler alternative to the traditionally used hardware-based adaptive optics techniques. In this Letter, we present an automated computational aberration correction method for broadband interferometric imaging techniques. In the proposed method, the process of aberration correction is modeled as a filtering operation on the aberrant image using a phase filter in the Fourier domain. The phase filter is expressed as a linear combination of Zernike polynomials with unknown coefficients, which are estimated through an iterative optimization scheme based on maximizing an image sharpness metric. The method is validated on both simulated data and experimental data obtained from a tissue phantom, an ex vivo tissue sample, and an in vivo photoreceptor layer of the human retina.

  1. Automated blood vessel extraction using local features on retinal images

    Science.gov (United States)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) features on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relation between neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are sensitive to image rotation, so the method was improved by adding HLAC features computed on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features from 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and four output values: those of the first ANN, a Gabor filter, a double-ring filter and a black top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC features output high (white) values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960. The result can be used for the quantitative analysis of the blood vessels.

  2. Automated monitoring of activated sludge using image analysis

    OpenAIRE

    Motta, Maurício da; M. N. Pons; Roche, N; A.L. Amaral; Ferreira, E. C.; Alves, M.M.; Mota, M.; Vivier, H.

    2000-01-01

    An automated procedure for characterising the morphology of activated sludge by image analysis has been used to monitor the biomass in wastewater treatment plants in a systematic manner. Over a period of one year, variations, mainly in the fractal dimension of flocs and in the amount of filamentous bacteria, could be related to rain events affecting the plant influent flow rate and composition. Grand Nancy Council. Météo-France. Brasil. Ministério da Ciênc...

  3. Granulometric profiling of aeolian dust deposits by automated image analysis

    Science.gov (United States)

    Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán

    2016-04-01

    Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alteration of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. This is especially true for aeolian dust deposits, which have a fairly narrow grain size range as a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper interpretation of the proxy data is a mandatory issue. Automated imaging provides a unique technique to gather direct information on granulometric characteristics of sedimentary particles. Automated image analysis with the Malvern Morphologi G3-ID is a rarely applied new technique for particle size and shape analyses in sedimentary geology. In this study, size and shape data of several hundred thousand (or even million) individual particles from 15 loess and paleosol samples were automatically recorded from the captured high-resolution images. Several size parameters (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy of the optical properties of the material. Intensity values depend on the chemical composition and/or thickness of the particles. The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments

  4. Automated classification of colon polyps in endoscopic image data

    Science.gov (United States)

    Gross, Sebastian; Palm, Stephan; Tischendorf, Jens J. W.; Behrens, Alexander; Trautwein, Christian; Aach, Til

    2012-03-01

    Colon cancer is the third most commonly diagnosed type of cancer in the US. In recent years, however, early diagnosis and treatment have caused a significant rise in the five-year survival rate. Preventive screening is often performed by colonoscopy (endoscopic inspection of the colon mucosa). Narrow Band Imaging (NBI) is a novel diagnostic approach highlighting blood vessel structures on polyps, which are an indicator of future cancer risk. In this paper, we review our inter- and intra-observer-independent system for the automated classification of polyps into hyperplasias and adenomas based on vessel structures, with the aim of further improving the classification performance. To surpass the performance limitations we derive a novel vessel segmentation approach, extract 22 features to describe complex vessel topologies, and apply three feature selection strategies. Tests are conducted on 286 NBI images with diagnostically important and challenging polyps (10 mm or smaller) taken from our representative polyp database. Evaluations are based on ground truth data determined by histopathological analysis. Feature selection by simulated annealing yields the best result, with a prediction accuracy of 96.2% (sensitivity: 97.6%, specificity: 94.2%) using eight features. Future development aims at implementing a demonstrator platform to begin clinical trials at University Hospital Aachen.

  5. Automated Image Processing for the Analysis of DNA Repair Dynamics

    CERN Document Server

    Riess, Thorsten; Tomas, Martin; Ferrando-May, Elisa; Merhof, Dorit

    2011-01-01

    The efficient repair of cellular DNA is essential for the maintenance and inheritance of genomic information. In order to cope with the high frequency of spontaneous and induced DNA damage, a multitude of repair mechanisms have evolved. These are enabled by a wide range of protein factors specifically recognizing different types of lesions and finally restoring the normal DNA sequence. This work focuses on the repair factor XPC (xeroderma pigmentosum complementation group C), which identifies bulky DNA lesions and initiates their removal via the nucleotide excision repair pathway. The binding of XPC to damaged DNA can be visualized in living cells by following the accumulation of a fluorescent XPC fusion at lesions induced by laser microirradiation in a fluorescence microscope. In this work, an automated image processing pipeline is presented which allows the accumulation reaction to be identified and quantified without any user interaction. The image processing pipeline comprises a preprocessing stage where the ima...

  6. Automated segmentation of three-dimensional MR brain images

    Science.gov (United States)

    Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee

    2006-03-01

    Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, and other organs), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account the variable intensities of MR brain images corresponding to various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous processes. Experiments were performed with fifteen 8-bit gray-scale 3D MR brain image sets. Experimental results show that the proposed algorithm is fast and provides robust, satisfactory results.

  7. An automated deformable image registration evaluation of confidence tool

    Science.gov (United States)

    Kirby, Neil; Chen, Josephine; Kim, Hojin; Morin, Olivier; Nie, Ke; Pouliot, Jean

    2016-04-01

    Deformable image registration (DIR) is a powerful tool for radiation oncology, but it can produce errors. Beyond this, DIR accuracy is not a fixed quantity and varies on a case-by-case basis. The purpose of this study is to explore the possibility of an automated program to create a patient- and voxel-specific evaluation of DIR accuracy. AUTODIRECT is a software tool that was developed to perform this evaluation for the application of a clinical DIR algorithm to a set of patient images. In brief, AUTODIRECT uses algorithms to generate deformations and applies them to these images (along with processing) to generate sets of test images, with known deformations that are similar to the actual ones and with realistic noise properties. The clinical DIR algorithm is applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. In this study, four commercially available DIR algorithms were used to deform a dose distribution associated with a virtual pelvic phantom image set, and AUTODIRECT was used to generate dose uncertainty estimates for each deformation. The virtual phantom image set has a known ground-truth deformation, so the true dose-warping errors of the DIR algorithms were also known. AUTODIRECT predicted error patterns that closely matched the actual error spatial distribution. On average AUTODIRECT overestimated the magnitude of the dose errors, but tuning the AUTODIRECT algorithms should improve agreement. This proof-of-principle test demonstrates the potential for the AUTODIRECT algorithm as an empirical method to predict DIR errors.
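
    The final uncertainty step described above can be sketched roughly as follows: given the dose-warping errors observed on a small number of test registrations with known deformations, a per-voxel confidence interval is derived from a Student's t distribution. Generation of the realistic test deformations themselves is not shown, and this is only an illustration of the statistical idea, not the AUTODIRECT implementation.

```python
# Sketch: per-voxel dose-uncertainty estimate from a small set of test
# registrations with known deformations, using a Student's t interval.
# Generation of the realistic test deformations themselves is not shown.
import numpy as np
from scipy import stats

def voxel_uncertainty(test_errors, confidence=0.95):
    """test_errors: array (n_tests, nx, ny, nz) of known dose-warping errors."""
    n = test_errors.shape[0]                      # e.g. 4 test image sets
    mean = test_errors.mean(axis=0)
    sem = test_errors.std(axis=0, ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
    return mean - t_crit * sem, mean + t_crit * sem   # per-voxel confidence bounds
```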

  8. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Background: To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results: This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion: The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  9. An investigation of image compression on NIIRS rating degradation through automated image analysis

    Science.gov (United States)

    Chen, Hua-Mei; Blasch, Erik; Pham, Khanh; Wang, Zhonghai; Chen, Genshe

    2016-05-01

    The National Imagery Interpretability Rating Scale (NIIRS) is a subjective quantification of static image quality widely adopted by the Geographic Information System (GIS) community. Efforts have been made to relate NIIRS image quality to sensor parameters using the general image quality equations (GIQE), which make it possible to automatically predict the NIIRS rating of an image through automated image analysis. In this paper, we present an automated procedure to extract a line edge profile, based on which the NIIRS rating of a given image can be estimated through the GIQEs if the ground sampling distance (GSD) is known. The steps involved include straight edge detection, edge stripe determination, and edge intensity determination, among others. Next, we show how to employ the GIQEs to estimate NIIRS degradation without knowing the ground-truth GSD and investigate the effects of image compression on the degradation of an image's NIIRS rating. Specifically, we consider the JPEG and JPEG2000 image compression standards. Extensive experimental results demonstrate the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting the NIIRS rating.
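
    One of the GIQE inputs mentioned above, the relative edge response (RER), can be approximated from an extracted edge profile as in the sketch below. The profile is assumed to be sampled perpendicular to a detected straight edge at one-pixel spacing; the full GIQE evaluation and the GSD handling from the paper are not reproduced.

```python
# Sketch: relative edge response (RER), one of the GIQE inputs, estimated from
# an edge-spread function sampled across a detected straight edge.
# The full GIQE evaluation and GSD handling from the paper are not reproduced.
import numpy as np

def relative_edge_response(edge_profile):
    """edge_profile: 1-D intensity samples taken perpendicular to a straight edge,
    one sample per pixel, dark side first."""
    esf = edge_profile - edge_profile.min()
    esf = esf / (esf.max() + 1e-12)              # normalized edge-spread function
    center = int(np.argmin(np.abs(esf - 0.5)))   # approximate edge location
    lo, hi = max(center - 1, 0), min(center + 1, len(esf) - 1)
    # Approximate the rise over +/-0.5 pixel as half the rise over +/-1 pixel.
    return float(esf[hi] - esf[lo]) / 2.0
```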

  10. Automated in situ brain imaging for mapping the Drosophila connectome.

    Science.gov (United States)

    Lin, Chi-Wen; Lin, Hsuan-Wen; Chiu, Mei-Tzu; Shih, Yung-Hsin; Wang, Ting-Yuan; Chang, Hsiu-Ming; Chiang, Ann-Shyn

    2015-01-01

    Mapping the connectome, a wiring diagram of the entire brain, requires large-scale imaging of numerous single neurons with diverse morphology. It is a formidable challenge to reassemble these neurons into a virtual brain and correlate their structural networks with neuronal activities, which are measured in different experiments to analyze the informational flow in the brain. Here, we report an in situ brain imaging technique called Fly Head Array Slice Tomography (FHAST), which permits the reconstruction of structural and functional data to generate an integrative connectome in Drosophila. Using FHAST, the head capsules of an array of flies can be opened with a single vibratome sectioning to expose the brains, replacing the painstaking and inconsistent brain dissection process. FHAST can reveal in situ brain neuroanatomy with minimal distortion to neuronal morphology and maintain intact neuronal connections to peripheral sensory organs. Most importantly, it enables the automated 3D imaging of 100 intact fly brains in each experiment. The established head model with in situ brain neuroanatomy allows functional data to be accurately registered and associated with 3D images of single neurons. These integrative data can then be shared, searched, visualized, and analyzed for understanding how brain-wide activities in different neurons within the same circuit function together to control complex behaviors.

  11. Automated pollen identification using microscopic imaging and texture analysis.

    Science.gov (United States)

    Marcos, J Víctor; Nava, Rodrigo; Cristóbal, Gabriel; Redondo, Rafael; Escalante-Ramírez, Boris; Bueno, Gloria; Déniz, Óscar; González-Porto, Amelia; Pardo, Cristina; Chung, François; Rodríguez, Tomás

    2015-01-01

    Pollen identification is required in different scenarios such as prevention of allergic reactions, climate analysis or apiculture. However, it is a time-consuming task since experts are required to recognize each pollen grain through the microscope. In this study, we performed an exhaustive assessment on the utility of texture analysis for automated characterisation of pollen samples. A database composed of 1800 brightfield microscopy images of pollen grains from 15 different taxa was used for this purpose. A pattern recognition-based methodology was adopted to perform pollen classification. Four different methods were evaluated for texture feature extraction from the pollen image: Haralick's gray-level co-occurrence matrices (GLCM), log-Gabor filters (LGF), local binary patterns (LBP) and discrete Tchebichef moments (DTM). Fisher's discriminant analysis and k-nearest neighbour were subsequently applied to perform dimensionality reduction and multivariate classification, respectively. Our results reveal that LGF and DTM, which are based on the spectral properties of the image, outperformed GLCM and LBP in the proposed classification problem. Furthermore, we found that the combination of all the texture features resulted in the highest performance, yielding an accuracy of 95%. Therefore, thorough texture characterisation could be considered in further implementations of automatic pollen recognition systems based on image processing techniques.
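
    As a small illustration of one of the four texture descriptors evaluated above, the sketch below computes Haralick-style GLCM features with scikit-image and trains a k-nearest-neighbour classifier. The log-Gabor, LBP and Tchebichef-moment descriptors and the Fisher discriminant projection used in the study are omitted, and 8-bit grayscale pollen crops are assumed.

```python
# Sketch: Haralick-style GLCM texture features for a pollen grain image plus a
# k-nearest-neighbour classifier. The LGF, LBP and Tchebichef-moment descriptors
# and the Fisher discriminant projection used in the study are omitted.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # greycomatrix in older releases
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(gray_uint8):
    """Texture descriptors from a gray-level co-occurrence matrix of an 8-bit image."""
    glcm = graycomatrix(gray_uint8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_pollen_classifier(images, taxa_labels, k=5):
    X = np.array([glcm_features(img) for img in images])
    return KNeighborsClassifier(n_neighbors=k).fit(X, taxa_labels)
```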

  12. Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System

    Science.gov (United States)

    2015-03-01

    Thesis by Kyle P. Werner, 2Lt, USAF, Air Force Institute of Technology (AFIT-ENG-MS-15-M-048). The work is declared not subject to copyright protection in the United States and is approved for public release; distribution unlimited.

  13. Automated image analysis for space debris identification and astrometric measurements

    Science.gov (United States)

    Piattoni, Jacopo; Ceruti, Alessandro; Piergentili, Fabrizio

    2014-10-01

    Space debris is a challenging problem for human activity in space. Observation campaigns are conducted around the globe to detect and track uncontrolled space objects. One of the main problems in optical observation is obtaining useful information about the dynamical state of the debris from the collected images. For orbit determination, the most relevant information embedded in an optical observation is the precise angular position, which can be evaluated by astrometry procedures that compare the stars inside the image with star catalogs. This is typically a time-consuming process if done by a human operator, which makes the task impractical when dealing with large amounts of data, on the order of thousands of images per night, generated by routinely conducted observations. An automated procedure is investigated in this paper that is capable of recognizing the debris track inside a picture, calculating the celestial coordinates of the image's center and using this information to compute the angular position of the debris in the sky. This procedure has been implemented in a software code that does not require human interaction and works without any supplemental information besides the image itself, detecting space objects and solving for their angular position without a priori information. The algorithm for object detection was developed within the research team. For the star field computation, the software package astrometry.net, released under the GPL v2 license, was used. The complete procedure was validated by extensive testing, using the images obtained in an observation campaign performed in a joint project between the Italian Space Agency (ASI) and the University of Bologna at the Broglio Space Center, Kenya.

  14. Automated microaneurysm detection algorithms applied to diabetic retinopathy retinal images

    Directory of Open Access Journals (Sweden)

    Akara Sopharak

    2013-07-01

    Diabetic retinopathy is the commonest cause of blindness in working-age people. It is characterised and graded by the development of retinal microaneurysms, haemorrhages and exudates. The damage caused by diabetic retinopathy can be prevented if it is treated in its early stages. Therefore, automated early detection can limit the severity of the disease, improve the follow-up management of diabetic patients and assist ophthalmologists in investigating and treating the disease more efficiently. This review focuses on microaneurysm detection as the earliest clinically localised characteristic of diabetic retinopathy, a frequently observed complication in both Type 1 and Type 2 diabetes. Algorithms used for microaneurysm detection from retinal images are reviewed. A number of features used to extract microaneurysms are summarised. Furthermore, a comparative analysis of reported methods used to automatically detect microaneurysms is presented and discussed. The performance of the methods and their complexity are also discussed.

  15. Image auto-zoom technology for AFM automation

    Institute of Scientific and Technical Information of China (English)

    LIU Wen-liang; QIAN Jian-qiang; LI Yuan

    2009-01-01

    For the purpose of atomic force microscope (AFM) automation, we automatically extract the most valuable sub-region of a given AFM image for subsequent scanning, in order to obtain a higher-resolution view of the region of interest. Two objective functions are summarized based on an analysis of how the information content of a sub-region is evaluated, and the corresponding algorithm principles, based on standard deviation and Discrete Cosine Transform (DCT) compression, are derived mathematically. Algorithm realizations are analyzed, and two sub-region selection patterns, fixed-grid mode and sub-region walk mode, are compared. To speed up the DCT compression algorithm, which is too slow for practical application, a new algorithm is proposed based on an analysis of the DCT's block computation; it performs hundreds of times faster than the original. Implementation results of the algorithms prove that this technology can be applied to automatic AFM operation. Finally, the difference between the two objective functions is discussed with detailed computations.
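
    A minimal sketch of the fixed-grid selection mode with the standard-deviation objective is given below: the image is divided into a fixed grid and the sub-region with the largest intensity variation is chosen for re-scanning. The DCT-based objective and its block-wise speed-up are not reproduced here.

```python
# Sketch of the fixed-grid selection mode with the standard-deviation objective:
# the sub-region with the largest intensity variation is chosen for re-scanning.
# The DCT-based objective and its block-wise speed-up are not reproduced here.
import numpy as np

def best_subregion_fixed_grid(afm_image, grid=(4, 4)):
    h, w = afm_image.shape
    bh, bw = h // grid[0], w // grid[1]
    best, best_score = None, -np.inf
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = afm_image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            score = block.std()                  # standard-deviation objective
            if score > best_score:
                best, best_score = (i * bh, j * bw, bh, bw), score
    return best, best_score                      # (row, col, height, width), score
```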

  16. Automated processing of webcam images for phenological classification.

    Science.gov (United States)

    Bothmann, Ludwig; Menzel, Annette; Menze, Bjoern H; Schunk, Christian; Kauermann, Göran

    2017-01-01

    Along with global climate change, there is increasing interest in its effect on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images per day of the same natural scene, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfactory results and allows dates of phenological change points to be determined, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams only. In particular, this makes it impossible to apply the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. In order to be able to scale up the analysis to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big-data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the statistical software
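
    The sketch below illustrates the percentage-greenness index and the semi-supervised pixel selection described above: pixels are kept when their greenness time series correlates strongly with a few hand-picked prototype pixels. The unsupervised SVD-based variant is omitted, and the threshold value is an illustrative assumption.

```python
# Sketch: per-pixel percentage greenness for a webcam image series, and the
# semi-supervised ROI selection that keeps pixels whose greenness time series
# correlates strongly with a few expert-chosen prototype pixels.
# The unsupervised SVD-based variant described above is omitted.
import numpy as np

def percentage_greenness(rgb_series):
    """rgb_series: array (time, height, width, 3) of RGB webcam frames."""
    rgb = rgb_series.astype(float)
    return rgb[..., 1] / (rgb.sum(axis=-1) + 1e-6)      # G / (R + G + B)

def roi_by_correlation(greenness, prototype_pixels, threshold=0.8):
    """prototype_pixels: list of (row, col) picked on vegetated image parts."""
    t, h, w = greenness.shape
    series = greenness.reshape(t, -1)
    proto = np.mean([greenness[:, r, c] for r, c in prototype_pixels], axis=0)
    proto = (proto - proto.mean()) / (proto.std() + 1e-12)
    z = (series - series.mean(axis=0)) / (series.std(axis=0) + 1e-12)
    corr = (z * proto[:, None]).mean(axis=0)             # Pearson correlation per pixel
    return (corr >= threshold).reshape(h, w)
```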

  17. Automated determination of spinal centerline in CT and MR images

    Science.gov (United States)

    Štern, Darko; Vrtovec, Tomaž; Pernuš, Franjo; Likar, Boštjan

    2009-02-01

    The spinal curvature is one of the most important parameters for the evaluation of spinal deformities. The spinal centerline, represented by the curve that passes through the centers of the vertebral bodies in three dimensions (3D), allows valid quantitative measurements of the spinal curvature at any location along the spine. We propose a novel automated method for the determination of the spinal centerline in 3D spine images. Our method exploits the anatomical property that the vertebral body walls are cylindrically shaped, and therefore the lines normal to the edges of the vertebral body walls most often intersect in the middle of the vertebral bodies, i.e. at the location of the spinal centerline. These points of intersection are first obtained by a novel algorithm that performs a selective search in the directions normal to the edges of the structures and are then connected with a parametric curve that represents the spinal centerline in 3D. As the method is based on properties of 3D spine anatomy, it is modality-independent, i.e. applicable to images obtained by computed tomography (CT) and magnetic resonance (MR) imaging. The proposed method was evaluated on six CT and four MR images (T1- and T2-weighted) of normal spines and on one scoliotic CT spine image. The qualitative and quantitative results for the normal spines show that the spinal centerline can be successfully determined in both CT and MR spine images, while the results for the scoliotic spine indicate that the method may also be used to evaluate pathological curvatures.

  18. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    Science.gov (United States)

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.

  19. Automated indexing of Laue images from polycrystalline materials

    Energy Technology Data Exchange (ETDEWEB)

    Chung, J.S.; Ice, G.E. [Oak Ridge National Lab., TN (United States). Metals and Ceramics Div.

    1998-12-31

    Third generation hard x-ray synchrotron sources and new x-ray optics have revolutionized x-ray microbeams. Now intense sub-micron x-ray beams are routinely available for x-ray diffraction measurement. An important application of sub-micron x-ray beams is analyzing polycrystalline material by measuring the diffraction of individual grains. For these measurements, conventional analysis methods will not work. The most suitable method for microdiffraction on polycrystalline samples is taking broad-bandpass or white-beam Laue images. With this method, the crystal orientation and non-isostatic strain can be measured rapidly without rotation of sample or detector. The essential step is indexing the reflections from more than one grain. An algorithm has recently been developed to index broad bandpass Laue images from multi-grain samples. For a single grain, a unique set of indices is found by comparing measured angles between Laue reflections and angles between possible indices derived from the x-ray energy bandpass and the scattering angle 2 theta. This method has been extended to multigrain diffraction by successively indexing points not recognized in preceding indexing iterations. This automated indexing method can be used in a wide range of applications.

  20. Automated processing of webcam images for phenological classification

    Science.gov (United States)

    Bothmann, Ludwig; Menzel, Annette; Menze, Bjoern H.; Schunk, Christian; Kauermann, Göran

    2017-01-01

    Along with global climate change, there is increasing interest in its effect on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images every day of the same natural motif showing, for example, trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfying results and allows dates of phenological change points to be determined, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams only. In particular, this precludes applying the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. To be able to scale up the analysis to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13,988 webcams from the AMOS database. All developed methods are implemented in the statistical software
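
    A minimal sketch of the semi-supervised region-of-interest selection described in this record, not the authors' implementation: pixels whose percentage-greenness time series correlate strongly with a few hand-picked prototype pixels form the ROI. The array layout, greenness definition, and correlation threshold are assumptions for illustration.

    ```python
    import numpy as np

    def percentage_greenness(frames):
        """Per-pixel greenness G / (R + G + B) for every frame; frames has shape (T, H, W, 3)."""
        frames = frames.astype(float)
        rgb_sum = frames.sum(axis=3) + 1e-9            # avoid division by zero
        return frames[..., 1] / rgb_sum                # shape (T, H, W)

    def semi_supervised_roi(frames, prototypes, corr_threshold=0.8):
        """prototypes: list of (row, col) pixels chosen by the user on the first image."""
        green = percentage_greenness(frames)
        T, H, W = green.shape
        series = green.reshape(T, -1)                  # one greenness time series per pixel
        proto = np.stack([green[:, r, c] for r, c in prototypes], axis=1).mean(axis=1)

        # Pearson correlation of every pixel series with the mean prototype series
        series_c = series - series.mean(axis=0)
        proto_c = proto - proto.mean()
        denom = np.linalg.norm(series_c, axis=0) * np.linalg.norm(proto_c) + 1e-9
        corr = series_c.T @ proto_c / denom

        return (corr >= corr_threshold).reshape(H, W)  # boolean ROI mask
    ```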

  1. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Full Text Available Due to the high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have developed in recent years. Nanofibers and nanosurfaces are typically produced by electrospinning method. In the production process, determination of average fiber diameter is crucial for quality assessment. Average fiber diameter is determined by manually measuring the diameters of randomly selected fibers on scanning electron microscopy (SEM images. However, as the number of the images increases, manual fiber diameter determination becomes a tedious and time consuming task as well as being sensitive to human errors. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved by using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation using image processing and analysis algorithms. Performance of the proposed method was tested on real data. The effectiveness of the proposed method is shown by comparing automatically and manually measured nanofiber diameter values.

  2. Automated detection of a prostate Ni-Ti stent in electronic portal images

    DEFF Research Database (Denmark)

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane

    2006-01-01

    of a thermo-expandable Ni-Ti stent. The current study proposes a new detection algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm is based on the Ni-Ti stent having a cylindrical shape with a fixed diameter, which was used as the basis for an automated detection...

  3. Automated Detection of Firearms and Knives in a CCTV Image.

    Science.gov (United States)

    Grega, Michał; Matiolański, Andrzej; Guzik, Piotr; Leszczuk, Mikołaj

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  4. Automated Detection of Firearms and Knives in a CCTV Image

    Directory of Open Access Journals (Sweden)

    Michał Grega

    2016-01-01

    Full Text Available Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  5. Application of automated image analysis to coal petrography

    Science.gov (United States)

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    The coal petrologist seeks to determine the petrographic characteristics of organic and inorganic coal constituents and their lateral and vertical variations within a single coal bed or different coal beds of a particular coal field. Definitive descriptions of coal characteristics and coal facies provide the basis for interpretation of depositional environments, diagenetic changes, and burial history and determination of the degree of coalification or metamorphism. Numerous coal core or columnar samples must be studied in detail in order to adequately describe and define coal microlithotypes, lithotypes, and lithologic facies and their variations. The large amount of petrographic information required can be obtained rapidly and quantitatively by use of an automated image-analysis system (AIAS). An AIAS can be used to generate quantitative megascopic and microscopic modal analyses for the lithologic units of an entire columnar section of a coal bed. In our scheme for megascopic analysis, distinctive bands 2 mm or more thick are first demarcated by visual inspection. These bands consist of either nearly pure microlithotypes or lithotypes such as vitrite/vitrain or fusite/fusain, or assemblages of microlithotypes. Megascopic analysis with the aid of the AIAS is next performed to determine volume percentages of vitrite, inertite, minerals, and microlithotype mixtures in bands 0.5 to 2 mm thick. The microlithotype mixtures are analyzed microscopically by use of the AIAS to determine their modal composition in terms of maceral and optically observable mineral components. Megascopic and microscopic data are combined to describe the coal unit quantitatively in terms of (V) for vitrite, (E) for liptite, (I) for inertite or fusite, (M) for mineral components other than iron sulfide, (S) for iron sulfide, and (VEIM) for the composition of the mixed phases (Xi) i = 1,2, etc. in terms of the maceral groups vitrinite V, exinite E, inertinite I, and optically observable mineral

  6. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.

  7. Research on the Application of Content-based Image Retrieval Technology in Shopping Websites

    Institute of Scientific and Technical Information of China (English)

    张薷; 李玉海

    2012-01-01

    Through an analysis of the current situation and existing problems of text-based information retrieval on e-commerce shopping websites, and in combination with the characteristics of the virtual shopping platform, this paper proposes the application of content-based image retrieval technology to shopping websites. It further analyzes the characteristics and methods of content-based image retrieval technology as well as the retrieval matching process used in shopping websites.

  8. Automated Imaging System for Pigmented Skin Lesion Diagnosis

    Directory of Open Access Journals (Sweden)

    Mariam Ahmed Sheha

    2016-10-01

    Full Text Available Through the study of pigmented skin lesion risk factors, the possible appearance of malignant melanoma makes the anomalous occurrence of these lesions a worrying sign. The difficulty of differentiating between malignant melanoma and melanocytic nevi is an error-prone problem that physicians usually face during diagnosis. To address the hard task of pigmented skin lesion diagnosis, different clinical diagnosis algorithms have been proposed, such as pattern analysis, the ABCD rule of dermoscopy, the Menzies method, and the 7-point checklist. Computerized application of these algorithms improves the diagnosis of melanoma compared with the physician's simple naked-eye examination. Toward the serious step of early melanoma detection, aiming to reduce the melanoma mortality rate, several computerized studies and procedures have been proposed. In this research, different approaches with a large number of features are discussed to point out the best approach or methodology to follow for accurately diagnosing pigmented skin lesions. This paper proposes an automated system for the diagnosis of melanoma that provides quantitative and objective evaluation of the skin lesion, as opposed to visual assessment, which is subjective in nature. Two different data sets were utilized to reduce the effect of the qualitative interpretation problem on accurate diagnosis: a set of clinical images acquired from a standard camera, and another set acquired from a special dermoscopic camera and hence named dermoscopic images. The system's contribution appears in the new, complete, and different approaches presented for the aim of pigmented skin lesion diagnosis. These approaches result from using a large conclusive set of features fed to different classifiers. The three main types of features extracted from the region of interest are geometric, chromatic, and texture features. Three statistical methods were proposed to select the most significant features that will cause a valuable effect in

  9. Automated Identification of Rivers and Shorelines in Aerial Imagery Using Image Texture

    Science.gov (United States)

    2011-01-01

    defining the criteria for segmenting the image. For these cases certain automated, unsupervised (or minimally supervised) image classification ... banks, image analysis, edge finding, photography, satellite, texture, entropy ... high resolution bank geometry. Much of the globe is covered by various sorts of multi- or hyperspectral imagery and numerous techniques have been

  10. Research on Content Based Image Spam Filtering Technology

    Institute of Scientific and Technical Information of China (English)

    刘艳洋; 曹玉东; 贾旭

    2014-01-01

    A filtering method for image spam is proposed. The method does not rely on the text information attached to the image; instead, it directly extracts the visual features of the image itself, including the gradient histogram, color histogram, and LBP features. The support vector machine (SVM) algorithm is analyzed, and image spam filtering is implemented based on this algorithm. The experimental results show that the LBP features achieve better recognition performance than the gradient histogram and color histogram features.
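
    Not the authors' code: a sketch of the feature pipeline this record describes (gradient histogram, color histogram, and LBP features feeding an SVM), using scikit-image and scikit-learn. The resize target, histogram bin counts, LBP parameters, and SVM settings are assumptions chosen only for illustration.

    ```python
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.feature import hog, local_binary_pattern
    from skimage.transform import resize
    from sklearn.svm import SVC

    def spam_features(image_rgb):
        """Concatenate gradient (HOG), colour-histogram and LBP texture features."""
        gray = resize(rgb2gray(image_rgb), (128, 128))   # fixed size -> fixed-length HOG
        f_hog = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                    cells_per_block=(2, 2), feature_vector=True)
        f_color, _ = np.histogram(image_rgb, bins=48, range=(0, 255), density=True)
        lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
        f_lbp, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([f_hog, f_color, f_lbp])

    def train_filter(images, labels):
        """images: list of RGB arrays; labels: 1 = spam, 0 = legitimate (assumed data)."""
        feats = np.array([spam_features(im) for im in images])
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        clf.fit(feats, labels)
        return clf
    ```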

  11. Grid computing in the optimization of content-based medical image retrieval

    Directory of Open Access Journals (Sweden)

    Marcelo Costa Oliveira

    2007-08-01

    OBJECTIVE: To utilize grid computing technology to enable the use of a similarity measurement algorithm for content-based medical image retrieval. MATERIALS AND METHODS: The content-based image retrieval technique comprises two sequential steps: texture analysis and a similarity measurement algorithm. These steps were applied to head and knee images to evaluate the accuracy of retrieval of images of a single plane and acquisition sequence from a databank with 2,400 medical images. Initially, texture analysis was utilized as a preselection resource to obtain a set of the 1,000 images most similar to a reference image selected by a clinician. Then, these 1,000 images were processed utilizing a similarity measurement algorithm on a computational grid. RESULTS: The texture analysis demonstrated low accuracy for sagittal knee images (0.54) and axial head images (0.40). Nevertheless, this technique was effective as a filter, pre-selecting images to be evaluated by the similarity measurement algorithm. Content-based image retrieval with the similarity measurement algorithm applied to these pre-selected images demonstrated satisfactory accuracy: 0.95 for sagittal knee images and 0.92 for axial head images. The high computational cost of the similarity measurement algorithm was balanced by the utilization of grid computing. CONCLUSION: The approach combining texture analysis and a similarity measurement algorithm for content-based image retrieval resulted in an accuracy of > 90%. Grid computing has been shown to be essential for the utilization of the similarity measurement algorithm in content-based image retrieval that otherwise would be limited to supercomputers.
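
    A rough sketch of the two sequential steps this record describes: a cheap texture descriptor pre-selects the 1,000 most similar images, and a costlier similarity measure re-ranks only that subset (the part that was distributed on the grid). Both the descriptor and the similarity function below are stand-ins under the assumption of equal-sized grayscale images, not the paper's algorithms.

    ```python
    import numpy as np

    def texture_descriptor(image):
        # stand-in texture signature: normalised gray-level histogram
        hist, _ = np.histogram(image, bins=64, density=True)
        return hist

    def fine_similarity(a, b):
        # stand-in for the costly similarity measurement (assumes equal-sized images)
        a = a.astype(float).ravel() - a.mean()
        b = b.astype(float).ravel() - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def retrieve(query, database, preselect=1000):
        q = texture_descriptor(query)
        coarse = np.argsort([np.abs(texture_descriptor(im) - q).sum() for im in database])
        candidates = coarse[:preselect]                      # texture-based pre-selection
        ranked = sorted(candidates, key=lambda i: -fine_similarity(query, database[i]))
        return ranked                                        # indices, most similar first
    ```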

  12. A semi-automated image analysis procedure for in situ plankton imaging systems.

    Science.gov (United States)

    Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M

    2015-01-01

    Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large amount of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired from laboratory, controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects ( 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed
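
    A sketch of the two segmentation paths described above: maximally stable extremal regions (MSER) for large gelatinous targets and local adaptive thresholding for small organisms such as copepods, using OpenCV. The area limits, block size, and morphological cleanup are assumptions chosen for illustration only.

    ```python
    import cv2
    import numpy as np

    def segment_plankton(gray):
        """gray: 8-bit single-channel frame from the imaging system."""
        # Large gelatinous zooplankton via MSER
        mser = cv2.MSER_create()
        mser.setMinArea(2000)
        mser.setMaxArea(100000)
        regions, _ = mser.detectRegions(gray)
        large_mask = np.zeros_like(gray)
        for pts in regions:
            cv2.fillPoly(large_mask, [pts.reshape(-1, 1, 2)], 255)

        # Small organisms via local adaptive threshold (copes with uneven illumination)
        small_mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                           cv2.THRESH_BINARY_INV, 51, 5)
        small_mask = cv2.morphologyEx(small_mask, cv2.MORPH_OPEN,
                                      np.ones((3, 3), np.uint8))
        return large_mask, small_mask
    ```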

  13. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    Science.gov (United States)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator-dependent and time-consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
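
    For reference, a standalone sketch of the Ridler (iterative intermeans) algorithm named above: the threshold is repeatedly set to the midpoint of the mean intensities below and above the current threshold until it stabilizes. Applying it directly to a PET volume, as below, is an assumption about usage rather than the paper's exact pipeline.

    ```python
    import numpy as np

    def ridler_threshold(volume, tol=1e-3, max_iter=100):
        """Iterative intermeans (Ridler-Calvard) threshold on any intensity array."""
        data = np.asarray(volume, dtype=float).ravel()
        t = data.mean()                          # initial guess
        for _ in range(max_iter):
            low, high = data[data <= t], data[data > t]
            if low.size == 0 or high.size == 0:
                break
            t_new = 0.5 * (low.mean() + high.mean())
            if abs(t_new - t) < tol:
                return t_new
            t = t_new
        return t

    # usage sketch: segmentation = volume > ridler_threshold(volume)
    ```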

  14. AMIsurvey, chimenea and other tools: Automated imaging for transient surveys with existing radio-observatories

    CERN Document Server

    Staley, Tim D

    2015-01-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, making use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. These packages...

  15. Automated interpretation of PET/CT images in patients with lung cancer

    DEFF Research Database (Denmark)

    Gutte, Henrik; Jakobsson, David; Olofsson, Fredrik

    2007-01-01

    PURPOSE: To develop a completely automated method based on image processing techniques and artificial neural networks for the interpretation of combined [(18)F]fluorodeoxyglucose (FDG) positron emission tomography (PET) and computed tomography (CT) images for the diagnosis and staging of lung...... for localization of lesions in the PET images in the feature extraction process. Eight features from each examination were used as inputs to artificial neural networks trained to classify the images. Thereafter, the performance of the network was evaluated in the test set. RESULTS: The performance of the automated...... method measured as the area under the receiver operating characteristic curve, was 0.97 in the test group, with an accuracy of 92%. The sensitivity was 86% at a specificity of 100%. CONCLUSIONS: A completely automated method using artificial neural networks can be used to detect lung cancer...

  16. Extending and applying active appearance models for automated, high precision segmentation in different image modalities

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Fisker, Rune; Ersbøll, Bjarne Kjær

    2001-01-01

    , an initialization scheme is designed thus making the usage of AAMs fully automated. Using these extensions it is demonstrated that AAMs can segment bone structures in radiographs, pork chops in perspective images and the left ventricle in cardiovascular magnetic resonance images in a robust, fast and accurate...

  17. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    Science.gov (United States)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  18. Automated recognition of cell phenotypes in histology images based on membrane- and nuclei-targeting biomarkers

    Directory of Open Access Journals (Sweden)

    Tözeren Aydın

    2007-09-01

    Full Text Available Abstract Background Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross section images. Methods Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin and progesterone receptor, were digitized and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Results Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR, accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad and PR-stained cross section images. Conclusion The proposed image analysis methods offer standardized high throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclei markers, suitable for rapid, large-scale investigations of anti-cancer compounds for drug development.
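
    A minimal sketch of the pixel-classification step this record describes: RGB pixels of a digitized cross section are grouped into five categories with k-means, and a mask is then built for any chosen category. The number of classes matches the abstract; the color space and the mapping from cluster to biomarker are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def classify_pixels(image_rgb, n_classes=5, seed=0):
        """Cluster every RGB pixel into n_classes categories; returns label map and centres."""
        h, w, _ = image_rgb.shape
        pixels = image_rgb.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit(pixels)
        return km.labels_.reshape(h, w), km.cluster_centers_

    def mask_for_class(labels, class_id):
        """Boolean mask of the image region assigned to one (biomarker-defined) class."""
        return labels == class_id
    ```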

  19. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology

    Directory of Open Access Journals (Sweden)

    Mohendra Roy

    2016-05-01

    Full Text Available Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects and a global threshold was used to locate the diffraction patterns. In this work we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive threshold and clustering of signals. For this purpose, images from the lens-free system were then used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system. Next, all the lens-free images were processed using this custom-developed automated algorithm. The performance of this approach was evaluated by comparing the counting results with standard optical microscope results. We evaluated the counting results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cells. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, possesses great potential for telemedicine applications in resource-limited settings.

  20. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features...... of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches....

  1. A semi-automated image analysis procedure for in situ plankton imaging systems.

    Directory of Open Access Journals (Sweden)

    Hongsheng Bi

    Full Text Available Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large amount of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired from laboratory, controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects ( 95%. First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that

  2. Digital transplantation pathology: combining whole slide imaging, multiplex staining and automated image analysis.

    Science.gov (United States)

    Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J

    2012-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes.

  3. A fully automated method for quantifying and localizing white matter hyperintensities on MR images.

    Science.gov (United States)

    Wu, Minjie; Rosano, Caterina; Butters, Meryl; Whyte, Ellen; Nable, Megan; Crooks, Ryan; Meltzer, Carolyn C; Reynolds, Charles F; Aizenstein, Howard J

    2006-12-01

    White matter hyperintensities (WMH), commonly found on T2-weighted FLAIR brain MR images in the elderly, are associated with a number of neuropsychiatric disorders, including vascular dementia, Alzheimer's disease, and late-life depression. Previous MRI studies of WMHs have primarily relied on the subjective and global (i.e., full-brain) ratings of WMH grade. In the current study we implement and validate an automated method for quantifying and localizing WMHs. We adapt a fuzzy-connected algorithm to automate the segmentation of WMHs and use a demons-based image registration to automate the anatomic localization of the WMHs using the Johns Hopkins University White Matter Atlas. The method is validated using the brain MR images acquired from eleven elderly subjects with late-onset late-life depression (LLD) and eight elderly controls. This dataset was chosen because LLD subjects are known to have significant WMH burden. The volumes of WMH identified in our automated method are compared with the accepted gold standard (manual ratings), and a significant correlation of the automated method and the manual ratings is found. Consistent with a previous study of WMHs in late-life depression [Progress in Neuro-Psychopharmacology and Biological Psychiatry 27 (3), 539-544], we found there was a significantly greater WMH burden in the LLD subjects versus the controls for both the manual and automated method. The effect size was greater for the automated method, suggesting that it is a more specific measure. Additionally, we describe the anatomic localization of the WMHs in LLD subjects as well as in the control subjects, and detect the regions of interest (ROIs) specific for the WMH burden of LLD patients. Given the emergence of large NeuroImage databases, techniques such as that described here will allow for a better understanding of the relationship between WMHs and neuropsychiatric disorders.

  4. Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence

    Science.gov (United States)

    Beijbom, Oscar; Treibitz, Tali; Kline, David I.; Eyal, Gal; Khen, Adi; Neal, Benjamin; Loya, Yossi; Mitchell, B. Greg; Kriegman, David

    2016-03-01

    Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data-products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction of classification error rate when using both image types compared to only using reflectance images. The improvements were large, in particular, for coral reef genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck.

  5. A method for fast automated microscope image stitching.

    Science.gov (United States)

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapped areas. In many biomedical researches, image stitching is highly desirable to acquire a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal light microscope image stitching algorithm based on feature extraction. At first, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched with a short time and higher repeatability. Then, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast for extracting more features. Thirdly, the rough overlapping zones of the images preprocessed were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourthly, the features were corresponded by matching algorithm and the transformation parameters were estimated, then the images were blended seamlessly. Finally, this procedure was applied to stitch normal light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images and our method is able to stitch microscope images automatically with high precision and high speed. Also, the method proposed in this paper is applicable to registration and stitching of common images as well as stitching the microscope images in the field of virtual microscope for the purpose of observing, exchanging, saving, and establishing a database of microscope images.
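
    A sketch of two steps from the pipeline described above: histogram equalization as preprocessing and phase correlation to estimate the rough overlap (integer translation) between two adjacent microscope tiles. Feature extraction, matching, and blending are omitted; equal-sized tiles and the epsilon guard are assumptions.

    ```python
    import numpy as np
    from skimage import exposure

    def rough_offset(tile_a, tile_b):
        """Estimate the integer (dy, dx) shift of tile_b relative to tile_a (same shape)."""
        a = exposure.equalize_hist(tile_a.astype(float))
        b = exposure.equalize_hist(tile_b.astype(float))
        # Normalised cross-power spectrum; its inverse FFT peaks at the relative shift
        fa, fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = fa * np.conj(fb)
        cross /= np.abs(cross) + 1e-12
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap-around: shifts larger than half the image size are actually negative
        if dy > a.shape[0] // 2:
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return dy, dx
    ```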

  6. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    Science.gov (United States)

    Karagiannis, Georgios; Antón Castro, Francesc; Mioc, Darka

    2016-06-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detected are invariant to image rotations, translations, scaling and also to changes in illumination, brightness and 3-dimensional viewpoint. Afterwards, each feature of the reference image is matched with one in the sensed image if, and only if, the distance between them multiplied by a threshold is shorter than the distances between the point and all the other points in the sensed image. Then, the matched features are used to compute the parameters of the homography that transforms the coordinate system of the sensed image to the coordinate system of the reference image. The Delaunay triangulations of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches.
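
    An illustrative sketch of the matching chain this record describes: SIFT keypoints, ratio-test matching, homography estimation, and Delaunay triangulations of the matched point sets whose structure can then be compared. The ratio and RANSAC thresholds are assumptions, and the original implementation was in Matlab rather than OpenCV.

    ```python
    import cv2
    import numpy as np
    from scipy.spatial import Delaunay

    def match_and_register(reference, sensed):
        """reference, sensed: 8-bit grayscale images."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(reference, None)
        kp2, des2 = sift.detectAndCompute(sensed, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # ratio test

        src = np.float32([kp2[m.trainIdx].pt for m in good])   # points in the sensed image
        dst = np.float32([kp1[m.queryIdx].pt for m in good])   # points in the reference image
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Delaunay triangulations of both matched point sets for a structural quality check
        tri_ref, tri_sen = Delaunay(dst), Delaunay(src)
        return H, inliers, tri_ref, tri_sen
    ```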

  7. Microscopic images dataset for automation of RBCs counting.

    Science.gov (United States)

    Abbas, Sherif

    2015-12-01

    A method for Red Blood Corpuscle (RBC) counting has been developed using RBC light microscopic images and a Matlab algorithm. The dataset consists of Red Blood Corpuscle (RBC) images and their corresponding segmented RBC images. A detailed description using a flow chart is given in order to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.
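
    A minimal sketch of counting cells from a binary mask such as the one in this dataset, using connected-component labelling; the minimum-area filter is an assumed value to suppress debris, and touching cells are counted as one component in this simple form.

    ```python
    from skimage import measure

    def count_rbcs(mask, min_area=50):
        """Count connected foreground regions in a binary RBC mask."""
        labels = measure.label(mask > 0)
        regions = [r for r in measure.regionprops(labels) if r.area >= min_area]
        return len(regions)
    ```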

  8. Automated quadrilateral mesh generation for digital image structures

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    With the development of advanced imaging technology, digital images are widely used. This paper proposes an automatic quadrilateral mesh generation algorithm for multi-colour imaged structures. It takes an original arbitrary digital image as an input for automatic quadrilateral mesh generation; this includes removing the noise, extracting and smoothing the boundary geometries between different colours, and automatic all-quad mesh generation with the above boundaries as constraints. An application example is...

  9. Microscopic images dataset for automation of RBCs counting

    Directory of Open Access Journals (Sweden)

    Sherif Abbas

    2015-12-01

    Full Text Available A method for Red Blood Corpuscle (RBC) counting has been developed using RBC light microscopic images and a Matlab algorithm. The dataset consists of Red Blood Corpuscle (RBC) images and their corresponding segmented RBC images. A detailed description using a flow chart is given in order to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.

  10. An automated detection for axonal boutons in vivo two-photon imaging of mouse

    Science.gov (United States)

    Li, Weifu; Zhang, Dandan; Xie, Qiwei; Chen, Xi; Han, Hua

    2017-02-01

    Activity-dependent changes in the synaptic connections of the brain are tightly related to learning and memory. Previous studies have shown that essentially all new synaptic contacts were made by adding new partners to existing synaptic elements. To further explore synaptic dynamics in specific pathways, concurrent imaging of pre- and postsynaptic structures in identified connections is required. Consequently, considerable attention has been paid to the automated detection of axonal boutons. Unlike most previous methods, which were proposed for in vitro data, this paper considers the more practical case of in vivo neuron images, which can provide real-time information and direct observation of the dynamics of a disease process in the mouse. We present an automated approach for detecting axonal boutons that starts by deconvolving the original images, then thresholds the enhanced images, and finally retains the regions fulfilling a series of criteria. Experimental results on in vivo two-photon imaging of the mouse demonstrate the effectiveness of our proposed method.
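
    A sketch following the three stages named above: deconvolve the two-photon frame, threshold the enhanced image, and keep only regions that satisfy simple size criteria. The point spread function, the Otsu threshold rule, and the area limits are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np
    from skimage import restoration, filters, measure

    def detect_boutons(frame, psf, min_area=5, max_area=200):
        """Return centroids of candidate boutons in one two-photon frame."""
        frame = frame.astype(float)
        frame /= frame.max() + 1e-9                      # scale to [0, 1] for deconvolution
        enhanced = restoration.richardson_lucy(frame, psf, 10)
        binary = enhanced > filters.threshold_otsu(enhanced)
        labels = measure.label(binary)
        return [r.centroid for r in measure.regionprops(labels)
                if min_area <= r.area <= max_area]
    ```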

  11. A review of automated image understanding within 3D baggage computed tomography security screening.

    Science.gov (United States)

    Mouton, Andre; Breckon, Toby P

    2015-01-01

    Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is however, complicated by poor image resolutions, image clutter and high levels of noise and artefacts. We discuss the recent and most pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.

  12. Automated quantification of budding Saccharomyces cerevisiae using a novel image cytometry method.

    Science.gov (United States)

    Laverty, Daniel J; Kury, Alexandria L; Kuksin, Dmitry; Pirani, Alnoor; Flanagan, Kevin; Chan, Leo Li-Ying

    2013-06-01

    The measurements of concentration, viability, and budding percentages of Saccharomyces cerevisiae are performed on a routine basis in the brewing and biofuel industries. Generation of these parameters is of great importance in a manufacturing setting, where they can aid in the estimation of product quality, quantity, and fermentation time of the manufacturing process. Specifically, budding percentages can be used to estimate the reproduction rate of yeast populations, which directly correlates with metabolism of polysaccharides and bioethanol production, and can be monitored to maximize production of bioethanol during fermentation. The traditional method involves manual counting using a hemacytometer, but this is time-consuming and prone to human error. In this study, we developed a novel automated method for the quantification of yeast budding percentages using Cellometer image cytometry. The automated method utilizes a dual-fluorescent nucleic acid dye to specifically stain live cells for imaging analysis of unique morphological characteristics of budding yeast. In addition, cell cycle analysis is performed as an alternative method for budding analysis. We were able to show comparable yeast budding percentages between manual and automated counting, as well as cell cycle analysis. The automated image cytometry method is used to analyze and characterize corn mash samples directly from fermenters during standard fermentation. Since concentration, viability, and budding percentages can be obtained simultaneously, the automated method can be integrated into the fermentation quality assurance protocol, which may improve the quality and efficiency of beer and bioethanol production processes.

  13. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    Science.gov (United States)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool to detect cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, the automated FISH image scanning systems have been developed. Due to limited focal depth of scanned microscopic image, a FISH-probed specimen needs to be scanned in multiple layers that generate huge image data. To improve diagnostic efficiency of using automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, which represent images of interphase cells and FISH-probed chromosome X. During image scanning, once detecting a cell signal, system captured nine image slides by automatically adjusting optical focus. Based on the sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D con-focal image. CAD scheme was applied to each con-focal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm and detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. In four scanned specimen slides, CAD generated 1676 con-focal images that depicted analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and an observer. The Kappa coefficients for agreement between CAD and observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cyto-geneticists in detecting cervical cancers.
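
    A sketch of the FISH-signal detection step mentioned above: a white top-hat transform suppresses slowly varying background so that small bright spots can be thresholded and counted in the projected con-focal image. The structuring-element radius and the mean-plus-three-sigma threshold are assumptions, not the paper's parameters.

    ```python
    from skimage import morphology, measure

    def detect_fish_spots(confocal_img, selem_radius=5):
        """Return centroids of bright FISH-probed spots in a 2-D projected image."""
        tophat = morphology.white_tophat(confocal_img, morphology.disk(selem_radius))
        spots = tophat > tophat.mean() + 3 * tophat.std()   # keep only strong residual peaks
        labels = measure.label(spots)
        return [r.centroid for r in measure.regionprops(labels)]
    ```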

  14. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    Science.gov (United States)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2010-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may threaten the endothelial cell density to such an extent that the optical property of the cornea and thus clear eyesight is threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need of operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing lights and contrasts. The digitally enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT) and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image was used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well known diffraction theory. Results in form of estimated cell density of the corneal endothelium were obtained, using fully automated analysis software on 292 images captured by CSM. The cell density obtained by the
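
    A rough sketch, under stated assumptions, of estimating mean cell density from the ring that a regular endothelial mosaic produces in the Fourier spectrum: the radial power spectrum is averaged, its peak radius gives the dominant spatial frequency, and density follows from the corresponding cell spacing if a roughly hexagonal mosaic is assumed. The square-image requirement, pixel size, and hexagonal packing factor are assumptions, not values from the paper.

    ```python
    import numpy as np

    def cell_density_from_fft(image, pixel_size_mm):
        """Estimate endothelial cell density (cells per mm^2) from a square image."""
        img = image.astype(float) - image.mean()
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        h, w = power.shape
        yy, xx = np.indices(power.shape)
        r = np.hypot(yy - h // 2, xx - w // 2).astype(int)

        radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
        peak = radial[2:].argmax() + 2             # skip the DC neighbourhood
        freq = peak / (h * pixel_size_mm)          # cycles per mm
        spacing_mm = 1.0 / freq                    # mean centre-to-centre cell distance
        return 2.0 / (np.sqrt(3.0) * spacing_mm ** 2)   # hexagonal-packing density
    ```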

  15. Advanced automated gain adjustments for in-vivo ultrasound imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo

    2015-01-01

    Automatic gain adjustments are necessary on the state-of-the-art ultrasound scanners to obtain optimal scan quality, while reducing the unnecessary user interactions with the scanner. However, when large anechoic regions exist in the scan plane, the sudden and drastic variation of attenuations...... in the scanned media complicates the gain compensation. This paper presents an advanced and automated gain adjustment method that precisely compensate for the gains on scans and dynamically adapts to the drastic attenuation variations between different media. The proposed algorithm makes use of several...

  16. Novel automated motion compensation technique for producing cumulative maximum intensity subharmonic images.

    Science.gov (United States)

    Dave, Jaydev K; Forsberg, Flemming

    2009-09-01

    The aim of this study was to develop a novel automated motion compensation algorithm for producing cumulative maximum intensity (CMI) images from subharmonic imaging (SHI) of breast lesions. SHI is a nonlinear contrast-specific ultrasound imaging technique in which pulses are received at half the frequency of the transmitted pulses. A Logiq 9 scanner (GE Healthcare, Milwaukee, WI, USA) was modified to operate in grayscale SHI mode (transmitting/receiving at 4.4/2.2 MHz) and used to scan 14 women with 16 breast lesions. Manual CMI images were reconstructed by temporal maximum-intensity projection of pixels traced from the first frame to the last. In the new automated technique, the user selects a kernel in the first frame and the algorithm then uses the sum of absolute difference (SAD) technique to identify motion-induced displacements in the remaining frames. A reliability parameter was used to estimate the accuracy of the motion tracking based on the ratio of the minimum SAD to the average SAD. Two thresholds (the mean and 85% of the mean reliability parameter) were used to eliminate images plagued by excessive motion and/or noise. The automated algorithm was compared with the manual technique for computational time, correction of motion artifacts, removal of noisy frames and quality of the final image. The automated algorithm compensated for motion artifacts and noisy frames. The computational time was 2 min compared with 60-90 minutes for the manual method. The quality of the motion-compensated CMI-SHI images generated by the automated technique was comparable to the manual method and provided a snapshot of the microvasculature showing interconnections between vessels, which was less evident in the original data. In conclusion, an automated algorithm for producing CMI-SHI images has been developed. It eliminates the need for manual processing and yields reproducible images, thereby increasing the throughput and efficiency of reconstructing CMI-SHI images. The
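
    A sketch of the motion-tracking core described above: the user-selected kernel from the first frame is located in a later frame by minimizing the sum of absolute differences (SAD), and a reliability ratio (minimum SAD over average SAD) flags frames where tracking is doubtful. The search range and the use of the ratio as a rejection rule are assumptions for illustration.

    ```python
    import numpy as np

    def track_kernel(frame, kernel, top_left, search=20):
        """Find the displacement of `kernel` (taken at `top_left` in frame 1) within `frame`."""
        kernel = kernel.astype(float)
        kh, kw = kernel.shape
        y0, x0 = top_left
        sads = {}
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = y0 + dy, x0 + dx
                if y < 0 or x < 0 or y + kh > frame.shape[0] or x + kw > frame.shape[1]:
                    continue
                patch = frame[y:y + kh, x:x + kw].astype(float)
                sads[(dy, dx)] = np.abs(patch - kernel).sum()
        best = min(sads, key=sads.get)
        reliability = sads[best] / (np.mean(list(sads.values())) + 1e-9)
        return best, reliability      # small ratio -> confident match; large -> discard frame
    ```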

  17. Automated detection of cardiac phase from intracoronary ultrasound image sequences.

    Science.gov (United States)

    Sun, Zheng; Dong, Yi; Li, Mengchan

    2015-01-01

    Intracoronary ultrasound (ICUS) is a widely used interventional imaging modality in clinical diagnosis and treatment of cardiac vessel diseases. Due to cyclic cardiac motion and pulsatile blood flow within the lumen, there exist changes of coronary arterial dimensions and relative motion between the imaging catheter and the lumen during continuous pullback of the catheter. The action subsequently causes cyclic changes to the image intensity of the acquired image sequence. Information on cardiac phases is implied in a non-gated ICUS image sequence. A 1-D phase signal reflecting cardiac cycles was extracted according to cyclical changes in local gray-levels in ICUS images. The local extrema of the signal were then detected to retrieve cardiac phases and to retrospectively gate the image sequence. Results of clinically acquired in vivo image data showed that the average inter-frame dissimilarity of lower than 0.1 was achievable with our technique. In terms of computational efficiency and complexity, the proposed method was shown to be competitive when compared with the current methods. The average frame processing time was lower than 30 ms. We effectively reduced the effect of image noises, useless textures, and non-vessel region on the phase signal detection by discarding signal components caused by non-cardiac factors.
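
    A sketch of the phase-signal idea in this record: a 1-D signal is formed from the mean gray level of each ICUS frame (optionally restricted to a region of interest) and its local extrema are taken as candidate cardiac-phase gates. The region choice, smoothing window, and minimum peak spacing are assumptions, not the authors' parameters.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def gate_frames(frames, frame_rate, roi=None):
        """frames: array (N, H, W); roi: optional boolean mask selecting pixels to average."""
        if roi is None:
            signal = frames.reshape(frames.shape[0], -1).mean(axis=1)
        else:
            signal = frames[:, roi].mean(axis=1)
        signal = np.convolve(signal, np.ones(5) / 5, mode="same")   # light smoothing
        min_gap = int(0.4 * frame_rate)        # assume at most ~150 beats/min between gates
        peaks, _ = find_peaks(signal, distance=max(min_gap, 1))
        return peaks, signal                   # peak indices serve as retrospective gates
    ```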

  18. Research on Content-based Commodity Image Retrieval Technology in E-commerce

    Institute of Scientific and Technical Information of China (English)

    姚琪; 蒋达央

    2013-01-01

    Currently, the deficiency of text-based image retrieval technology (TBIR) is becoming more and more obvious in the rapid development of e-commerce. Content-based image retrieval technology (CBIR) is the driving force of the reform of future e-commerce retrieval technology due to its application value and practical significance in business. A combined image feature retrieval technique based on a one-dimensional color feature vector and a shape feature vector overcomes the deficiency of previous single-feature-vector retrieval techniques. The combined application of the two feature vectors realizes matching search results with higher similarity, so as to meet the practical retrieval demands of CBIR in e-commerce transactions.

  19. Infrared thermal imaging for automated detection of diabetic foot complications

    NARCIS (Netherlands)

    Netten, van Jaap J.; Baal, van Jeff G.; Liu, Chanjuan; Heijden, van der Ferdi; Bus, Sicco A.

    2013-01-01

    Background: Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the ap

  20. An Automated Method for Semantic Classification of Regions in Coastal Images

    NARCIS (Netherlands)

    Hoonhout, B.M.; Radermacher, M.; Baart, F.; Van der Maaten, L.J.P.

    2015-01-01

    Large, long-term coastal imagery datasets are nowadays a low-cost source of information for various coastal research disciplines. However, the applicability of many existing algorithms for coastal image analysis is limited for these large datasets due to a lack of automation and robustness. Therefor

  1. Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images

    NARCIS (Netherlands)

    Lee, K.; Buitendijk, G.H.; Bogunovic, H.; Springelkamp, H.; Hofman, A.; Wahle, A.; Sonka, M.; Vingerling, J.R.; Klaver, C.C.W.; Abramoff, M.D.

    2016-01-01

    PURPOSE: To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. METHODS: Six hundred ninety macular SD-OCT image volumes (6.0 x 6.0 x 2.3 mm3) we

  2. Automated Selection of Uniform Regions for CT Image Quality Detection

    CERN Document Server

    Naeemi, Maitham D; Roychodhury, Sohini

    2016-01-01

    CT images are widely used in pathology detection and follow-up treatment procedures. Accurate identification of pathological features requires diagnostic quality CT images with minimal noise and artifact variation. In this work, a novel Fourier-transform based metric for image quality (IQ) estimation is presented that correlates to additive CT image noise. In the proposed method, two windowed CT image subset regions are analyzed together to identify the extent of variation in the corresponding Fourier-domain spectrum. The two square windows are chosen such that their center pixels coincide and one window is a subset of the other. The Fourier-domain spectral difference between these two sub-sampled windows is then used to isolate spatial regions-of-interest (ROI) with low signal variation (ROI-LV) and high signal variation (ROI-HV), respectively. Finally, the spatial variance ($var$), standard deviation ($std$), coefficient of variance ($cov$) and the fraction of abdominal ROI pixels in ROI-LV ($\

  3. A SYSTEM FOR ACCESSING A COLLECTION OF HISTOLOGY IMAGES USING CONTENT-BASED STRATEGIES

    Directory of Open Access Journals (Sweden)

    F GONZÁLEZ

    Full Text Available Histology images are an important resource for research, education and medical practice. The availability of image collections with reference purposes is limited to printed formats such as books and specialized journals. When histology image sets are published in digital formats, they are composed of some tens of images that do not represent the wide diversity of biological structures that can be found in fundamental tissues. Making a complete histology image collection available to the general public would have a great impact on research and education in different areas such as medicine, biology and natural sciences. This work presents the acquisition process of a histology image collection with 20,000 samples in digital format, from tissue processing to digital image capturing. The main purpose of collecting these images is to make them available as reference material to the academic community. In addition, this paper presents the design and architecture of a system to query and explore the image collection, using content-based image retrieval tools and text-based search on the annotations provided by experts. The system also offers novel image visualization methods to allow easy identification of interesting images among hundreds of possible pictures. The system has been developed using a service-oriented architecture and allows web-based access at http://www.informed.unal.edu.co

  4. Efficient parallel Levenberg-Marquardt model fitting towards real-time automated parametric imaging microscopy.

    Science.gov (United States)

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on a graphics processing unit (GPU) for high-performance, scalable parallel model fitting. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for applications in super-resolution localization microscopy and fluorescence lifetime imaging microscopy.
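
    GPU-LMFit itself is not reproduced here; as a plain CPU-side illustration of the per-pixel Levenberg-Marquardt fit it parallelizes, the following hedged Python sketch fits a mono-exponential lifetime decay to one pixel's data with SciPy (the model, parameter values and synthetic data are assumptions, not the authors' code):

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, amplitude, lifetime, offset):
            """Mono-exponential decay model for a single pixel."""
            return amplitude * np.exp(-t / lifetime) + offset

        # Synthetic single-pixel decay curve (for illustration only).
        t = np.linspace(0, 10, 256)                    # time bins (ns)
        rng = np.random.default_rng(0)
        y = decay(t, 100.0, 2.5, 5.0) + rng.normal(0, 2, t.size)

        # Unbounded curve_fit uses the Levenberg-Marquardt algorithm internally.
        popt, pcov = curve_fit(decay, t, y, p0=[80.0, 1.0, 0.0])
        print("amplitude=%.1f lifetime=%.2f ns offset=%.1f" % tuple(popt))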

  5. Fully Automated Prostate Magnetic Resonance Imaging and Transrectal Ultrasound Fusion via a Probabilistic Registration Metric

    OpenAIRE

    Sparks, Rachel; Bloch, B. Nicolas; Feleppa, Ernest; Barratt, Dean; Madabhushi, Anant

    2013-01-01

    In this work, we present a novel, automated, registration method to fuse magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) images of the prostate. Our methodology consists of: (1) delineating the prostate on MRI, (2) building a probabilistic model of prostate location on TRUS, and (3) aligning the MRI prostate segmentation to the TRUS probabilistic model. TRUS-guided needle biopsy is the current gold standard for prostate cancer (CaP) diagnosis. Up to 40% of CaP lesions appea...

  6. Automated interpretation of optic nerve images: a data mining framework for glaucoma diagnostic support.

    Science.gov (United States)

    Abidi, Syed S R; Artes, Paul H; Yun, Sanjan; Yu, Jin

    2007-01-01

    Confocal Scanning Laser Tomography (CSLT) techniques capture high-quality images of the optic disc (the retinal region where the optic nerve exits the eye) that are used in the diagnosis and monitoring of glaucoma. We present a hybrid framework, combining image processing and data mining methods, to support the interpretation of CSLT optic nerve images. Our framework features (a) Zernike moment methods to derive shape information from optic disc images; (b) classification of optic disc images, based on shape information, to distinguish between healthy and glaucomatous optic discs. We apply Multi Layer Perceptrons, Support Vector Machines and Bayesian Networks for feature sub-set selection and image classification; and (c) clustering of optic disc images, based on shape information, using Self-Organizing Maps to visualize sub-types of glaucomatous optic disc damage. Our framework offers an automated and objective analysis of optic nerve images that can potentially support both diagnosis and monitoring of glaucoma.

  7. Automated registration of multispectral MR vessel wall images of the carotid artery

    Energy Technology Data Exchange (ETDEWEB)

    Klooster, R. van ' t; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der, E-mail: rvdgeest@lumc.nl [Department of Radiology, Division of Image Processing, Leiden University Medical Center, 2300 RC Leiden (Netherlands); Klein, S. [Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 GE (Netherlands); Kwee, R. M.; Kooi, M. E. [Department of Radiology, Cardiovascular Research Institute Maastricht, Maastricht University Medical Center, Maastricht 6202 AZ (Netherlands)

    2013-12-15

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying throughplane and inplane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and

  8. Automated wavelet denoising of photoacoustic signals for burn-depth image reconstruction

    Science.gov (United States)

    Holan, Scott H.; Viator, John A.

    2007-02-01

    Photoacoustic image reconstruction involves dozens or perhaps hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a sample with laser light are used to produce an image of the acoustic source. Each of these point measurements must undergo some signal processing, such as denoising and system deconvolution. In order to efficiently process the numerous signals acquired for photoacoustic imaging, we have developed an automated wavelet algorithm for processing signals generated in a burn injury phantom. We used the discrete wavelet transform to denoise photoacoustic signals generated in an optically turbid phantom containing whole blood. The denoising used universal, level-independent thresholding, as developed by Donoho and Johnstone. The entire signal processing technique was automated so that no user intervention was needed to reconstruct the images. The signals were backprojected using the automated wavelet processing software, and reconstruction using denoised signals improved image quality by 21%, as measured by a relative 2-norm difference scheme.
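
    A minimal Python sketch of level-independent universal (Donoho-Johnstone) soft thresholding for one photoacoustic signal, assuming the PyWavelets package; the wavelet choice, decomposition level and noise estimate are illustrative rather than the study's exact settings:

        import numpy as np
        import pywt

        def denoise_signal(signal, wavelet="db4", level=5):
            """Wavelet denoising with the universal threshold sigma*sqrt(2*ln N)."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # Robust noise estimate from the finest-scale detail coefficients.
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
            # Apply the same (level-independent) soft threshold to all detail bands.
            denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(denoised, wavelet)[: len(signal)]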

  9. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Directory of Open Access Journals (Sweden)

    François Mallard

    Full Text Available 1. Because of recent technological improvements in the way computers and digital cameras perform, the potential of imaging for contributing to the study of communities, populations or individuals in laboratory microcosms has risen enormously. However, its use remains limited due to difficulties in the automation of image analysis. 2. We present an accurate and flexible method of image analysis for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms, but it can be used in other applications. 3. The method consists in comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia) to show that this non-intrusive method is efficient at detecting organisms under a wide variety of conditions, even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method have been assessed using experimental populations of the Collembola Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms.
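
    The core of the method (estimating the fixed background from several pictures of the same microcosm and subtracting it so that only moving organisms remain) can be sketched in Python as follows; the pixel-wise median background, the fixed difference threshold and the minimum blob area are assumptions for illustration, not the plugin's actual parameters:

        import numpy as np
        from scipy import ndimage

        def count_moving_organisms(frames, diff_threshold=25, min_area=10):
            """Estimate the fixed background from an image stack, subtract it,
            and count/measure the remaining moving objects in each frame."""
            stack = np.asarray(frames, dtype=float)      # shape (n_frames, height, width)
            background = np.median(stack, axis=0)        # motionless substrate estimate
            counts, areas = [], []
            for frame in stack:
                moving = np.abs(frame - background) > diff_threshold
                labels, n = ndimage.label(moving)
                if n == 0:
                    counts.append(0)
                    areas.append(np.array([]))
                    continue
                sizes = ndimage.sum(moving, labels, index=range(1, n + 1))
                keep = sizes >= min_area                 # discard single-pixel noise
                counts.append(int(keep.sum()))
                areas.append(sizes[keep])
            return counts, areas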

  10. A performance analysis system for MEMS using automated imaging methods

    Energy Technology Data Exchange (ETDEWEB)

    LaVigne, G.F.; Miller, S.L.

    1998-08-01

    The ability to make in-situ performance measurements of MEMS operating at high speeds has been demonstrated using a new image analysis system. Significant improvements in performance and reliability have directly resulted from the use of this system.

  11. VirtualShave: automated hair removal from digital dermatoscopic images.

    Science.gov (United States)

    Fiorese, M; Peserico, E; Silletti, A

    2011-01-01

    VirtualShave is a novel tool to remove hair from digital dermatoscopic images. First, individual hairs are identified using a top-hat filter followed by morphological postprocessing. Then, they are replaced through PDE-based inpainting with an estimate of the underlying occluded skin. VirtualShave's performance is comparable to that of a human operator removing hair manually, and the resulting images are almost indistinguishable from those of hair-free skin.

  12. Multispectral Image Road Extraction Based Upon Automated Map Conflation

    Science.gov (United States)

    Chen, Bin

    Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them, make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated with the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing for different land use scenarios. This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. The crowd-sourced cartography has the advantages of worldwide coverage that is constantly updated. The derived road vectors follow only roads and so can serve to guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is an essential step. A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on the road feature generation of a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented. The spectral measure, called anisotropy-tunable distance (ATD

  13. Automated Contour Detection for Intravascular Ultrasound Image Sequences Based on Fast Active Contour Algorithm

    Institute of Scientific and Technical Information of China (English)

    DONG Hai-yan; WANG Hui-nan

    2006-01-01

    Intravascular ultrasound can provide high-resolution, real-time cross-sectional images of the lumen, plaque and surrounding tissue. Traditionally, the luminal border and the medial-adventitial border are traced manually. This process is extremely time-consuming and subject to considerable inter-observer variability. In this paper, a new automated contour detection method is introduced based on a fast active contour model. Experimental results found that lumen and vessel area measurements after automated detection showed good agreement with manual tracings, with high correlation coefficients (0.94 and 0.95, respectively) and small systematic differences (-0.32 and 0.56, respectively). It can therefore serve as a reliable and accurate diagnostic tool.

  14. Crowdsourcing scoring of immunohistochemistry images: Evaluating Performance of the Crowd and an Automated Computational Method

    Science.gov (United States)

    Irshad, Humayun; Oh, Eun-Yeong; Schmolze, Daniel; Quintana, Liza M.; Collins, Laura; Tamimi, Rulla M.; Beck, Andrew H.

    2017-01-01

    The assessment of protein expression in immunohistochemistry (IHC) images provides important diagnostic, prognostic and predictive information for guiding cancer diagnosis and therapy. Manual scoring of IHC images represents a logistical challenge, as the process is labor intensive and time consuming. Over the last decade, computational methods have been developed to enable the application of quantitative methods for the analysis and interpretation of protein expression in IHC images. These methods have not yet replaced manual scoring for the assessment of IHC in the majority of diagnostic laboratories and in many large-scale research studies. An alternative approach is crowdsourcing the quantification of IHC images to an undefined crowd. The aim of this study is to quantify IHC images for labeling of ER status with two different crowdsourcing approaches, image-labeling and nuclei-labeling, and compare their performance with automated methods. Crowdsourcing-derived scores obtained greater concordance with the pathologist interpretations for both the image-labeling and nuclei-labeling tasks (83% and 87%, respectively) than the concordance achieved by the automated method (81%) on 5,338 TMA images from 1,853 breast cancer patients. This analysis shows that crowdsourcing the scoring of protein expression in IHC images is a promising new approach for large-scale cancer molecular pathology studies. PMID:28230179

  15. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Ricardo Andres Pizarro

    2016-12-01

    Full Text Available High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm to the quality assessment of structural brain images, using global and region-of-interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
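
    The supervised step can be illustrated with the following hedged scikit-learn sketch: a feature vector of automated image-quality measures per volume is paired with investigator-assigned labels and fed to an SVM (the feature count, placeholder data and SVM settings are assumptions, not those of the study):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: one row per 3D-MRI volume of global/ROI quality features;
        # y: investigator-determined labels (0 = fail, 1 = pass). Placeholders only.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(1457, 10))
        y = rng.integers(0, 2, size=1457)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        model.fit(X_train, y_train)
        print("accuracy: %.2f" % model.score(X_test, y_test))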

  16. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah, Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal tissue, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine the boundary of the organ and calculate the area of the segmented region. The results show that the fractal method based on 2D Fourier analysis can be used to distinguish between normal and abnormal breasts, and that the segmentation technique with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of the abnormal tissue can be determined.
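
    The K-means segmentation step can be illustrated by the short Python sketch below, which clusters pixel intensities of an abnormal mammogram region and reports the area of the brightest cluster; the number of clusters and the pixel-area conversion are assumptions, not the study's settings:

        import numpy as np
        from sklearn.cluster import KMeans

        def segment_and_measure(image, n_clusters=3, pixel_area_mm2=0.01):
            """Cluster pixel intensities with K-means and report the area of the
            brightest cluster as a crude proxy for the abnormal tissue region."""
            pixels = image.reshape(-1, 1).astype(float)
            kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
            labels = kmeans.labels_.reshape(image.shape)
            brightest = int(np.argmax(kmeans.cluster_centers_.ravel()))
            mask = labels == brightest
            return mask, mask.sum() * pixel_area_mm2   # boundary mask and area estimate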

  17. Automated Classification of Glaucoma Images by Wavelet Energy Features

    Directory of Open Access Journals (Sweden)

    N.Annu

    2013-04-01

    Full Text Available Glaucoma is the second leading cause of blindness worldwide. As glaucoma progresses, more optic nerve tissue is lost and the optic cup grows, which leads to vision loss. This paper describes a system that could be used by non-experts to filter out cases of patients not affected by the disease. This work proposes glaucomatous image classification using texture features within images and efficient glaucoma classification based on a Probabilistic Neural Network (PNN). Energy distribution over wavelet sub-bands is applied to compute these texture features. Wavelet features were obtained from the daubechies (db3), symlets (sym3), and biorthogonal (bio3.3, bio3.5, and bio3.7) wavelet filters. The technique extracts energy signatures obtained using the 2-D discrete wavelet transform, and the energy obtained from the detailed coefficients can be used to distinguish between normal and glaucomatous images. We observed an accuracy of around 95%, which demonstrates the effectiveness of these methods.
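
    A hedged Python sketch of the texture-feature step: normalized energies of the 2-D discrete wavelet detail sub-bands of one fundus image, computed with PyWavelets and a db3 filter (the normalization is an assumption); these energies would then feed the PNN classifier:

        import numpy as np
        import pywt

        def wavelet_energy_features(image, wavelet="db3", level=2):
            """Return the normalized energy of each detail sub-band of a 2-D DWT."""
            coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
            features = []
            for detail_level in coeffs[1:]:        # skip the approximation band
                for band in detail_level:          # (horizontal, vertical, diagonal)
                    features.append(np.sum(band ** 2) / band.size)
            return np.array(features)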

  18. Application of Bayesian Classification to Content-Based Data Management

    Science.gov (United States)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
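
    A minimal sketch of this kind of per-pixel Bayesian classification (class labels, band counts and training data are placeholders, not the GES DAAC's actual scheme): a Gaussian naive Bayes classifier is trained on labeled pixels, applied to every pixel of a scene, and the per-class fractions then drive the content-based subsetting decisions.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Training data: spectral band values of pixels with known classes
        # (0 = clear ocean, 1 = cloud, 2 = sun-glint) -- placeholders for illustration.
        rng = np.random.default_rng(2)
        X_train = rng.normal(size=(5000, 5))          # 5 radiance bands per pixel
        y_train = rng.integers(0, 3, size=5000)

        classifier = GaussianNB().fit(X_train, y_train)

        scene = rng.normal(size=(1024, 1354, 5))      # one MODIS-like granule
        labels = classifier.predict(scene.reshape(-1, 5)).reshape(1024, 1354)
        clear_ocean_fraction = float((labels == 0).mean())   # drives subsetting decisions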

  19. System and method for automated object detection in an image

    Energy Technology Data Exchange (ETDEWEB)

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  20. Automated Structure Detection in HRTEM Images: An Example with Graphene

    DEFF Research Database (Denmark)

    Kling, Jens; Vestergaard, Jacob Schack; Dahl, Anders Bjorholm

    of time making it difficult to resolve dynamic processes or unstable structures. Tools that assist to get the maximum of information out of recorded images are therefore greatly appreciated. In order to get the most accurate results out of the structure detection, we have optimized the imaging conditions...... used for the FEI Titan ETEM with a monochromator and an objective-lens Cs-corrector. To reduce the knock-on damage of the carbon atoms in the graphene structure, the microscope was operated at 80kV. As this strongly increases the influence of the chromatic aberration of the lenses, the energy spread...

  1. An Imaging System for Automated Characteristic Length Measurement of Debrisat Fragments

    Science.gov (United States)

    Moraguez, Mathew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Sorge, Marlon; Cowardin, Heather; Opiela, John; Krisko, Paula H.

    2015-01-01

    The debris fragments generated by DebriSat's hypervelocity impact test are currently being processed and characterized through an effort of NASA and USAF. The debris characteristics will be used to update satellite breakup models. In particular, the physical dimensions of the debris fragments must be measured to provide characteristic lengths for use in these models. Calipers and commercial 3D scanners were considered as measurement options, but an automated imaging system was ultimately developed to measure debris fragments. By automating the entire process, the measurement results are made repeatable and the human factor associated with calipers and 3D scanning is eliminated. Unlike using calipers to measure, the imaging system obtains non-contact measurements to avoid damaging delicate fragments. Furthermore, this fully automated measurement system minimizes fragment handling, which reduces the potential for fragment damage during the characterization process. In addition, the imaging system reduces the time required to determine the characteristic length of the debris fragment. In this way, the imaging system can measure the tens of thousands of DebriSat fragments at a rate of about six minutes per fragment, compared to hours per fragment in NASA's current 3D scanning measurement approach. The imaging system utilizes a space carving algorithm to generate a 3D point cloud of the article being measured and a custom developed algorithm then extracts the characteristic length from the point cloud. This paper describes the measurement process, results, challenges, and future work of the imaging system used for automated characteristic length measurement of DebriSat fragments.

  2. Computer-assisted tree taxonomy by automated image recognition

    NARCIS (Netherlands)

    Pauwels, E.J.; Zeeuw, P.M.de; Ranguelova, E.B.

    2009-01-01

    We present an algorithm that performs image-based queries within the domain of tree taxonomy. As such, it serves as an example relevant to many other potential applications within the field of biodiversity and photo-identification. Unsupervised matching results are produced through a chain of comput

  3. Automated identification of retained surgical items in radiological images

    Science.gov (United States)

    Agam, Gady; Gan, Lin; Moric, Mario; Gluncic, Vicko

    2015-03-01

    Retained surgical items (RSIs) in patients are a major operating room (OR) patient safety concern. An RSI is any surgical tool, sponge, needle or other item inadvertently left in a patient's body during the course of surgery. If left undetected, RSIs may lead to serious negative health consequences such as sepsis, internal bleeding, and even death. To help physicians efficiently and effectively detect RSIs, we are developing computer-aided detection (CADe) software for X-ray (XR) image analysis, utilizing large amounts of currently available image data to produce a clinically effective RSI detection system. Physician analysis of XRs for the purpose of RSI detection is a relatively lengthy process that may take up to 45 minutes to complete. It is also error prone, due to the relatively low acuity of the human eye for RSIs in XR images. The system we are developing is based on computer vision and machine learning algorithms. We address the problem of low incidence by proposing synthesis algorithms. The CADe software we are developing may be integrated into a picture archiving and communication system (PACS), be implemented as a stand-alone software application, or be integrated into portable XR machine software through application programming interfaces. Preliminary experimental results on actual XR images demonstrate the effectiveness of the proposed approach.

  4. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    Science.gov (United States)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project studying computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of associated magnetic field lines. The project addresses pattern recognition problems in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parametric space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.
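
    Of the three candidate schemes, the Hough-transform route is the easiest to sketch; the hedged example below uses scikit-image to pull candidate linear features out of an edge map of a coronal image (all parameter values are illustrative, not those of the thesis):

        import numpy as np
        from skimage.feature import canny
        from skimage.transform import probabilistic_hough_line

        def detect_linear_features(image, sigma=2.0):
            """Edge-detect an EUV/X-ray image and extract line segments that can
            serve as first-guess proxies for loop-center locations."""
            edges = canny(np.asarray(image, dtype=float), sigma=sigma)
            segments = probabilistic_hough_line(edges, threshold=10,
                                                line_length=25, line_gap=3)
            return segments   # list of ((x0, y0), (x1, y1)) endpoints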

  5. AUTOMATED VIDEO IMAGE MORPHOMETRY OF THE CORNEAL ENDOTHELIUM

    NARCIS (Netherlands)

    SIERTSEMA, JV; LANDESZ, M; VANDENBROM, H; VANRIJ, G

    1993-01-01

    The central corneal endothelium of 13 eyes in 13 subjects was visualized with a non-contact specular microscope. This report describes the computer-assisted morphometric analysis of enhanced digitized images, using a direct input by means of a frame grabber. The output consisted of mean cell area, c

  6. Automated marker tracking using noisy X-ray images degraded by the treatment beam

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, E. [Fraunhofer Institute for Production Systems and Design Technology (IPK), Berlin (Germany); German Cancer Research Center (DKFZ), Heidelberg (Germany); Fast, M.F.; Nill, S. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; Oelfke, U. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; German Cancer Research Center (DKFZ), Heidelberg (Germany)

    2015-09-01

    This study demonstrates the feasibility of automated marker tracking for the real-time detection of intrafractional target motion using noisy kilovoltage (kV) X-ray images degraded by the megavoltage (MV) treatment beam. The authors previously introduced the in-line imaging geometry, in which the flat-panel detector (FPD) is mounted directly underneath the treatment head of the linear accelerator. They found that the 121 kVp image quality was severely compromised by the 6 MV beam passing through the FPD at the same time. Specific MV-induced artefacts present a considerable challenge for automated marker detection algorithms. For this study, the authors developed a new imaging geometry by re-positioning the FPD and the X-ray tube. This improved the contrast-to-noise ratio between 40% and 72% at the 1.2 mAs/image exposure setting. The increase in image quality clearly facilitates the quick and stable detection of motion with the aid of a template matching algorithm. The setup was tested with an anthropomorphic lung phantom (including an artificial lung tumour). One or three Calypso® beacons were embedded in the tumour to achieve better contrast during MV radiation. For a single beacon, image acquisition and automated marker detection typically took around 76±6 ms. The success rate was found to be highly dependent on imaging dose and gantry angle. To eliminate possible false detections, the authors implemented a training phase prior to treatment beam irradiation and also introduced speed limits for motion between subsequent images.

  7. Automated Detection of Contaminated Radar Image Pixels in Mountain Areas

    Institute of Scientific and Technical Information of China (English)

    LIU Liping; Qin XU; Pengfei ZHANG; Shun LIU

    2008-01-01

    In mountain areas, radar observations are often contaminated (1) by echoes from high-speed moving vehicles and (2) by point-wise ground clutter under either normal propagation (NP) or anomalous propagation (AP) conditions. Level II data are collected from the KMTX (Salt Lake City, Utah) radar to analyze these two types of contamination in the mountain area around the Great Salt Lake. Human experts provide the "ground truth" for possible contamination of either type on each individual pixel. Common features are then extracted for contaminated pixels of each type. For example, pixels contaminated by echoes from high-speed moving vehicles are characterized by large radial velocity and spectrum width. Echoes from a moving train tend to have larger velocity and reflectivity but smaller spectrum width than those from moving vehicles on highways. These contaminated pixels are only seen in areas of large terrain gradient (in the radial direction along the radar beam). The same is true for the second type of contamination, point-wise ground clutter. Six quality control (QC) parameters are selected to quantify the extracted features. Histograms are computed for each QC parameter and grouped for contaminated pixels of each type and also for non-contaminated pixels. Based on the computed histograms, a fuzzy logic algorithm is developed for automated detection of contaminated pixels. The algorithm is tested with KMTX radar data under different (clear and rainy) weather conditions.

  8. Automated segmentation of regions of interest in whole slide skin histopathological images.

    Science.gov (United States)

    Xu, Hongming; Lu, Cheng; Mandal, Mrinal

    2015-01-01

    In the diagnosis of skin melanoma by analyzing histopathological images, the epidermis and epidermis-dermis junctional areas are regions of interest as they provide the most important histologic diagnosis features. This paper presents an automated technique for segmenting epidermis and dermis regions from whole slide skin histopathological images. The proposed technique first performs epidermis segmentation using a thresholding and thickness measurement based method. The dermis area is then segmented based on a predefined depth of segmentation from the epidermis outer boundary. Experimental results on 66 different skin images show that the proposed technique can robustly segment regions of interest as desired.
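
    A hedged Python sketch of the thresholding part of the epidermis step, assuming scikit-image and a grayscale channel in which epidermis appears darker than the surrounding tissue; the stain handling, morphology and size cut-off are assumptions, not the authors' pipeline:

        import numpy as np
        from scipy import ndimage
        from skimage.filters import threshold_otsu

        def rough_epidermis_mask(gray_slide, min_object_px=5000):
            """Otsu-threshold a grayscale slide thumbnail and keep only large,
            darkly stained connected components as an epidermis candidate mask."""
            t = threshold_otsu(gray_slide)
            mask = gray_slide < t                        # epidermis assumed darker
            mask = ndimage.binary_closing(mask, iterations=3)
            labels, n = ndimage.label(mask)
            if n == 0:
                return mask
            sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
            big_labels = np.nonzero(sizes >= min_object_px)[0] + 1
            return np.isin(labels, big_labels)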

  9. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    Science.gov (United States)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.

  10. Automated Detection and Removal of Cloud Shadows on HICO Images

    Science.gov (United States)

    2011-01-01


  11. Extended Field Laser Confocal Microscopy (EFLCM: Combining automated Gigapixel image capture with in silico virtual microscopy

    Directory of Open Access Journals (Sweden)

    Strandh Christer

    2008-07-01

    Full Text Available Abstract Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  12. A semi-automated single day image differencing technique to identify animals in aerial imagery.

    Directory of Open Access Journals (Sweden)

    Pat Terletzky

    Full Text Available Our research presents a proof-of-concept that explores a new and innovative method to identify large animals in aerial imagery with single day image differencing. We acquired two aerial images of eight fenced pastures and conducted a principal component analysis of each image. We then subtracted the first principal component of the two pasture images followed by heuristic thresholding to generate polygons. The number of polygons represented the number of potential cattle (Bos taurus) and horses (Equus caballus) in the pasture. The process was considered semi-automated because we were not able to automate the identification of spatial or spectral thresholding values. Imagery was acquired concurrently with ground counts of animal numbers. Across the eight pastures, 82% of the animals were correctly identified, mean percent commission was 53%, and mean percent omission was 18%. The high commission error was due to small mis-alignments generated from image-to-image registration, misidentified shadows, and grouping behavior of animals. The high probability of correctly identifying animals suggests short time interval image differencing could provide a new technique to enumerate wild ungulates occupying grassland ecosystems, especially in isolated or difficult to access areas. To our knowledge, this was the first attempt to use standard change detection techniques to identify and enumerate large ungulates.

  13. A semi-automated single day image differencing technique to identify animals in aerial imagery.

    Science.gov (United States)

    Terletzky, Pat; Ramsey, Robert Douglas

    2014-01-01

    Our research presents a proof-of-concept that explores a new and innovative method to identify large animals in aerial imagery with single day image differencing. We acquired two aerial images of eight fenced pastures and conducted a principal component analysis of each image. We then subtracted the first principal component of the two pasture images followed by heuristic thresholding to generate polygons. The number of polygons represented the number of potential cattle (Bos taurus) and horses (Equus caballus) in the pasture. The process was considered semi-automated because we were not able to automate the identification of spatial or spectral thresholding values. Imagery was acquired concurrently with ground counts of animal numbers. Across the eight pastures, 82% of the animals were correctly identified, mean percent commission was 53%, and mean percent omission was 18%. The high commission error was due to small mis-alignments generated from image-to-image registration, misidentified shadows, and grouping behavior of animals. The high probability of correctly identifying animals suggests short time interval image differencing could provide a new technique to enumerate wild ungulates occupying grassland ecosystems, especially in isolated or difficult to access areas. To our knowledge, this was the first attempt to use standard change detection techniques to identify and enumerate large ungulates.
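
    A hedged Python sketch of the differencing workflow (first principal component per image, difference, threshold, connected-component count); the z-score threshold and minimum blob size stand in for the heuristic thresholding described and are not the authors' values:

        import numpy as np
        from scipy import ndimage
        from sklearn.decomposition import PCA

        def first_pc_image(multiband):
            """Project a (rows, cols, bands) image onto its first principal component."""
            rows, cols, bands = multiband.shape
            pixels = multiband.reshape(-1, bands).astype(float)
            pc1 = PCA(n_components=1).fit_transform(pixels)
            return pc1.reshape(rows, cols)

        def count_animals(image_t1, image_t2, z_threshold=3.0, min_area=4):
            """Difference the first principal components of two same-day images and
            count above-threshold blobs as candidate animals."""
            diff = first_pc_image(image_t1) - first_pc_image(image_t2)
            z = (diff - diff.mean()) / diff.std()
            candidates = np.abs(z) > z_threshold
            labels, n = ndimage.label(candidates)
            if n == 0:
                return 0
            sizes = ndimage.sum(candidates, labels, index=range(1, n + 1))
            return int((sizes >= min_area).sum())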

  14. Automated analysis of craniofacial morphology using magnetic resonance images.

    Directory of Open Access Journals (Sweden)

    M Mallar Chakravarty

    Full Text Available Quantitative analysis of craniofacial morphology is of interest to scholars working in a wide variety of disciplines, such as anthropology, developmental biology, and medicine. T1-weighted (anatomical) magnetic resonance images (MRI) provide excellent contrast between soft tissues. Given its three-dimensional nature, MRI represents an ideal imaging modality for the analysis of craniofacial structure in living individuals. Here we describe how T1-weighted MR images, acquired to examine brain anatomy, can also be used to analyze facial features. Using a sample of typically developing adolescents from the Saguenay Youth Study (N = 597; 292 male, 305 female, ages: 12 to 18 years), we quantified inter-individual variations in craniofacial structure in two ways. First, we adapted existing nonlinear registration-based morphological techniques to iteratively generate a group-wise population average of craniofacial features. The nonlinear transformations were used to map the craniofacial structure of each individual to the population average. Using voxel-wise measures of expansion and contraction, we then examined the effects of sex and age on inter-individual variations in facial features. Second, we employed a landmark-based approach to quantify variations in face surfaces. This approach involves: (a) placing 56 landmarks (forehead, nose, lips, jaw-line, cheekbones, and eyes) on a surface representation of the MRI-based group average; (b) warping the landmarks to the individual faces using the inverse nonlinear transformation estimated for each person; and (c) using a principal components analysis (PCA) of the warped landmarks to identify facial features (i.e., clusters of landmarks) that vary in our sample in a correlated fashion. As with the voxel-wise analysis of the deformation fields, we examined the effects of sex and age on the PCA-derived spatial relationships between facial features. Both methods demonstrated significant sexual dimorphism in

  15. Image cytometer method for automated assessment of human spermatozoa concentration

    DEFF Research Database (Denmark)

    Egeberg, D L; Kjaerulff, S; Hansen, C

    2013-01-01

    In the basic clinical work-up of infertile couples, a semen analysis is mandatory and the sperm concentration is one of the most essential variables to be determined. Sperm concentration is usually assessed by manual counting using a haemocytometer and is hence labour intensive and may be subjected to investigator bias. Here we show that image cytometry can be used to accurately measure the sperm concentration of human semen samples with great ease and reproducibility. The impact of several factors (pipetting, mixing, round cell content, sperm concentration), which can influence the read-out as well...... and easy measurement of human sperm concentration.

  16. Automated Hierarchical Time Gain Compensation for In Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo

    2015-01-01

    Time gain compensation (TGC) is essential to ensure the optimal image quality of the clinical ultrasound scans. When large fluid collections are present within the scan plane, the attenuation distribution is changed drastically and TGC compensation becomes challenging. This paper presents...... tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC were visualized side by side and evaluated by two radiologists...

  17. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, M.; Rosenvinge, F. S.; Spillum, E.

    2015-01-01

    Background: Antibiotics of the beta-lactam group are able to alter the shape of the bacterial cell wall, e.g., causing filamentation or spheroplast formation. Early determination of antimicrobial susceptibility may be complicated by filamentation of bacteria, as this can be falsely interpreted as growth... displaying different resistance profiles and differences in filamentation kinetics were used to study a novel image analysis algorithm to quantify length of bacteria and bacterial filamentation. A total of 12 beta-lactam antibiotics or beta-lactam-beta-lactamase inhibitor combinations were analyzed...

  18. Automated Image-Based Procedures for Adaptive Radiotherapy

    DEFF Research Database (Denmark)

    Bjerre, Troels

    -tissue complication probability (NTCP), margins used to account for interfraction and intrafraction anatomical changes and motion need to be reduced. This can only be achieved through proper treatment plan adaptations and intrafraction motion management. This thesis describes methods in support of image...... to encourage bone rigidity and local tissue volume change only in the gross tumour volume and the lungs. This is highly relevant in adaptive radiotherapy when modelling significant tumour volume changes. - It is described how cone beam CT reconstruction can be modelled as a deformation of a planning CT scan...

  19. Automated Hierarchical Time Gain Compensation for In Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo;

    2015-01-01

    in terms of image quality. Wilcoxon signed-rank test was used to evaluate whether radiologists preferred the processed sequences or the unprocessed data. The results indicate that the average visual analogue scale (VAS) is positive ( p-value: 2.34 × 10−13) and estimated to be 1.01 (95% CI: 0.85; 1...... tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC were visualized side by side and evaluated by two radiologists...

  20. Automated detection of diabetic retinopathy in retinal images

    Directory of Open Access Journals (Sweden)

    Carmen Valverde

    2016-01-01

    Full Text Available Diabetic retinopathy (DR) is a disease with an increasing prevalence and the main cause of blindness among the working-age population. The risk of severe vision loss can be significantly reduced by timely diagnosis and treatment. Systematic screening for DR has been identified as a cost-effective way to save health service resources. Automatic retinal image analysis is emerging as an important screening tool for early DR detection, which can reduce the workload associated with manual grading as well as save diagnosis costs and time. Many research efforts in recent years have been devoted to developing automatic tools to help in the detection and evaluation of DR lesions. However, there is a large variability in the databases and evaluation criteria used in the literature, which hampers a direct comparison of the different studies. This work is aimed at summarizing the results of the available algorithms for the detection and classification of DR pathology. A detailed literature search was conducted using PubMed. Selected relevant studies from the last 10 years were scrutinized and included in the review. Furthermore, we give an overview of the available commercial software for automatic retinal image analysis.

  1. Automated classification of female facial beauty by image analysis and supervised learning

    Science.gov (United States)

    Gunes, Hatice; Piccardi, Massimo; Jan, Tony

    2004-01-01

    The question of whether the perception of facial beauty is a universal concept has long been debated among psychologists and anthropologists. In this paper, we performed experiments to evaluate the extent of beauty universality by asking a number of diverse human referees to grade the same collection of female facial images. The results obtained show that the different individuals gave similar votes, thus strongly supporting the concept of beauty universality. We then trained an automated classifier using the human votes as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved shows that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry and plastic surgery.

  2. A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image

    Directory of Open Access Journals (Sweden)

    Phlypo Ronald

    2010-01-01

    Full Text Available We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.

  3. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    Science.gov (United States)

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently understudied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Lumina were extracted from prostate histology images based on local entropy and Otsu thresholding; recognition used PCA-SVM applied to the texture features of prostatic calculi. The SVM classifier showed an average running time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.
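
    The recognition stage (texture features reduced by PCA and classified by an SVM) can be sketched with scikit-learn as below; the feature matrix, component count and kernel are placeholders rather than the authors' configuration:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: texture features extracted from candidate luminal regions,
        # y: 1 = contains a prostatic calculus, 0 = does not (placeholders).
        rng = np.random.default_rng(3)
        X = rng.normal(size=(400, 64))
        y = rng.integers(0, 2, size=400)

        pca_svm = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
        scores = cross_val_score(pca_svm, X, y, cv=5)
        print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))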

  4. Automated grading of renal cell carcinoma using whole slide imaging

    Directory of Open Access Journals (Sweden)

    Fang-Cheng Yeh

    2014-01-01

    Full Text Available Introduction: Recent technology developments have demonstrated the benefit of using whole slide imaging (WSI) in computer-aided diagnosis. In this paper, we explore the feasibility of using automatic WSI analysis to assist grading of clear cell renal cell carcinoma (RCC), which is a manual task traditionally performed by pathologists. Materials and Methods: Automatic WSI analysis was applied to 39 hematoxylin and eosin-stained digitized slides of clear cell RCC with varying grades. Kernel regression was used to estimate the spatial distribution of nuclear size across the entire slide. The analysis results were correlated with Fuhrman nuclear grades determined by pathologists. Results: The spatial distribution of nuclear size provided a panoramic view of the tissue sections. The distribution images facilitated locating regions of interest, such as high-grade regions and areas with necrosis. The statistical analysis showed that the maximum nuclear size was significantly different (P < 0.001) between low-grade (Grades I and II) and high-grade tumors (Grades III and IV). The receiver operating characteristic analysis showed that the maximum nuclear size distinguished high-grade and low-grade tumors with a false positive rate of 0.2 and a true positive rate of 1.0. The area under the curve was 0.97. Conclusion: The automatic WSI analysis allows pathologists to see the spatial distribution of nuclear size inside the tumors. The maximum nuclear size can also be used to differentiate low-grade and high-grade clear cell RCC with good sensitivity and specificity. These data suggest that automatic WSI analysis may facilitate pathologic grading of renal tumors and reduce the variability encountered with manual grading.
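
    The reported discrimination by maximum nuclear size can be summarized with a standard ROC analysis; a small sketch using scikit-learn on hypothetical per-slide values (not the study's measurements) is given below.

```python
# ROC analysis of a single scalar feature (maximum nuclear size) against a
# binary grade label.  Values are illustrative placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

max_nuclear_size = np.array([48, 52, 55, 60, 63, 71, 75, 82, 90, 95], dtype=float)
is_high_grade = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print("AUC =", roc_auc_score(is_high_grade, max_nuclear_size))
fpr, tpr, thresholds = roc_curve(is_high_grade, max_nuclear_size)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold {th:6.1f}: FPR = {f:.2f}, TPR = {t:.2f}")
```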

  5. Automated semantic indexing of imaging reports to support retrieval of medical images in the multimedia electronic medical record.

    Science.gov (United States)

    Lowe, H J; Antipov, I; Hersh, W; Smith, C A; Mailhot, M

    1999-12-01

    This paper describes preliminary work evaluating automated semantic indexing of radiology imaging reports to represent images stored in the Image Engine multimedia medical record system at the University of Pittsburgh Medical Center. The authors used the SAPHIRE indexing system to automatically identify important biomedical concepts within radiology reports and represent these concepts with terms from the 1998 edition of the U.S. National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. This automated UMLS indexing was then compared with manual UMLS indexing of the same reports. Human indexing identified appropriate UMLS Metathesaurus descriptors for 81% of the important biomedical concepts contained in the report set. SAPHIRE automatically identified UMLS Metathesaurus descriptors for 64% of the important biomedical concepts contained in the report set. The overall conclusions of this pilot study were that the UMLS metathesaurus provided adequate coverage of the majority of the important concepts contained within the radiology report test set and that SAPHIRE could automatically identify and translate almost two thirds of these concepts into appropriate UMLS descriptors. Further work is required to improve both the recall and precision of this automated concept extraction process.

  6. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    OpenAIRE

    Xiang Zhu; Dianwen Zhang

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on a graphics processing unit for high-performance, scalable parallel model fitting. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for applications in superresolution localization microscopy and fluorescence lifetim...
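
    GPU-LMFit itself is a GPU library; the sketch below only illustrates the kind of per-pixel Levenberg-Marquardt fit it parallelizes, using SciPy's CPU implementation on a hypothetical single-exponential fluorescence decay. The model, parameters and data are illustrative assumptions, not part of the original work.

```python
# One pixel's model fit by Levenberg-Marquardt (scipy.optimize.least_squares,
# method="lm"); a parametric-imaging pipeline would repeat this for every pixel.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.1, 10.0, 64)                    # time bins in ns (illustrative)
true_amp, true_tau = 1000.0, 2.5
data = np.random.default_rng(0).poisson(true_amp * np.exp(-t / true_tau))

def residuals(params, t, y):
    amp, tau = params
    return amp * np.exp(-t / tau) - y

fit = least_squares(residuals, x0=[500.0, 1.0], args=(t, data), method="lm")
print("fitted amplitude and lifetime:", fit.x)
```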

  7. Automated static image analysis as a novel tool in describing the physical properties of dietary fiber

    OpenAIRE

    Kurek,Marcin Andrzej; Piwińska, Monika; Wyrwisz, Jarosław; Wierzbicka, Agnieszka

    2015-01-01

    Abstract The growing interest in the usage of dietary fiber in food has caused the need to provide precise tools for describing its physical properties. This research examined two dietary fibers from oats and beets, respectively, in variable particle sizes. The application of automated static image analysis for describing the hydration properties and particle size distribution of dietary fiber was analyzed. Conventional tests for water holding capacity (WHC) were conducted. The particles were...

  8. Automated Formosat Image Processing System for Rapid Response to International Disasters

    Science.gov (United States)

    Cheng, M. C.; Chou, S. C.; Chen, Y. C.; Chen, B.; Liu, C.; Yu, S. J.

    2016-06-01

    FORMOSAT-2, Taiwan's first remote sensing satellite, was successfully launched in May 2004 into a Sun-synchronous orbit at an altitude of 891 kilometers. With its daily revisit capability, the 2-m panchromatic and 8-m multi-spectral images captured have been used for research and operations in various societal benefit areas. This paper details the orchestration of tasks conducted by different institutions in Taiwan in response to international disasters. The institutes involved include the national space agency, the National Space Organization (NSPO), the Center for Satellite Remote Sensing Research of National Central University, the GIS Center of Feng-Chia University, and the National Center for High-performance Computing. Since each institution has its own mandate, the coordinated tasks range from receiving emergency observation requests, scheduling and tasking of satellite operations, and downlink to ground stations, to image processing including data injection and ortho-rectification, and delivery of image products. With the lessons learned from working with international partners, the FORMOSAT Image Processing System has been extensively automated and streamlined with the goal of shortening the time between request and delivery. The integrated team has developed an Application Interface to its system platform that provides functions for searching the archive catalogue, requesting data services, mission planning, inquiring about service status, and downloading images. This automated system enables timely image acquisition and substantially increases the value of the data products. An example outcome of these efforts, the recent response in support of Sentinel Asia during the Nepal Earthquake, is demonstrated herein.

  9. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.

  10. OpenComet: an automated tool for comet assay image analysis.

    Science.gov (United States)

    Gyori, Benjamin M; Venkatachalam, Gireedhar; Thiagarajan, P S; Hsu, David; Clement, Marie-Veronique

    2014-01-01

    Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.

  11. Use of an Automated Image Processing Program to Quantify Recombinant Adenovirus Particles

    Science.gov (United States)

    Obenauer-Kutner, Linda J.; Halperin, Rebecca; Ihnat, Peter M.; Tully, Christopher P.; Bordens, Ronald W.; Grace, Michael J.

    2005-02-01

    Electron microscopy has a pivotal role as an analytical tool in pharmaceutical research. However, digital image data have proven to be too large for efficient quantitative analysis. We describe here the development and application of an automated image processing (AIP) program that rapidly quantifies shape measurements of recombinant adenovirus (rAd) obtained from digitized field emission scanning electron microscope (FESEM) images. The program was written using the macro-recording features within Image-Pro® Plus software. The macro program, which is linked to a Microsoft Excel spreadsheet, consists of a series of subroutines designed to automatically measure rAd vector objects from the FESEM images. The application and utility of this macro program has enabled us to rapidly and efficiently analyze very large data sets of rAd samples while minimizing operator time.

  12. Automated pathologies detection in retina digital images based on complex continuous wavelet transform phase angles.

    Science.gov (United States)

    Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel

    2014-10-01

    An automated diagnosis system is presented that uses the complex continuous wavelet transform (CWT) to process retina digital images and support vector machines (SVMs) for classification. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of the phase angles found in each one-dimensional signal at each level of CWT decomposition is used to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments, and the results from the SVM show that the proposed approach gives better results than those obtained by other methods in terms of correct classification rate, sensitivity and specificity.
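
    A rough sketch of the feature idea follows, assuming PyWavelets and a complex Morlet wavelet (the exact wavelet and scales used in the paper are not reproduced here): flatten the image rows into a one-dimensional signal, take the complex CWT, and keep the norm of the phase angles at each scale as a texture descriptor.

```python
# Phase-angle texture features from a complex continuous wavelet transform.
# Wavelet, scales and the random "retina image" are illustrative assumptions;
# the same routine could be applied to the column-concatenated signal as well.
import numpy as np
import pywt

def phase_angle_features(image, scales=(1, 2, 4, 8), wavelet="cmor1.5-1.0"):
    signal = image.astype(float).ravel()                       # concatenate rows
    coeffs, _ = pywt.cwt(signal, np.asarray(scales, float), wavelet)
    phases = np.angle(coeffs)                                  # one row per scale
    return np.linalg.norm(phases, axis=1)                      # norm per scale

retina = np.random.default_rng(0).random((64, 64))             # placeholder image
print(phase_angle_features(retina))
```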

  13. RootGraph: a graphic optimization tool for automated image analysis of plant roots.

    Science.gov (United States)

    Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J

    2015-11-01

    This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions.

  14. Automated measurement of parameters related to the deformities of lower limbs based on x-ray images.

    Science.gov (United States)

    Wojciechowski, Wadim; Molka, Adrian; Tabor, Zbisław

    2016-03-01

    Measurement of the deformation of the lower limbs in current standard full-limb X-ray images presents significant challenges to radiologists and orthopedists. The precision of these measurements is degraded by inexact positioning of the leg during image acquisition, problems with selecting reliable anatomical landmarks in projective X-ray images, and inevitable errors of manual measurement. The influence of the random errors resulting from the last two factors on the precision of the measurement can be reduced if an automated measurement method is used instead of a manual one. In this paper, a framework for automated measurement of various metric and angular quantities used in the description of lower extremity deformation in full-limb frontal X-ray images is described. The results of automated measurements are compared with manual measurements. These results demonstrate that an automated method can be a valuable alternative to manual measurement.

  15. Automated 3D Object Documentation Based on an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows automatic evaluation of the spatial dimensions and surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarity of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point search algorithm, identical points across the image set are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as defining the relation between neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of the stereo models can be computed automatically. With the help of 3D reference points or distances on the object, or a defined camera base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the integration of the iterative closest point algorithm (ICP), these partial point clouds are fitted into a total point cloud. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. Texturing can be done automatically using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full frame sensor, high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images. The

  16. Automated construction of arterial and venous trees in retinal images.

    Science.gov (United States)

    Hu, Qiao; Abràmoff, Michael D; Garvin, Mona K

    2015-10-01

    While many approaches exist to segment retinal vessels in fundus photographs, only a limited number focus on the construction and disambiguation of arterial and venous trees. Previous approaches are local and/or greedy in nature, making them susceptible to errors or limiting their applicability to large vessels. We propose a more global framework to generate arteriovenous trees in retinal images, given a vessel segmentation. In particular, our approach consists of three stages. The first stage is to generate an overconnected vessel network, named the vessel potential connectivity map (VPCM), consisting of vessel segments and the potential connectivity between them. The second stage is to disambiguate the VPCM into multiple anatomical trees, using a graph-based metaheuristic algorithm. The third stage is to classify these trees into arterial or venous (A/V) trees. We evaluated our approach with a ground truth built based on a public database, showing a pixel-wise classification accuracy of 88.15% using a manual vessel segmentation as input, and 86.11% using an automatic vessel segmentation as input.

  17. Scanner-based image quality measurement system for automated analysis of EP output

    Science.gov (United States)

    Kipman, Yair; Mehta, Prashant; Johnson, Kate

    2003-12-01

    Inspection of electrophotographic print cartridge quality and compatibility requires analysis of hundreds of pages on a wide population of printers and copiers. Although print quality inspection is often achieved through the use of anchor prints and densitometry, more comprehensive analysis and quantitative data is desired for performance tracking, benchmarking and failure mode analysis. Image quality measurement systems range in price and performance, image capture paths and levels of automation. In order to address the requirements of a specific application, careful consideration was made to print volume, budgetary limits, and the scope of the desired image quality measurements. A flatbed scanner-based image quality measurement system was selected to support high throughput, maximal automation, and sufficient flexibility for both measurement methods and image sampling rates. Using an automatic document feeder (ADF) for sample management, a half ream of prints can be measured automatically without operator intervention. The system includes optical character recognition (OCR) for automatic determination of target type for measurement suite selection. This capability also enables measurement of mixed stacks of targets since each sample is identified prior to measurement. In addition, OCR is used to read toner ID, machine ID, print count, and other pertinent information regarding the printing conditions and environment. This data is saved to a data file along with the measurement results for complete test documentation. Measurement methods were developed to replace current methods of visual inspection and densitometry. The features that were being analyzed visually could be addressed via standard measurement algorithms. Measurement of density proved to be less simple since the scanner is not a densitometer and anything short of an excellent estimation would be meaningless. In order to address the measurement of density, a transfer curve was built to translate the

  18. Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness

    Science.gov (United States)

    Singh, Preetpal

    Canada is home to thousands of freshwater lakes and rivers. Apart from being sources of natural beauty, rivers and lakes are an important source of water, food and transportation. The northern hemisphere of Canada experiences extreme cold temperatures in the winter, resulting in a freeze-up of regional lakes and rivers. Frozen lakes and rivers offer unique opportunities in terms of wildlife harvesting and winter transportation. Ice roads built on frozen rivers and lakes are vital supply lines for industrial operations in the remote north. Monitoring the ice freeze-up and break-up dates annually can help predict regional climatic changes. Lake ice impacts a variety of physical, ecological and economic processes. The construction and maintenance of a winter road can cost millions of dollars annually. A good understanding of ice mechanics is required to build an ice road and deem it safe. A crucial factor in calculating the load-bearing capacity of ice sheets is the thickness of the ice. Construction costs are mainly attributed to producing and maintaining a specific thickness and density of ice that can support different loads. Climate change is leading to warmer temperatures, causing the ice to thin faster. At a certain point, a winter road may not be thick enough to support travel and transportation. There is considerable interest in monitoring winter road conditions given the high construction and maintenance costs involved. Remote sensing technologies such as Synthetic Aperture Radar have been successfully utilized to study the extent of ice covers and record freeze-up and break-up dates of ice on lakes and rivers across the north. Ice road builders often use ultrasound equipment to measure ice thickness. However, an automated monitoring system based on machine vision and image processing technology that can measure ice thickness on lakes has not yet been developed. Machine vision and image processing techniques have successfully been used in manufacturing

  19. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    Science.gov (United States)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib-cage and spine. The problem is addressed in a model based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  20. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

    Full Text Available In production processes the use of image processing systems is widespread. Hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software solutions in order to realise ambitious quality control for production processes. This article describes the development of innovative software that combines features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element for the automation of a manually operated production process.

  1. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    OpenAIRE

    2014-01-01

    Continuous and automated co-registration and geo-tagging of images from multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground will be imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time, the satellite would have covered a distance of about 14 km on the ground and the e...

  2. Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images

    Science.gov (United States)

    Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.

    2016-03-01

    Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in the standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, thereby minimizing user input, optimizing workflow, and reducing human error. The application begins by pre-processing the input images: removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity-based registration algorithm. The identified registration transformations are then used to map the original sector images into the panorama image. Our method focuses primarily on the use of the anatomical content of the images to generate the panoramas, as opposed to using external markers to aid the alignment process. Current results show robust edge detection prior to registration, and we have tested our approach by comparing the resulting automatically stitched panoramas to manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically and manually stitched images, using 26 patient datasets.
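
    As a simplified stand-in for the rigid-body intensity-based registration described above, the sketch below estimates only the translation between the overlap regions of two sector images with phase cross-correlation from scikit-image; a full implementation would also search over rotation. The images are synthetic placeholders.

```python
# Translation-only alignment of two overlapping sectors via phase cross-correlation.
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
upper = rng.random((200, 300))                       # lower edge of the upper sector
overlap = 40
lower = np.roll(upper, -3, axis=1)[-overlap:]        # fake lower sector, offset by 3 px

shift, error, _ = phase_cross_correlation(upper[-overlap:], lower, upsample_factor=10)
print("estimated (row, col) offset between sectors:", shift)
```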

  3. Sfm_georef: Automating image measurement of ground control points for SfM-based projects

    Science.gov (United States)

    James, Mike R.

    2016-04-01

    Deriving accurate DEM and orthomosaic image products from UAV surveys generally involves the use of multiple ground control points (GCPs). Here, we demonstrate the automated collection of GCP image measurements for SfM-MVS processed projects, using sfm_georef software (James & Robson, 2012; http://www.lancaster.ac.uk/staff/jamesm/software/sfm_georef.htm). Sfm_georef was originally written to provide geo-referencing procedures for SfM-MVS projects. It has now been upgraded with a 3-D patch-based matching routine suitable for automating GCP image measurement in both aerial and ground-based (oblique) projects, with the aim of reducing the time required for accurate geo-referencing. Sfm_georef is compatible with a range of SfM-MVS software and imports the relevant files that describe the image network, including camera models and tie points. 3-D survey measurements of ground control are then provided, either for natural features or artificial targets distributed over the project area. Automated GCP image measurement is manually initiated through identifying a GCP position in an image by mouse click; the GCP is then represented by a square planar patch in 3-D, textured from the image and oriented parallel to the local topographic surface (as defined by the 3-D positions of nearby tie points). Other images are then automatically examined by projecting the patch into the images (to account for differences in viewing geometry) and carrying out a sub-pixel normalised cross-correlation search in the local area. With two or more observations of a GCP, its 3-D co-ordinates are then derived by ray intersection. With the 3-D positions of three or more GCPs identified, an initial geo-referencing transform can be derived to relate the SfM-MVS co-ordinate system to that of the GCPs. Then, if GCPs are symmetric and identical, image texture from one representative GCP can be used to search automatically for all others throughout the image set. Finally, the GCP observations can be
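
    The patch-matching step can be illustrated with normalised cross-correlation template matching; the sketch below uses scikit-image's match_template on synthetic images, with the patch location and offset chosen arbitrarily rather than taken from any real survey.

```python
# Locate a ground-control-point patch in a second image by normalised
# cross-correlation; images, patch position and offset are illustrative.
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(1)
image_a = rng.random((400, 400))
patch = image_a[150:171, 200:221]                    # 21x21 patch around a GCP in image A
image_b = np.roll(image_a, (5, -7), axis=(0, 1))     # second view, crudely offset

response = match_template(image_b, patch, pad_input=True)
row, col = np.unravel_index(np.argmax(response), response.shape)
print("best match centre in image B:", (row, col))   # expect roughly (165, 203)
```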

  4. Automated classification of atherosclerotic plaque from magnetic resonance images using predictive models.

    Science.gov (United States)

    Anderson, Russell W; Stomberg, Christopher; Hahm, Charles W; Mani, Venkatesh; Samber, Daniel D; Itskovich, Vitalii V; Valera-Guallar, Laura; Fallon, John T; Nedanov, Pavel B; Huizenga, Joel; Fayad, Zahi A

    2007-01-01

    The information contained within multicontrast magnetic resonance images (MRI) promises to improve tissue classification accuracy, once appropriately analyzed. Predictive models capture relationships empirically from known outcomes, thereby combining pattern classification with experience. In this study, we examine the applicability of predictive modeling for atherosclerotic plaque component classification of multicontrast ex vivo MR images, using stained histopathological sections as ground truth. Ten multicontrast images from seven human coronary artery specimens were obtained on a 9.4 T imaging system using multicontrast-weighted fast spin-echo (T1-, proton density-, and T2-weighted) imaging with 39-μm isotropic voxel size. Following initial data transformations, predictive modeling focused on automating the identification of the specimens' plaque, lipid, and media. The outputs of these three models were used to calculate statistics such as total plaque burden and the ratio of hard plaque (fibrous tissue) to lipid. Both logistic regression and an artificial neural network model (Relevant Input Processor Network, RIPNet) were used for predictive modeling. When compared against segmentation resulting from cluster analysis, the RIPNet models performed between 25 and 30% better in absolute terms. This translates to a 50% higher true positive rate over given levels of false positives. This work indicates that it is feasible to build an automated system for plaque detection using MRI and data mining.
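
    One of the two predictive models used above is logistic regression on voxel-wise multicontrast intensities; a minimal scikit-learn sketch on synthetic stand-in data is shown below (the RIPNet neural network is not reproduced).

```python
# Voxel-wise plaque-component classification with logistic regression.
# Features are (T1w, PDw, T2w) intensities per voxel; all data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # 500 voxels x 3 contrast weightings
y = rng.integers(0, 2, size=500)       # 1 = lipid, 0 = fibrous tissue (illustrative)

model = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```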

  5. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets.

    Science.gov (United States)

    Bhikha, Charita; Andreasen, Arne; Christensen, Erik I; Letts, Robyn F R; Pantanowitz, Adam; Rubin, David M; Thomsen, Jesper S; Zhai, Xiao-Yue

    2015-01-01

    An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.

  6. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets

    Directory of Open Access Journals (Sweden)

    Charita Bhikha

    2015-01-01

    Full Text Available An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.

  7. NeuriteTracer: a novel ImageJ plugin for automated quantification of neurite outgrowth.

    Science.gov (United States)

    Pool, Madeline; Thiemann, Joachim; Bar-Or, Amit; Fournier, Alyson E

    2008-02-15

    In vitro assays to measure neuronal growth are a fundamental tool used by many neurobiologists studying neuronal development and regeneration. The quantification of these assays requires accurate measurements of neurite length and neuronal cell numbers in neuronal cultures. Generally, these measurements are obtained through labor-intensive manual or semi-manual tracing of images. To automate these measurements, we have written NeuriteTracer, a neurite tracing plugin for the freely available image-processing program ImageJ. The plugin analyzes fluorescence microscopy images of neurites and nuclei of dissociated cultured neurons. Given user-defined thresholds, the plugin counts neuronal nuclei, and traces and measures neurite length. We find that NeuriteTracer accurately measures neurite outgrowth from cerebellar, DRG and hippocampal neurons. Values obtained by NeuriteTracer correlate strongly with those obtained by semi-manual tracing with NeuronJ and by using a sophisticated analysis package, MetaXpress. We reveal the utility of NeuriteTracer by demonstrating its ability to detect the neurite outgrowth promoting capacity of the rho kinase inhibitor Y-27632. Our plugin is an attractive alternative to existing tracing tools because it is fully automated and ready for use within a freely accessible imaging program.

  8. An automated method for comparing motion artifacts in cine four-dimensional computed tomography images.

    Science.gov (United States)

    Cui, Guoqiang; Jew, Brian; Hong, Julian C; Johnston, Eric W; Loo, Billy W; Maxim, Peter G

    2012-11-08

    The aim of this study is to develop an automated method to objectively compare motion artifacts in two four-dimensional computed tomography (4D CT) image sets, and identify the one that would appear to human observers with fewer or smaller artifacts. Our proposed method is based on the difference of the normalized correlation coefficients between edge slices at couch transitions, which we hypothesize may be a suitable metric to identify motion artifacts. We evaluated our method using ten pairs of 4D CT image sets that showed subtle differences in artifacts between images in a pair, which were identifiable by human observers. One set of 4D CT images was sorted using breathing traces in which our clinically implemented 4D CT sorting software miscalculated the respiratory phase, which expectedly led to artifacts in the images. The other set of images consisted of the same images; however, these were sorted using the same breathing traces but with corrected phases. Next we calculated the normalized correlation coefficients between edge slices at all couch transitions for all respiratory phases in both image sets to evaluate for motion artifacts. For nine image set pairs, our method identified the 4D CT sets sorted using the breathing traces with the corrected respiratory phase to result in images with fewer or smaller artifacts, whereas for one image pair, no difference was noted. Two observers independently assessed the accuracy of our method. Both observers identified 9 image sets that were sorted using the breathing traces with corrected respiratory phase as having fewer or smaller artifacts. In summary, using the 4D CT data of ten pairs of 4D CT image sets, we have demonstrated proof of principle that our method is able to replicate the results of two human observers in identifying the image set with fewer or smaller artifacts.
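
    The metric at the heart of the comparison is the normalised correlation coefficient between the two edge slices that meet at a couch transition; a small NumPy sketch with placeholder slices is given below.

```python
# Normalised correlation coefficient between adjacent edge slices; with fewer
# motion artifacts the slices should agree better.  Arrays are placeholders.
import numpy as np

def normalized_correlation(slice_a, slice_b):
    a = slice_a.astype(float).ravel() - slice_a.mean()
    b = slice_b.astype(float).ravel() - slice_b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
edge_upper = rng.random((64, 64))
edge_lower = edge_upper + 0.05 * rng.normal(size=(64, 64))   # nearly matching anatomy
print("NCC at couch transition:", normalized_correlation(edge_upper, edge_lower))
```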

  9. Automated segmentation of cardiac visceral fat in low-dose non-contrast chest CT images

    Science.gov (United States)

    Xie, Yiting; Liang, Mingzhu; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2015-03-01

    Cardiac visceral fat was segmented from low-dose non-contrast chest CT images using a fully automated method. Cardiac visceral fat is defined as the fatty tissues surrounding the heart region, enclosed by the lungs and posterior to the sternum. It is measured by constraining the heart region with an Anatomy Label Map that contains robust segmentations of the lungs and other major organs and estimating the fatty tissue within this region. The algorithm was evaluated on 124 low-dose and 223 standard-dose non-contrast chest CT scans from two public datasets. Based on visual inspection, 343 cases had good cardiac visceral fat segmentation. For quantitative evaluation, manual markings of cardiac visceral fat regions were made in 3 image slices for 45 low-dose scans and the Dice similarity coefficient (DSC) was computed. The automated algorithm achieved an average DSC of 0.93. Cardiac visceral fat volume (CVFV), heart region volume (HRV) and their ratio were computed for each case. The correlation between cardiac visceral fat measurement and coronary artery and aortic calcification was also evaluated. Results indicated the automated algorithm for measuring cardiac visceral fat volume may be an alternative method to the traditional manual assessment of thoracic region fat content in the assessment of cardiovascular disease risk.
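
    The Dice similarity coefficient used for the quantitative evaluation is straightforward to compute; a toy sketch with two small binary masks follows.

```python
# Dice similarity coefficient between an automated and a manual binary mask.
import numpy as np

def dice(mask_a, mask_b):
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto_mask = np.zeros((10, 10), dtype=bool)
auto_mask[2:8, 2:8] = True
manual_mask = np.zeros((10, 10), dtype=bool)
manual_mask[3:8, 2:9] = True
print(f"DSC = {dice(auto_mask, manual_mask):.2f}")
```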

  10. Single-cell bacteria growth monitoring by automated DEP-facilitated image analysis.

    Science.gov (United States)

    Peitz, Ingmar; van Leeuwen, Rien

    2010-11-07

    Growth monitoring is the method of choice in many assays measuring the presence or properties of pathogens, e.g. in diagnostics and food quality. Established methods, relying on culturing large numbers of bacteria, are rather time-consuming, while in healthcare time often is crucial. Several new approaches have been published, mostly aiming at assaying growth or other properties of a small number of bacteria. However, no method so far readily achieves single-cell resolution with a convenient and easy to handle setup that offers the possibility for automation and high throughput. We demonstrate these benefits in this study by employing dielectrophoretic capturing of bacteria in microfluidic electrode structures, optical detection and automated bacteria identification and counting with image analysis algorithms. For a proof-of-principle experiment we chose an antibiotic susceptibility test with Escherichia coli and polymyxin B. Growth monitoring is demonstrated on single cells and the impact of the antibiotic on the growth rate is shown. The minimum inhibitory concentration as a standard diagnostic parameter is derived from a dose-response plot. This report is the basis for further integration of image analysis code into device control. Ultimately, an automated and parallelized setup may be created, using an optical microscanner and many of the electrode structures simultaneously. Sufficient data for a sound statistical evaluation and a confirmation of the initial findings can then be generated in a single experiment.

  11. NOTE: Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction

    Science.gov (United States)

    Holan, Scott H.; Viator, John A.

    2008-06-01

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.
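
    A much-simplified sketch of universal-threshold wavelet denoising with PyWavelets is shown below; it uses the ordinary DWT rather than the MODWT described above, and the photoacoustic signal, wavelet choice and decomposition level are illustrative assumptions.

```python
# Level-independent universal-threshold (soft) wavelet denoising of a 1-D signal.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.exp(-((t - 0.5) ** 2) / 0.001)             # idealised photoacoustic pulse
noisy = clean + 0.2 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, "sym8", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest details
uthresh = sigma * np.sqrt(2.0 * np.log(noisy.size))   # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, uthresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "sym8")
print("residual RMS after denoising:", np.sqrt(np.mean((denoised - clean) ** 2)))
```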

  12. Automated Line Tracking of lambda-DNA for Single-Molecule Imaging

    CERN Document Server

    Guan, Juan; Granick, Steve

    2011-01-01

    We describe a straightforward, automated line tracking method to visualize within optical resolution the contour of linear macromolecules as they rearrange shape as a function of time by Brownian diffusion and under external fields such as electrophoresis. Three sequential stages of analysis underpin this method: first, "feature finding" to discriminate signal from noise; second, "line tracking" to approximate those shapes as lines; third, "temporal consistency check" to discriminate reasonable from unreasonable fitted conformations in the time domain. The automated nature of this data analysis makes it straightforward to accumulate vast quantities of data while excluding the unreliable parts of it. We implement the analysis on fluorescence images of lambda-DNA molecules in agarose gel to demonstrate its capability to produce large datasets for subsequent statistical analysis.

  13. Estimation of urinary stone composition by automated processing of CT images

    CERN Document Server

    Chevreau, Grégoire; Conort, Pierre; Renard-Penna, Raphaëlle; Mallet, Alain; Daudon, Michel; Mozer, Pierre; 10.1007/s00240-009-0195-3

    2009-01-01

    The objective of this article was to develop an automated tool for routine clinical practice to estimate urinary stone composition from CT images based on the density of all constituent voxels. A total of 118 stones for which the composition had been determined by infrared spectroscopy were placed in a helical CT scanner. Standard, low-dose and high-dose acquisitions were performed. All voxels constituting each stone were automatically selected. A dissimilarity index evaluating variations of density around each voxel was created in order to minimize partial volume effects: stone composition was established on the basis of the voxel density of homogeneous zones. Stone composition was determined in 52% of cases. Sensitivities for each compound were: uric acid: 65%, struvite: 19%, cystine: 78%, carbapatite: 33.5%, calcium oxalate dihydrate: 57%, calcium oxalate monohydrate: 66.5%, brushite: 75%. Low-dose acquisition did not lower the performance (P < 0.05). This entirely automated approach eliminat...

  14. Fully automated segmentation of left ventricle using dual dynamic programming in cardiac cine MR images

    Science.gov (United States)

    Jiang, Luan; Ling, Shan; Li, Qiang

    2016-03-01

    Cardiovascular diseases are becoming a leading cause of death all over the world. The cardiac function could be evaluated by global and regional parameters of left ventricle (LV) of the heart. The purpose of this study is to develop and evaluate a fully automated scheme for segmentation of LV in short axis cardiac cine MR images. Our fully automated method consists of three major steps, i.e., LV localization, LV segmentation at end-diastolic phase, and LV segmentation propagation to the other phases. First, the maximum intensity projection image along the time phases of the midventricular slice, located at the center of the image, was calculated to locate the region of interest of LV. Based on the mean intensity of the roughly segmented blood pool in the midventricular slice at each phase, end-diastolic (ED) and end-systolic (ES) phases were determined. Second, the endocardial and epicardial boundaries of LV of each slice at ED phase were synchronously delineated by use of a dual dynamic programming technique. The external costs of the endocardial and epicardial boundaries were defined with the gradient values obtained from the original and enhanced images, respectively. Finally, with the advantages of the continuity of the boundaries of LV across adjacent phases, we propagated the LV segmentation from the ED phase to the other phases by use of dual dynamic programming technique. The preliminary results on 9 clinical cardiac cine MR cases show that the proposed method can obtain accurate segmentation of LV based on subjective evaluation.
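
    The sketch below shows a generic single-boundary version of dynamic-programming delineation on a synthetic cost image: one row is chosen per column so that the external cost plus a smoothness penalty is minimal. The paper's dual dynamic programming couples two such boundaries (endocardium and epicardium); that coupling, and all parameters here, are assumptions for illustration rather than a reproduction of the original method.

```python
# Toy dynamic-programming boundary tracking: minimise accumulated cost with a
# smoothness penalty between neighbouring columns.  Data and weights are illustrative.
import numpy as np

def track_boundary(cost, max_jump=2, smooth_weight=0.5):
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1] + smooth_weight * np.abs(np.arange(lo, hi) - r)
            best = int(np.argmin(prev))
            acc[r, c] += prev[best]
            back[r, c] = lo + best
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):        # backtrack from last column to first
        path.append(back[path[-1], c])
    return path[::-1]                       # one row index per column

gradient = np.random.default_rng(0).random((40, 60))
gradient[20, :] += 5.0                      # a strong edge along row 20
print(track_boundary(-gradient)[:10])       # boundary should follow row 20
```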

  15. A method for the automated detection of phishing websites through both site characteristics and image analysis

    Science.gov (United States)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

    Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying whether suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as the number of images and links. We also capture a screenshot of the rendered page, compute a hash of the image and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results demonstrating the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot, and by performing initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
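
    The screenshot comparison can be illustrated with a simple average hash and a Hamming distance; this generic perceptual hash is an assumption for illustration and not necessarily the hash used by the authors, and the file names are hypothetical.

```python
# Average-hash fingerprints of rendered page screenshots and their Hamming distance.
import numpy as np
from PIL import Image

def average_hash(path, size=8):
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=float)
    return (pixels > pixels.mean()).flatten()          # 64-bit boolean fingerprint

def hamming(hash_a, hash_b):
    return int(np.count_nonzero(hash_a != hash_b))

# Hypothetical file names; a distance near 0 means visually near-identical pages,
# suggesting the suspect page is a spoof of the legitimate one.
# print(hamming(average_hash("legitimate.png"), average_hash("suspect.png")))
```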

  16. Detailed interrogation of trypanosome cell biology via differential organelle staining and automated image analysis

    Directory of Open Access Journals (Sweden)

    Wheeler Richard J

    2012-01-01

    Full Text Available Abstract Background Many trypanosomatid protozoa are important human or animal pathogens. The well defined morphology and precisely choreographed division of trypanosomatid cells makes morphological analysis a powerful tool for analyzing the effect of mutations, chemical insults and changes between lifecycle stages. High-throughput image analysis of micrographs has the potential to accelerate collection of quantitative morphological data. Trypanosomatid cells have two large DNA-containing organelles, the kinetoplast (mitochondrial DNA) and the nucleus, which provide useful markers for morphometric analysis; however, they need to be accurately identified and often lie in close proximity. This presents a technical challenge. Accurate identification and quantitation of the DNA content of these organelles is a central requirement of any automated analysis method. Results We have developed a technique based on double staining of the DNA with a minor groove binding (4',6-diamidino-2-phenylindole, DAPI) and a base pair intercalating (propidium iodide, PI, or SYBR green) fluorescent stain and color deconvolution. This allows the identification of kinetoplast and nuclear DNA in the micrograph based on whether the organelle has DNA with a more A-T or G-C rich composition. Following unambiguous identification of the kinetoplasts and nuclei, the resulting images are amenable to quantitative automated analysis of kinetoplast and nucleus number and DNA content. On this foundation we have developed a demonstrative analysis tool capable of automatically measuring kinetoplast and nucleus DNA content, size and position and cell body shape, length and width. Conclusions Our approach to DNA staining and automated quantitative analysis of trypanosomatid morphology accelerated analysis of trypanosomatid protozoa. We have validated this approach using Leishmania mexicana, Crithidia fasciculata and wild-type and mutant Trypanosoma brucei. Automated analysis of T. brucei

  17. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Ani eEloyan

    2012-08-01

    Full Text Available Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions, CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.
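
    One ingredient of the prediction pipeline, a random forest on connectivity features plus covariates, can be sketched as follows; the feature matrix and labels are synthetic stand-ins, not the ADHD 200 data.

```python
# Random-forest prediction of ADHD status from flattened connectivity features
# (e.g., motor-network correlations) and covariates; all data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
connectivity = rng.normal(size=(300, 120))    # 300 subjects x 120 edge correlations
covariates = rng.normal(size=(300, 4))        # e.g., IQ and other covariates
X = np.hstack([connectivity, covariates])
y = rng.integers(0, 2, size=300)              # 1 = ADHD, 0 = typically developing

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```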

  18. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging.

    Science.gov (United States)

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D; Joel, Suresh; Pekar, James J; Mostofsky, Stewart H; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.

  19. Automated choroid segmentation based on gradual intensity distance in HD-OCT images.

    Science.gov (United States)

    Chen, Qiang; Fan, Wen; Niu, Sijie; Shi, Jiajia; Shen, Honglie; Yuan, Songtao

    2015-04-06

    The choroid is an important structure of the eye and plays a vital role in the pathology of retinal diseases. This paper presents an automated choroid segmentation method for high-definition optical coherence tomography (HD-OCT) images, including Bruch's membrane (BM) segmentation and choroidal-scleral interface (CSI) segmentation. An improved retinal nerve fiber layer (RNFL) complex removal algorithm is presented to segment the BM by considering the structural characteristics of the retinal layers. By analyzing the characteristics of CSI boundaries, we present a novel algorithm to generate a gradual intensity distance image. Then an improved 2-D graph search method with curve smoothness constraints is used to obtain the CSI segmentation. Experimental results with 212 HD-OCT images from 110 eyes of 66 patients demonstrate that the proposed method can achieve high segmentation accuracy. The mean choroid thickness difference and overlap ratio between our proposed method and outlines drawn by experts were 6.72 µm and 85.04%, respectively.

  20. Technique for Automated Recognition of Sunspots on Full-Disk Solar Images

    Directory of Open Access Journals (Sweden)

    Zharkov S

    2005-01-01

    Full Text Available A new robust technique is presented for the automated identification of sunspots on full-disk white-light (WL) solar images obtained from the SOHO/MDI instrument and Ca II K1 line images from the Meudon Observatory. Edge-detection methods are applied to find sunspot candidates, followed by local thresholding using statistical properties of the region around the sunspots. Possible initial oversegmentation of the images is remedied with a median filter. The features are smoothed using morphological closing operations and filled by applying a watershed transform, followed by a dilation operator to define regions of interest containing sunspots. A number of physical and geometrical parameters of the detected sunspot features are extracted and stored in a relational database, along with umbra-penumbra information in the form of pixel run-length data within a bounding rectangle. The detection results show very good agreement with manual synoptic maps and a very high correlation with those produced manually by the NOAA Observatory, USA.
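
    A reduced sketch of such a candidate-detection pipeline (local thresholding, median filtering, morphological closing, and labelling) is shown below using scikit-image; the published method additionally uses edge detection, watershed filling and per-feature statistics, and all parameters and the synthetic disk image here are illustrative.

```python
# Dark-feature (sunspot candidate) detection on a synthetic white-light disk.
import numpy as np
from scipy import ndimage
from skimage import filters, measure, morphology

rng = np.random.default_rng(0)
disk = np.full((256, 256), 0.9) + 0.01 * rng.normal(size=(256, 256))
disk[100:110, 120:132] = 0.3                                    # a dark "sunspot"

candidates = disk < filters.threshold_local(disk, block_size=51, offset=0.05)
candidates = ndimage.median_filter(candidates, size=3)          # remove speckle
candidates = morphology.binary_closing(candidates, morphology.disk(2))
labels = measure.label(candidates)
for region in measure.regionprops(labels):
    print("candidate: area =", region.area, "centroid =", region.centroid)
```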

  1. Automated system for acquisition and image processing for the control and monitoring boned nopal

    Science.gov (United States)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

    This paper describes the design and fabrication of a system for acquisition and image processing to control the removal of thorns from the nopal vegetable (Opuntia ficus-indica) in an automated machine that uses pulses of an Nd:YAG laser. The areolas, the areas where thorns grow on the bark of the nopal, are located by applying segmentation algorithms to the images obtained by a CCD. Once the position of the areolas is known, coordinates are sent to a motor system that controls the laser to interact with all areolas and remove the thorns of the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware performs acquisition, preprocessing, segmentation, recognition and interpretation of the areolas. The system thus identifies the areolas and generates a table of their coordinates, which is sent to the galvo motor system that controls the laser for thorn removal.
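    The segmentation-to-coordinate-table step can be sketched with scikit-image as thresholding, labelling and centroid extraction; the toy frame, the threshold choice and the function name areola_coordinates are hypothetical stand-ins for the system's DSP firmware.

```python
import numpy as np
from skimage import filters, measure, morphology

def areola_coordinates(gray):
    """Return (row, col) centroids of dark, areola-like blobs in a grayscale
    frame, as a list a motor/galvo controller could consume."""
    mask = gray < filters.threshold_otsu(gray)        # areolas assumed darker
    mask = morphology.remove_small_objects(mask, min_size=20)
    labels = measure.label(mask)
    return [r.centroid for r in measure.regionprops(labels)]

# Toy frame: bright cladode with two darker areolas
frame = np.full((200, 200), 0.8)
frame[50:60, 40:50] = 0.20
frame[120:130, 150:160] = 0.25
for y, x in areola_coordinates(frame):
    print(f"areola at row={y:.1f}, col={x:.1f}")
```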

  2. Automated characterization of blood vessels as arteries and veins in retinal images.

    Science.gov (United States)

    Mirsharif, Qazaleh; Tajeripour, Farshad; Pourreza, Hamidreza

    2013-01-01

    In recent years researchers have found that alterations in the arterial or venular tree of the retinal vasculature are associated with several public health problems such as diabetic retinopathy, which is a leading cause of blindness worldwide. A prerequisite for automated assessment of subtle changes in arteries and veins is to accurately separate those vessels from each other. This is a difficult task due to the high similarity between arteries and veins, in addition to variation of color and non-uniform illumination within and between retinal images. In this paper a novel structural and automated method is presented for artery/vein classification of blood vessels in retinal images. The proposed method consists of three main steps. In the first step, several image enhancement techniques are employed to improve the images. Then a specific feature extraction process is applied to separate major arteries from veins. Indeed, vessels are divided into smaller segments and feature extraction and vessel classification are applied to each small vessel segment instead of each vessel point. Finally, a post-processing step is added to improve the results obtained from the previous step using structural characteristics of the retinal vascular network. In the last stage, vessel features at intersection and bifurcation points are processed for detection of arterial and venular sub-trees. Ultimately vessel labels are revised by propagating the dominant label through each identified connected tree of arteries or veins. Evaluation of the proposed approach against two different datasets of retinal images, including the DRIVE database, demonstrates the good performance and robustness of the method. The proposed method may be used for determination of the arteriolar-to-venular diameter ratio in retinal images. The proposed method also potentially allows for further investigation of the labels of thinner arteries and veins, which might be found by tracing them back to the major vessels.

  3. Difference Tracker: ImageJ plugins for fully automated analysis of multiple axonal transport parameters.

    Science.gov (United States)

    Andrews, Simon; Gilley, Jonathan; Coleman, Michael P

    2010-11-30

    Studies of axonal transport are critical, not only to understand its normal regulation, but also to determine the roles of transport impairment in disease. Exciting new resources have recently become available allowing live imaging of axonal transport in physiologically relevant settings, such as mammalian nerves. Thus the effects of disease, ageing and therapies can now be assessed directly in nervous system tissue. However, these imaging studies present new challenges. Manual or semi-automated analysis of the range of transport parameters required for a suitably complete evaluation is very time-consuming and can be subjective due to the complexity of the particle movements in axons in ex vivo explants or in vivo. We have developed Difference Tracker, a program combining two new plugins for the ImageJ image-analysis freeware, to provide fast, fully automated and objective analysis of a number of relevant measures of trafficking of fluorescently labeled particles so that axonal transport in different situations can be easily compared. We confirm that Difference Tracker can accurately track moving particles in highly simplified, artificial simulations. It can also identify and track multiple motile fluorescently labeled mitochondria simultaneously in time-lapse image stacks from live imaging of tibial nerve axons, reporting values for a number of parameters that are comparable to those obtained through manual analysis of the same axons. Difference Tracker therefore represents a useful free resource for the comparative analysis of axonal transport under different conditions, and could potentially be used and developed further in many other studies requiring quantification of particle movements.

  4. The use of the Kalman filter in the automated segmentation of EIT lung images.

    Science.gov (United States)

    Zifan, A; Liatsis, P; Chapman, B E

    2013-06-01

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time, low spatial but high temporal resolution images of impedance inside a body. Recovering impedance itself constitutes a nonlinear ill-posed inverse problem; therefore, the problem is usually linearized, which produces impedance-change images rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide the mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we proceed with augmenting the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to carry out the tracking of the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated by using performance statistics such as misclassified area and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
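    A generic constant-velocity Kalman filter in NumPy, of the kind that could track a scalar boundary coordinate across EIT frames, is sketched below; it illustrates the filter itself under assumed noise settings, not the authors' shape-constrained pipeline.

```python
import numpy as np

# Constant-velocity model for one tracked boundary coordinate
F = np.array([[1.0, 1.0],      # state transition: position += velocity
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])     # we only measure position
Q = np.eye(2) * 1e-3           # process noise covariance (assumed)
R = np.array([[0.5]])          # measurement noise covariance (assumed)

x = np.array([0.0, 0.0])       # initial state [position, velocity]
P = np.eye(2)

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Noisy measurements of a boundary moving during a respiratory cycle
true = 10 + 5 * np.sin(np.linspace(0, 2 * np.pi, 40))
for z in true + np.random.normal(0, 0.7, size=40):
    x, P = kalman_step(x, P, np.array([z]))
print("final filtered position:", x[0])
```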

  5. Automated generation of curved planar reformations from MR images of the spine

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, SI-1000 Ljubljana (Slovenia); Ourselin, Sebastien [CSIRO ICT Centre, Autonomous Systems Laboratory, BioMedIA Lab, Locked Bag 17, North Ryde, NSW 2113 (Australia); Gomes, Lavier [Department of Radiology, Westmead Hospital, University of Sydney, Hawkesbury Road, Westmead NSW 2145 (Australia); Likar, Bostjan [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, SI-1000 Ljubljana (Slovenia); Pernus, Franjo [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2007-05-21

    A novel method for automated curved planar reformation (CPR) of magnetic resonance (MR) images of the spine is presented. The CPR images, generated by a transformation from image-based to spine-based coordinate system, follow the structural shape of the spine and allow the whole course of the curved anatomy to be viewed in individual cross-sections. The three-dimensional (3D) spine curve and the axial vertebral rotation, which determine the transformation, are described by polynomial functions. The 3D spine curve passes through the centres of vertebral bodies, while the axial vertebral rotation determines the rotation of vertebrae around the axis of the spinal column. The optimal polynomial parameters are obtained by a robust refinement of the initial estimates of the centres of vertebral bodies and axial vertebral rotation. The optimization framework is based on the automatic image analysis of MR spine images that exploits some basic anatomical properties of the spine. The method was evaluated on 21 MR images from 12 patients and the results provided a good description of spine anatomy, with mean errors of 2.5 mm and 1.7° for the position of the 3D spine curve and axial rotation of vertebrae, respectively. The generated CPR images are independent of the position of the patient in the scanner while comprising both anatomical and geometrical properties of the spine.

  6. Automated generation of curved planar reformations from MR images of the spine

    Science.gov (United States)

    Vrtovec, Tomaz; Ourselin, Sébastien; Gomes, Lavier; Likar, Boštjan; Pernuš, Franjo

    2007-05-01

    A novel method for automated curved planar reformation (CPR) of magnetic resonance (MR) images of the spine is presented. The CPR images, generated by a transformation from image-based to spine-based coordinate system, follow the structural shape of the spine and allow the whole course of the curved anatomy to be viewed in individual cross-sections. The three-dimensional (3D) spine curve and the axial vertebral rotation, which determine the transformation, are described by polynomial functions. The 3D spine curve passes through the centres of vertebral bodies, while the axial vertebral rotation determines the rotation of vertebrae around the axis of the spinal column. The optimal polynomial parameters are obtained by a robust refinement of the initial estimates of the centres of vertebral bodies and axial vertebral rotation. The optimization framework is based on the automatic image analysis of MR spine images that exploits some basic anatomical properties of the spine. The method was evaluated on 21 MR images from 12 patients and the results provided a good description of spine anatomy, with mean errors of 2.5 mm and 1.7° for the position of the 3D spine curve and axial rotation of vertebrae, respectively. The generated CPR images are independent of the position of the patient in the scanner while comprising both anatomical and geometrical properties of the spine.
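    The core geometric idea, describing the 3-D spine curve by polynomial functions fitted through vertebral body centres, can be sketched with NumPy as follows; the centre coordinates and the polynomial degree are synthetic and arbitrary.

```python
import numpy as np

# Hypothetical vertebral body centres (x, y) at axial positions z (mm)
z = np.linspace(0, 400, 17)                      # 17 vertebrae, synthetic
x = 5 * np.sin(z / 150) + np.random.normal(0, 1, z.size)
y = 20 + 0.03 * z + np.random.normal(0, 1, z.size)

# Describe the 3-D spine curve by two polynomials x(z) and y(z)
deg = 4
px = np.polynomial.Polynomial.fit(z, x, deg)
py = np.polynomial.Polynomial.fit(z, y, deg)

# Sample the curve densely, e.g. to drive a curved planar reformation
z_dense = np.linspace(z.min(), z.max(), 200)
curve = np.column_stack([px(z_dense), py(z_dense), z_dense])
print(curve[:3])
```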

  7. Fully automated quantitative analysis of breast cancer risk in DCE-MR images

    Science.gov (United States)

    Jiang, Luan; Hu, Xiaoxin; Gu, Yajia; Li, Qiang

    2015-03-01

    The amount of fibroglandular tissue (FGT) and the background parenchymal enhancement (BPE) in dynamic contrast-enhanced magnetic resonance (DCE-MR) images are two important indices for breast cancer risk assessment in clinical practice. The purpose of this study is to develop and evaluate a fully automated scheme for quantitative analysis of FGT and BPE in DCE-MR images. Our fully automated method consists of three steps, i.e., segmentation of the whole breast, the fibroglandular tissues, and the enhanced fibroglandular tissues. Based on the volume of interest extracted automatically, a dynamic programming method was applied in each 2-D slice of a 3-D MR scan to delineate the chest wall and breast skin line for segmenting the whole breast. This step took advantage of the continuity of the chest wall and breast skin line across adjacent slices. We then used a fuzzy c-means clustering method with automatic selection of the cluster number for segmenting the fibroglandular tissues within the segmented whole breast area. Finally, a statistical method was used to set a threshold based on the estimated noise level for segmenting the enhanced fibroglandular tissues in the subtraction images of pre- and post-contrast MR scans. Based on the segmented whole breast, fibroglandular tissues, and enhanced fibroglandular tissues, FGT and BPE were automatically computed. Preliminary results of technical evaluation and clinical validation showed that our fully automated scheme could obtain good segmentation of the whole breast, fibroglandular tissues, and enhanced fibroglandular tissues to achieve accurate assessment of FGT and BPE for quantitative analysis of breast cancer risk.
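    The clustering step used for the fibroglandular segmentation can be illustrated with a compact NumPy implementation of standard fuzzy c-means on pixel intensities, written from the textbook update equations (not the authors' code; the cluster count is fixed here rather than selected automatically).

```python
import numpy as np

def fuzzy_cmeans(values, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Standard FCM on a 1-D array of intensities.
    Returns cluster centres and the membership matrix (N x C)."""
    rng = np.random.default_rng(seed)
    u = rng.random((values.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ values) / um.sum(axis=0)          # weighted means
        dist = np.abs(values[:, None] - centres[None, :]) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))                   # membership update
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

# Toy intensities roughly mimicking fat, fibroglandular and enhanced tissue
pixels = np.concatenate([np.random.normal(0.2, 0.05, 500),
                         np.random.normal(0.5, 0.05, 300),
                         np.random.normal(0.8, 0.05, 200)])
centres, u = fuzzy_cmeans(pixels, n_clusters=3)
labels = u.argmax(axis=1)
print("cluster centres:", np.sort(centres))
```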

  8. Automating quality assurance of digital linear accelerators using a radioluminescent phosphor coated phantom and optical imaging

    Science.gov (United States)

    Jenkins, Cesare H.; Naczynski, Dominik J.; Yu, Shu-Jung S.; Yang, Yong; Xing, Lei

    2016-09-01

    Performing mechanical and geometric quality assurance (QA) tests for medical linear accelerators (LINAC) is a predominantly manual process that consumes significant time and resources. In order to alleviate this burden, this study proposes a novel strategy to automate the process of performing these tests. The autonomous QA system consists of three parts: (1) a customized phantom coated with radioluminescent material; (2) an optical imaging system capable of visualizing the incidence of the radiation beam, light field or lasers on the phantom; and (3) software to process the captured signals. The radioluminescent phantom, which enables visualization of the radiation beam on the same surface as the light field and lasers, is placed on the couch and imaged while a predefined treatment plan is delivered from the LINAC. The captured images are then processed to self-calibrate the system and perform measurements for evaluating light field/radiation coincidence, jaw position indicators, cross-hair centering, treatment couch position indicators and localizing laser alignment. System accuracy is probed by intentionally introducing errors and by comparing with current clinical methods. The accuracy of self-calibration is evaluated by examining measurement repeatability under fixed and variable phantom setups. The integrated system was able to automatically collect, analyze and report the results for the mechanical alignment tests specified by TG-142. The average difference between introduced and measured errors was 0.13 mm. The system was shown to be consistent with current techniques. Measurement variability increased slightly from 0.1 mm to 0.2 mm when the phantom setup was varied, but no significant difference in the mean measurement value was detected. Total measurement time was less than 10 minutes for all tests as a result of automation. The system's unique features of a phosphor-coated phantom and fully automated, operator-independent self-calibration offer the…

  9. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    Science.gov (United States)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is to reduce the large number of false positives. A large proportion of false positives originate from acoustic shadowing caused by ribs. Therefore determining the location of the chest wall in ABUS is necessary in CAD systems to remove these false positives. Additionally it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that adds a region cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images where our previously developed method failed. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  10. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    Science.gov (United States)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying Automated Breast Ultrasound in addition to mammography in women with dense breasts can lead to additional detection of small, early-stage breast cancers which are occult in corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from Automated Breast Ultrasound Systems. The nipple location is a valuable landmark to report the position of possible abnormalities in a breast or to guide image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied to a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (Anterior-Posterior) views and in 79% of the other views.

  11. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    Science.gov (United States)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and the percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
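    The two agreement measures reported above, per-class pixel-count agreement and Pearson's correlation between regional counts, are easy to reproduce for any pair of categorical maps; the sketch below uses synthetic label images and is not the original IBIS workflow.

```python
import numpy as np

def regional_agreement(map_a, map_b):
    """Compare pixel counts per CCM class between two label maps and return
    per-class percent agreement plus Pearson's r of the regional counts."""
    classes = np.union1d(np.unique(map_a), np.unique(map_b))
    counts_a = np.array([(map_a == c).sum() for c in classes])
    counts_b = np.array([(map_b == c).sum() for c in classes])
    agreement = 100 * np.minimum(counts_a, counts_b) / np.maximum(counts_a, counts_b)
    r = np.corrcoef(counts_a, counts_b)[0, 1]
    return dict(zip(classes, agreement)), r

# Synthetic manually produced vs automated CCM maps with 4 classes
rng = np.random.default_rng(0)
manual = rng.integers(0, 4, size=(300, 300))
auto = manual.copy()
flip = rng.random(manual.shape) < 0.05          # 5% of pixels disagree
auto[flip] = rng.integers(0, 4, size=flip.sum())
per_class, r = regional_agreement(manual, auto)
print(per_class, "Pearson r =", round(r, 3))
```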

  12. Automated segmentation of oral mucosa from wide-field OCT images (Conference Presentation)

    Science.gov (United States)

    Goldan, Ryan N.; Lee, Anthony M. D.; Cahill, Lucas; Liu, Kelly; MacAulay, Calum; Poh, Catherine F.; Lane, Pierre

    2016-03-01

    Optical Coherence Tomography (OCT) can discriminate morphological tissue features important for oral cancer detection such as the presence or absence of basement membrane and epithelial thickness. We previously reported an OCT system employing a rotary-pullback catheter capable of in vivo, rapid, wide-field (up to 90 × 2.5 mm²) imaging in the oral cavity. Due to the size and complexity of these OCT data sets, rapid automated image processing software that immediately displays important tissue features is required to facilitate prompt bed-side clinical decisions. We present an automated segmentation algorithm capable of detecting the epithelial surface and basement membrane in 3D OCT images of the oral cavity. The algorithm was trained using volumetric OCT data acquired in vivo from a variety of tissue types and histology-confirmed pathologies spanning normal through cancer (8 sites, 21 patients). The algorithm was validated using a second dataset of similar size and tissue diversity. We demonstrate application of the algorithm to an entire OCT volume to map epithelial thickness, and detection of the basement membrane, over the tissue surface. These maps may be clinically useful for delineating pre-surgical tumor margins, or for biopsy site guidance.

  13. Automated detection of regions of interest for tissue microarray experiments: an image texture analysis

    Directory of Open Access Journals (Sweden)

    Tözeren Aydin

    2007-03-01

    Full Text Available Abstract Background Recent research with tissue microarrays led to rapid progress toward quantifying the expressions of large sets of biomarkers in normal and diseased tissue. However, standard procedures for sampling tissue for molecular profiling have not yet been established. Methods This study presents a high throughput analysis of texture heterogeneity on breast tissue images for the purpose of identifying regions of interest in the tissue for molecular profiling via tissue microarray technology. Image texture of breast histology slides was described in terms of three parameters: the percentage of area occupied in an image block by chromatin (B), the percentage occupied by stroma-like regions (P), and a statistical heterogeneity index (H) commonly used in image analysis. Texture parameters were defined and computed for each of the thousands of image blocks in our dataset using both gray-scale and color segmentation. The image blocks were then classified into three categories using the texture feature parameters in a novel statistical learning algorithm. These categories are as follows: image blocks specific to normal breast tissue, blocks specific to cancerous tissue, and those image blocks that are non-specific to normal and disease states. Results Gray-scale and color segmentation techniques led to identification of the same regions in histology slides as cancer-specific. Moreover the image blocks identified as cancer-specific belonged to those cell-crowded regions in whole section image slides that were marked by two pathologists as regions of interest for further histological studies. Conclusion These results indicate the high efficiency of our automated method for identifying pathologic regions of interest on histology slides. Automation of critical region identification will help minimize the inter-rater variability among different raters (pathologists), as hundreds of tumors that are used to develop an array have typically been evaluated…

  14. Hybrid Segmentation of Vessels and Automated Flow Measures in In-Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Martins, Bo; Hansen, Kristoffer Lindskov

    2016-01-01

    Vector Flow Imaging (VFI) has received an increasing attention in the scientific field of ultrasound, as it enables angle independent visualization of blood flow. VFI can be used in volume flow estimation, but a vessel segmentation is needed to make it fully automatic. A novel vessel segmentation...... procedure is crucial for wall-to-wall visualization, automation of adjustments, and quantification of flow in state-of-the-art ultrasound scanners. We propose and discuss a method for accurate vessel segmentation that fuses VFI data and B-mode for robustly detecting and delineating vessels. The proposed...

  15. Automated Image Segmentation And Characterization Technique For Effective Isolation And Representation Of Human Face

    Directory of Open Access Journals (Sweden)

    Rajesh Reddy N

    2014-01-01

    Full Text Available In areas such as defense and forensics, it is necessary to identify the faces of criminals from an already available database. An automated face recognition system involves face isolation, feature extraction and classification techniques. A challenge in face recognition systems is isolating the face effectively, as it may be affected by illumination, posture and variation in skin color. Hence it is necessary to develop an effective algorithm that isolates the face from the image. In this paper, an advanced face isolation technique and a feature extraction technique are proposed.

  16. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.

    1995-01-01

    species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume......, and the frequency of dividing cells in a cell population. These parameters were used to compare physiological states of liquid-suspended and surfacegrowing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found...

  17. Semi-automated discrimination of retinal pigmented epithelial cells in two-photon fluorescence images of mouse retinas

    Science.gov (United States)

    Alexander, Nathan S.; Palczewska, Grazyna; Palczewski, Krzysztof

    2015-01-01

    Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method for segmenting RPE cells that relies upon multiple weak features that differentiate cell borders from the remaining image. These features were scored by a search optimization procedure that built up the cell border in segments around a nucleus of interest. With six images used as a test, our method correctly identified cell borders for 69% of nuclei on average. Performance was strongly dependent upon increasing retinosome content in the RPE. TPM image analysis has the potential of providing improved early quantitative assessments of diseases affecting the RPE. PMID:26309765

  18. Automating the Analysis of Spatial Grids A Practical Guide to Data Mining Geospatial Images for Human & Environmental Applications

    CERN Document Server

    Lakshmanan, Valliappa

    2012-01-01

    The ability to create automated algorithms to process gridded spatial data is increasingly important as remotely sensed datasets increase in volume and frequency. Whether in business, social science, ecology, meteorology or urban planning, the ability to create automated applications to analyze and detect patterns in geospatial data is increasingly important. This book provides students with a foundation in topics of digital image processing and data mining as applied to geospatial datasets. The aim is for readers to be able to devise and implement automated techniques to extract information from spatial grids such as radar, satellite or high-resolution survey imagery.

  19. AI (artificial intelligence in histopathology--from image analysis to automated diagnosis.

    Directory of Open Access Journals (Sweden)

    Aleksandar Bogovac

    2010-02-01

    Full Text Available The technological progress in digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of microscopic images as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits attachments of several (if not all) fields of view and the contemporary visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. To introduce digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangement and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, which is the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in steps as follows: 1. The individual image quality has to be measured, and corrected, if necessary. 2. A diagnostic algorithm has to be applied. An algorithm has to be developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves without any labeling. 5. The pathologists' duty will not be replaced by such a system; to the contrary, the pathologist will manage and supervise the system, i.e., work at a "higher level". Virtual slides are already in use for teaching and…

  20. AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.

    Science.gov (United States)

    Kayser, Klaus; Görtler, Jürgen; Bogovac, Milica; Bogovac, Aleksandar; Goldmann, Torsten; Vollmer, Ekkehard; Kayser, Gian

    2009-01-01

    The technological progress in digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of microscopic images as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits attachments of several (if not all) fields of view and the contemporary visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. To introduce digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangement and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, which is the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in steps as follows: 1. The individual image quality has to be measured, and corrected, if necessary. 2. A diagnostic algorithm has to be applied. An algorithm has to be developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves without any labeling. 5. The pathologists' duty will not be replaced by such a system; to the contrary, the pathologist will manage and supervise the system, i.e., work at a "higher level". Virtual slides are already in use for teaching and continuous…

  1. Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking

    Science.gov (United States)

    Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward

    2011-01-01

    Purpose: To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk…

  2. Automated segmentation and geometrical modeling of the tricuspid aortic valve in 3D echocardiographic images.

    Science.gov (United States)

    Pouch, Alison M; Wang, Hongzhi; Takabe, Manabu; Jackson, Benjamin M; Sehgal, Chandra M; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2013-01-01

    The aortic valve has been described with variable anatomical definitions, and the consistency of 2D manual measurement of valve dimensions in medical image data has been questionable. Given the importance of image-based morphological assessment in the diagnosis and surgical treatment of aortic valve disease, there is considerable need to develop a standardized framework for 3D valve segmentation and shape representation. Towards this goal, this work integrates template-based medial modeling and multi-atlas label fusion techniques to automatically delineate and quantitatively describe aortic leaflet geometry in 3D echocardiographic (3DE) images, a challenging task that has been explored only to a limited extent. The method makes use of expert knowledge of aortic leaflet image appearance, generates segmentations with consistent topology, and establishes a shape-based coordinate system on the aortic leaflets that enables standardized automated measurements. In this study, the algorithm is evaluated on 11 3DE images of normal human aortic leaflets acquired at mid systole. The clinical relevance of the method is its ability to capture leaflet geometry in 3DE image data with minimal user interaction while producing consistent measurements of 3D aortic leaflet geometry.

  3. Automated Abnormal Mass Detection in the Mammogram Images Using Chebyshev Moments

    Directory of Open Access Journals (Sweden)

    Alireza Talebpour

    2013-01-01

    Full Text Available Breast cancer is the second leading cause of cancer mortality among women after lung cancer. Early diagnosis of this disease plays a major role in its treatment, so the use of computer systems as a detection tool can be viewed as essential to helping manage this disease. In this study a new, more accurate system for automated mass detection in mammography images is presented. After the image is optimized, the breast tissue is extracted from the image and a log-polar transformation is applied; Chebyshev moments are then calculated over all areas of the breast tissue. After extracting features that are effective for the diagnosis of mammography images, abnormal masses, which are important for the physician and specialists, can be determined by applying an appropriate threshold. To evaluate system performance, images from the MIAS (Mammographic Image Analysis Society) mammogram database were used and the results were summarized in a FROC (Free-Response Receiver Operating Characteristic) curve. Comparing this FROC curve with those of similar systems confirmed the strong capability of our system. Images were processed at three thresholds, specifically 445, 450 and 455, yielding sensitivities of 100, 92 and 84% and false positive rates per image of 2.56, 0.86 and 0.26, respectively. Compared with other automatic mass detection systems, the proposed method has the advantage that the trade-off between false positives and sensitivity can be tuned within the system according to the importance of the detection work being done. At its most sensitive setting, the proposed system achieves 100% sensitivity with 2.56 false positives per image.
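    Low-order 2-D Chebyshev moments of an image patch can be sketched by projecting the patch onto Chebyshev polynomial bases; the example below uses NumPy's Chebyshev polynomials of the first kind as a stand-in and omits the paper's preprocessing and log-polar step.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_moments(patch, order=4):
    """2-D moments of `patch` w.r.t. Chebyshev polynomials T_0..T_order
    evaluated on a [-1, 1] x [-1, 1] grid."""
    h, w = patch.shape
    y = np.linspace(-1, 1, h)
    x = np.linspace(-1, 1, w)
    Ty = C.chebvander(y, order)       # (h, order+1) values of T_k(y)
    Tx = C.chebvander(x, order)       # (w, order+1) values of T_k(x)
    # moment M[p, q] = sum_{i,j} T_p(y_i) * T_q(x_j) * patch[i, j]
    return Ty.T @ patch @ Tx

# Toy "mass-like" bright blob on darker tissue
yy, xx = np.mgrid[:64, :64]
patch = np.exp(-((yy - 32) ** 2 + (xx - 40) ** 2) / 50.0)
M = chebyshev_moments(patch, order=3)
print(np.round(M, 2))
```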

  4. Image patch-based method for automated classification and detection of focal liver lesions on CT

    Science.gov (United States)

    Safdari, Mustafa; Pasari, Raghav; Rubin, Daniel; Greenspan, Hayit

    2013-03-01

    We developed a method for automated classification and detection of liver lesions in CT images based on image patch representation and bag-of-visual-words (BoVW). BoVW analysis has been extensively used in the computer vision domain to analyze scenery images. In the current work we discuss how it can be used for liver lesion classification and detection. The methodology includes building a dictionary for a training set using local descriptors and representing a region in the image using a visual word histogram. Two tasks are described: a classification task, for lesion characterization, and a detection task in which a scan window moves across the image and is determined to be normal liver tissue or a lesion. Data: In the classification task 73 CT images of liver lesions were used, 25 images having cysts, 24 having metastases and 24 having hemangiomas. A radiologist circumscribed the lesions, creating a region of interest (ROI), in each of the images. He then provided the diagnosis, which was established either by biopsy or clinical follow-up. Thus our data set comprises 73 images and 73 ROIs. In the detection task, a radiologist drew ROIs around each liver lesion and two regions of normal liver, for a total of 159 liver lesion ROIs and 146 normal liver ROIs. The radiologist also demarcated the liver boundary. Results: Classification accuracy of more than 95% was obtained. In the detection task, the F1 score obtained is 0.76, with recall of 84% and precision of 73%. Results show the ability to detect lesions, regardless of shape.
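    A condensed bag-of-visual-words pipeline with scikit-learn is sketched below: cluster patch descriptors into a dictionary, represent each ROI as a visual-word histogram, then classify; the raw-pixel patches, the SVM and the synthetic ROIs stand in for whatever local descriptor, classifier and data the authors actually used.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_patches(image, size=8, step=8):
    """Raw-pixel patch descriptors on a regular grid (stand-in for a real
    local descriptor)."""
    ps = []
    for r in range(0, image.shape[0] - size + 1, step):
        for c in range(0, image.shape[1] - size + 1, step):
            ps.append(image[r:r + size, c:c + size].ravel())
    return np.array(ps)

def bovw_histogram(image, kmeans):
    words = kmeans.predict(extract_patches(image))
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Synthetic "ROIs": smooth vs textured patches as two lesion classes
rng = np.random.default_rng(0)
rois = [rng.normal(0.3, 0.02, (64, 64)) for _ in range(20)] + \
       [rng.normal(0.3, 0.20, (64, 64)) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

dictionary = KMeans(n_clusters=30, n_init=10, random_state=0)
dictionary.fit(np.vstack([extract_patches(r) for r in rois]))
X = np.array([bovw_histogram(r, dictionary) for r in rois])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```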

  5. Automated segmentation of murine lung tumors in x-ray micro-CT images

    Science.gov (United States)

    Swee, Joshua K. Y.; Sheridan, Clare; de Bruin, Elza; Downward, Julian; Lassailly, Francois; Pizarro, Luis

    2014-03-01

    Recent years have seen micro-CT emerge as a means of providing imaging analysis in pre-clinical studies, with in-vivo micro-CT having been shown to be particularly applicable to the examination of murine lung tumors. Despite this, existing studies have involved substantial human intervention during the image analysis process, with the use of fully-automated aids found to be almost non-existent. We present a new approach to automate the segmentation of murine lung tumors designed specifically for in-vivo micro-CT-based pre-clinical lung cancer studies that addresses the specific requirements of such studies, as well as the limitations human-centric segmentation approaches experience when applied to such micro-CT data. Our approach consists of three distinct stages, and begins by utilizing edge-enhancing and vessel-enhancing non-linear anisotropic diffusion filters to extract anatomy masks (lung/vessel structure) in a pre-processing stage. Initial candidate detection is then performed through ROI reduction utilizing the obtained masks and a two-step automated segmentation approach that aims to extract all disconnected objects within the ROI, and consists of Otsu thresholding, mathematical morphology and marker-driven watershed. False positive reduction is finally performed on initial candidates through random-forest-driven classification using the shape, intensity, and spatial features of candidates. We provide validation of our approach using data from an associated lung cancer study, showing favorable results both in terms of detection (sensitivity=86%, specificity=89%) and structural recovery (Dice Similarity=0.88) when compared against manual specialist annotation.
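    The final false-positive-reduction step, classifying candidate objects by shape and intensity features with a random forest, can be sketched as follows; the regionprops features, the two synthetic candidates and their hand-assigned labels are placeholders for the study's actual descriptors and training data.

```python
import numpy as np
from skimage import measure
from sklearn.ensemble import RandomForestClassifier

def candidate_features(mask, intensity):
    """Shape/intensity features for each connected candidate object."""
    feats = []
    for r in measure.regionprops(measure.label(mask), intensity_image=intensity):
        feats.append([r.area, r.eccentricity, r.solidity, r.mean_intensity])
    return np.array(feats)

# Synthetic slice with a roundish "tumour-like" and an elongated "vessel-like" blob
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (128, 128))
mask = np.zeros_like(img, dtype=bool)
mask[30:42, 30:42] = True          # roundish candidate
mask[80:84, 20:100] = True         # elongated candidate
img[mask] += 0.5

X = candidate_features(mask, img)
y = np.array([1, 0])               # 1 = tumour, 0 = false positive (labels by hand)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("predicted classes:", clf.predict(X))
```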

  6. A High-Throughput Automated Microfluidic Platform for Calcium Imaging of Taste Sensing

    Directory of Open Access Journals (Sweden)

    Yi-Hsing Hsiao

    2016-07-01

    Full Text Available The human enteroendocrine L cell line NCI-H716, expressing taste receptors and taste signaling elements, constitutes a unique model for the studies of cellular responses to glucose, appetite regulation, gastrointestinal motility, and insulin secretion. Targeting these gut taste receptors may provide novel treatments for diabetes and obesity. However, NCI-H716 cells are cultured in suspension and tend to form multicellular aggregates, preventing high-throughput calcium imaging due to interferences caused by laborious immobilization and stimulus delivery procedures. Here, we have developed an automated microfluidic platform that is capable of trapping more than 500 single cells into microwells with a loading efficiency of 77% within two minutes, delivering multiple chemical stimuli and performing calcium imaging with enhanced spatial and temporal resolutions when compared to bath perfusion systems. Results revealed the presence of heterogeneity in cellular responses to the type, concentration, and order of applied sweet and bitter stimuli. Sucralose and denatonium benzoate elicited robust increases in the intracellular Ca2+ concentration. However, glucose evoked a rapid elevation of intracellular Ca2+ followed by reduced responses to subsequent glucose stimulation. Using Gymnema sylvestre as a blocking agent for the sweet taste receptor confirmed that different taste receptors were utilized for sweet and bitter tastes. This automated microfluidic platform is cost-effective, easy to fabricate and operate, and may be generally applicable for high-throughput and high-content single-cell analysis and drug screening.

  7. Deep learning for automated skeletal bone age assessment in X-ray images.

    Science.gov (United States)

    Spampinato, C; Palazzo, S; Giordano, D; Aldinucci, M; Leonardi, R

    2017-02-01

    Skeletal bone age assessment is a common clinical practice to investigate endocrinology, genetic and growth disorders in children. It is generally performed by radiological examination of the left hand by using either the Greulich and Pyle (G&P) method or the Tanner-Whitehouse (TW) one. However, both clinical procedures show several limitations, from the examination effort of radiologists to (most importantly) significant intra- and inter-operator variability. To address these problems, several automated approaches (especially relying on the TW method) have been proposed; nevertheless, none of them has been proved able to generalize to different races, age ranges and genders. In this paper, we propose and test several deep learning approaches to assess skeletal bone age automatically; the results showed an average discrepancy between manual and automatic evaluation of about 0.8 years, which is state-of-the-art performance. Furthermore, this is the first automated skeletal bone age assessment work tested on a public dataset and for all age ranges, races and genders, for which the source code is available, thus representing an exhaustive baseline for future research in the field. Besides the specific application scenario, this paper aims at providing answers to more general questions about deep learning on medical images: from the comparison between deep-learned features and manually-crafted ones, to the usage of deep-learning methods trained on general imagery for medical problems, to how to train a CNN with few images.

  8. Fully automated image-guided needle insertion: application to small animal biopsies.

    Science.gov (United States)

    Ayadi, A; Bour, G; Aprahamian, M; Bayle, B; Graebling, P; Gangloff, J; Soler, L; Egly, J M; Marescaux, J

    2007-01-01

    The study of biological process evolution in small animals requires time-consuming and expensive analyses of a large population of animals. Serial analyses of the same animal are potentially a great alternative. However, non-invasive procedures must be set up to retrieve valuable tissue samples from precisely defined areas in living animals. Taking advantage of the high resolution level of in vivo molecular imaging, we defined a procedure to perform image-guided needle insertion and automated biopsy using a micro CT-scan, a robot and a vision system. Workspace limitations in the scanner require the animal to be removed and laid in front of the robot. A vision system composed of a grid projector and a camera is used to register the designed animal bed with respect to the robot and to automatically calibrate the needle position and orientation. Automated biopsy is then synchronised with respiration and performed with a pneumatic translation device, at high velocity, to minimize organ deformation. We have experimentally tested our biopsy system with different needles.

  9. GAUSSIAN MIXTURE MODEL BASED LEVEL SET TECHNIQUE FOR AUTOMATED SEGMENTATION OF CARDIAC MR IMAGES

    Directory of Open Access Journals (Sweden)

    G. Dharanibai,

    2011-04-01

    Full Text Available In this paper we propose a Gaussian Mixture Model (GMM) integrated level set method for automated segmentation of the left ventricle (LV), right ventricle (RV) and myocardium from short-axis views of cardiac magnetic resonance images. By fitting a GMM to the image histogram, global pixel intensity characteristics of the blood pool, myocardium and background are estimated. The GMM provides an initial segmentation and the segmentation solution is regularized using a level set. Parameters for controlling the level set evolution are automatically estimated from the Bayesian inference classification of pixels. We propose a new speed function that combines edge and region information and stops the evolving level set at the myocardial boundary. Segmentation efficacy is analyzed qualitatively via visual inspection. Results show the improved performance of our proposed speed function over the conventional Bayesian-driven adaptive speed function in automatic segmentation of the myocardium.
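    Fitting a three-component Gaussian mixture to pixel intensities to obtain the initial blood-pool / myocardium / background classification can be sketched with scikit-learn; the intensity values are synthetic and the level set stage is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic short-axis intensities: background, myocardium, blood pool
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(50, 10, 4000),    # background
                         rng.normal(120, 15, 2000),   # myocardium
                         rng.normal(220, 20, 1000)])  # blood pool

gmm = GaussianMixture(n_components=3, random_state=0).fit(pixels.reshape(-1, 1))
labels = gmm.predict(pixels.reshape(-1, 1))           # initial segmentation
order = np.argsort(gmm.means_.ravel())                # dark -> bright classes
print("class means:", gmm.means_.ravel()[order])
print("class priors:", gmm.weights_[order])
```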

  10. Quantitative Assessment of Mouse Mammary Gland Morphology Using Automated Digital Image Processing and TEB Detection.

    Science.gov (United States)

    Blacher, Silvia; Gérard, Céline; Gallez, Anne; Foidart, Jean-Michel; Noël, Agnès; Péqueux, Christel

    2016-04-01

    The assessment of rodent mammary gland morphology is largely used to study the molecular mechanisms driving breast development and to analyze the impact of various endocrine disruptors with putative pathological implications. In this work, we propose a methodology relying on fully automated digital image analysis methods, including image processing and quantification of the whole ductal tree and of the terminal end buds. It allows accurate and objective measurement of both growth parameters and fine morphological glandular structures. Mammary gland elongation was characterized by 2 parameters: the length and the epithelial area of the ductal tree. Ductal tree fine structures were characterized by: 1) branch end-point density, 2) branching density, and 3) branch length distribution. The proposed methodology was compared with quantification methods classically used in the literature. This procedure can be transposed to several software packages and thus widely used by scientists studying rodent mammary gland morphology.
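    The branch end-point and branching-point counts used to characterize the ductal tree can be sketched by skeletonizing a binary mask and counting skeleton-pixel neighbours; the toy Y-shaped mask stands in for a real whole-mount image and the neighbour-count rules are a common heuristic, not necessarily the authors' exact definition.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def branch_statistics(mask):
    """End points (1 neighbour) and branch points (>2 neighbours) of the
    skeleton of a binary ductal-tree mask."""
    skel = skeletonize(mask)
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0
    neighbours = ndimage.convolve(skel.astype(int), kernel, mode="constant")
    end_points = skel & (neighbours == 1)
    branch_points = skel & (neighbours > 2)
    return int(skel.sum()), int(end_points.sum()), int(branch_points.sum())

# Toy Y-shaped "duct"
mask = np.zeros((100, 100), dtype=bool)
mask[50, 10:60] = True        # trunk
mask[20:51, 60] = True        # upper branch
mask[50:80, 60] = True        # lower branch
length, ends, branches = branch_statistics(mask)
print(f"skeleton length={length}px, end points={ends}, branch points={branches}")
```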

  11. Automated analysis of heterogeneous carbon nanostructures by high-resolution electron microscopy and on-line image processing

    Energy Technology Data Exchange (ETDEWEB)

    Toth, P., E-mail: toth.pal@uni-miskolc.hu [Department of Chemical Engineering, University of Utah, 50 S. Central Campus Drive, Salt Lake City, UT 84112-9203 (United States); Farrer, J.K. [Department of Physics and Astronomy, Brigham Young University, N283 ESC, Provo, UT 84602 (United States); Palotas, A.B. [Department of Combustion Technology and Thermal Energy, University of Miskolc, H3515, Miskolc-Egyetemvaros (Hungary); Lighty, J.S.; Eddings, E.G. [Department of Chemical Engineering, University of Utah, 50 S. Central Campus Drive, Salt Lake City, UT 84112-9203 (United States)

    2013-06-15

    High-resolution electron microscopy is an efficient tool for characterizing heterogeneous nanostructures; however, currently the analysis is a laborious and time-consuming manual process. In order to be able to accurately and robustly quantify heterostructures, one must obtain a statistically high number of micrographs showing images of the appropriate sub-structures. The second step of analysis is usually the application of digital image processing techniques in order to extract meaningful structural descriptors from the acquired images. In this paper it will be shown that by applying on-line image processing and basic machine vision algorithms, it is possible to fully automate the image acquisition step; therefore, the number of acquired images in a given time can be increased drastically without the need for additional human labor. The proposed automation technique works by computing fields of structural descriptors in situ and thus outputs sets of the desired structural descriptors in real-time. The merits of the method are demonstrated by using combustion-generated black carbon samples. - Highlights: ► The HRTEM analysis of heterogeneous nanostructures is a tedious manual process. ► Automatic HRTEM image acquisition and analysis can improve data quantity and quality. ► We propose a method based on on-line image analysis for the automation of HRTEM image acquisition. ► The proposed method is demonstrated using HRTEM images of soot particles.

  12. Automated image analysis of the host-pathogen interaction between phagocytes and Aspergillus fumigatus.

    Directory of Open Access Journals (Sweden)

    Franziska Mech

    Full Text Available Aspergillus fumigatus is a ubiquitous airborne fungus and opportunistic human pathogen. In immunocompromised hosts, the fungus can cause life-threatening diseases like invasive pulmonary aspergillosis. Since the incidence of fungal systemic infections has drastically increased in recent years, it is a major goal to investigate the pathobiology of A. fumigatus and in particular the interactions of A. fumigatus conidia with immune cells. Many of these studies include the activity of immune effector cells, in particular of macrophages, when they are confronted with conidia of A. fumigatus wild-type and mutant strains. Here, we report the development of an automated analysis of confocal laser scanning microscopy images from macrophages coincubated with different A. fumigatus strains. At present, microscopy images are often analysed manually, including cell counting and determination of interrelations between cells, which is very time consuming and error-prone. Automation of this process overcomes these disadvantages and standardises the analysis, which is a prerequisite for further systems biological studies including mathematical modeling of the infection process. For this purpose, the cells in our experimental setup were differentially stained and monitored by confocal laser scanning microscopy. To perform the image analysis in an automatic fashion, we developed a ruleset that is generally applicable to phagocytosis assays and in the present case was processed by the software Definiens Developer XD. As a result of a complete image analysis we obtained features such as size, shape, number of cells and cell-cell contacts. The analysis reported here reveals that different mutants of A. fumigatus have a major influence on the ability of macrophages to adhere to and phagocytose the respective conidia. In particular, we observe that the phagocytosis ratio and the aggregation behaviour of pksP mutant compared to wild-type conidia are both significantly…

  13. Content-based analysis and indexing of sports video

    Science.gov (United States)

    Luo, Ming; Bai, Xuesheng; Xu, Guang-you

    2001-12-01

    An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web, the major inhibitors of rapid access to on-line video data are the management of capture and storage, and content-based intelligent search and indexing techniques. This paper proposes an approach for content-based analysis and event-based indexing of sports video. It includes a novel method to organize shots - classifying shots as close shots and far shots, an original idea of blur extent-based event detection, and an innovative local mutation-based algorithm for caption detection and retrieval. Results on extensive real TV programs demonstrate the applicability of our approach.

  14. Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification.

    Science.gov (United States)

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Sadda, Srinivas R

    2015-01-01

    Geographic atrophy (GA) is a manifestation of the advanced or late stage of age-related macular degeneration (AMD). AMD is the leading cause of blindness in people over the age of 65 in the western world. The purpose of this study is to develop a fully automated supervised pixel classification approach for segmenting GA, including uni- and multifocal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures, gray-level co-occurrence matrix measures, and Gaussian filter banks. A k-nearest-neighbor pixel classifier is applied to obtain a GA probability map, representing the likelihood that the image pixel belongs to GA. Sixteen randomly chosen FAF images were obtained from 16 subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by a certified image reading center grader. Eight-fold cross-validation is applied to evaluate the algorithm performance. The mean overlap ratio (OR), area correlation (Pearson's r), accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and false discovery rate (FDR) between the algorithm- and manually defined GA regions are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively.
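    A reduced sketch of supervised pixel classification for GA: per-pixel features from a small Gaussian filter bank, a k-nearest-neighbour classifier trained on labelled pixels, and a probability map as output; the synthetic FAF image and labels replace the study's GLCM features and reading-centre annotations.

```python
import numpy as np
from scipy import ndimage
from sklearn.neighbors import KNeighborsClassifier

def pixel_features(img, sigmas=(1, 2, 4)):
    """Per-pixel feature vectors: raw intensity plus a small Gaussian filter bank."""
    feats = [img] + [ndimage.gaussian_filter(img, s) for s in sigmas]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# Synthetic FAF image: darker background with a brighter "atrophic" patch
rng = np.random.default_rng(0)
img = rng.normal(0.3, 0.05, (128, 128))
truth = np.zeros((128, 128), dtype=int)
truth[40:90, 50:100] = 1
img[truth == 1] += 0.3

X = pixel_features(img)
y = truth.ravel()
# train on a random subset of labelled pixels, predict everywhere
idx = rng.choice(X.shape[0], 2000, replace=False)
knn = KNeighborsClassifier(n_neighbors=15).fit(X[idx], y[idx])
prob_map = knn.predict_proba(X)[:, 1].reshape(img.shape)   # GA probability map
print("mean GA probability inside/outside the patch:",
      prob_map[truth == 1].mean().round(2), prob_map[truth == 0].mean().round(2))
```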

  15. Long-term live cell imaging and automated 4D analysis of drosophila neuroblast lineages.

    Directory of Open Access Journals (Sweden)

    Catarina C F Homem

    Full Text Available The developing Drosophila brain is a well-studied model system for neurogenesis and stem cell biology. In the Drosophila central brain, around 200 neural stem cells called neuroblasts undergo repeated rounds of asymmetric cell division. These divisions typically generate a larger self-renewing neuroblast and a smaller ganglion mother cell that undergoes one terminal division to create two differentiating neurons. Although single mitotic divisions of neuroblasts can easily be imaged in real time, the lack of long term imaging procedures has limited the use of neuroblast live imaging for lineage analysis. Here we describe a method that allows live imaging of cultured Drosophila neuroblasts over multiple cell cycles for up to 24 hours. We describe a 4D image analysis protocol that can be used to extract cell cycle times and growth rates from the resulting movies in an automated manner. We use it to perform lineage analysis in type II neuroblasts where clonal analysis has indicated the presence of a transit-amplifying population that potentiates the number of neurons. Indeed, our experiments verify type II lineages and provide quantitative parameters for all cell types in those lineages. As defects in type II neuroblast lineages can result in brain tumor formation, our lineage analysis method will allow more detailed and quantitative analysis of tumorigenesis and asymmetric cell division in the Drosophila brain.

  16. Automated Brain Image classification using Neural Network Approach and Abnormality Analysis

    Directory of Open Access Journals (Sweden)

    P.Muthu Krishnammal

    2015-06-01

    Full Text Available Image segmentation of surgical images plays an important role in diagnosis and in analyzing the anatomical structure of the human body. Magnetic Resonance Imaging (MRI) helps in obtaining a structural image of the internal parts of the body. This paper aims at developing an automatic support system for stage classification using a learning machine, and at detecting brain tumors in their early stages by fuzzy clustering methods in order to analyze anatomical structures. The three stages involved are feature extraction using the GLCM, tumor classification using a PNN-RBF network, and segmentation using SFCM. A fast discrete curvelet transformation is used to analyze the texture of an image, which can be used as a basis for a Computer Aided Diagnosis (CAD) system. The Probabilistic Neural Network with a radial basis function is employed to implement automated brain tumor classification; it automatically classifies the tumor stage as benign, malignant, or normal. The brain abnormality is then segmented using spatial FCM, and the severity of the tumor is analysed using the number of tumor cells in the detected abnormal region. The proposed method reports promising results in terms of training performance and classification accuracy.
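
    Of the three stages listed, the GLCM texture-feature step is the most generic and easiest to sketch. The snippet below is only an illustration under assumptions (a single 8-bit grayscale MRI patch and a small set of offsets); it does not reproduce the paper's curvelet analysis, PNN-RBF classifier, or spatial FCM segmentation. Recent scikit-image versions spell the functions graycomatrix/graycoprops (older releases use the "grey" spelling).

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(patch_uint8,
                          distances=(1, 2, 4),
                          angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
            """Contrast, homogeneity, energy and correlation from a gray-level
            co-occurrence matrix, averaged over the given offsets."""
            glcm = graycomatrix(patch_uint8, distances=distances, angles=angles,
                                levels=256, symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return np.array([graycoprops(glcm, p).mean() for p in props])

    Feature vectors of this kind would then be passed to whatever classifier is chosen for the tumor staging step.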

  17. Automated detection of synapses in serial section transmission electron microscopy image stacks.

    Directory of Open Access Journals (Sweden)

    Anna Kreshuk

    Full Text Available We describe a method for fully automated detection of chemical synapses in serial electron microscopy images with highly anisotropic axial and lateral resolution, such as images taken on transmission electron microscopes. Our pipeline starts from classification of the pixels based on 3D pixel features, which is followed by segmentation with an Ising model MRF and another classification step, based on object-level features. Classifiers are learned on sparse user labels; a fully annotated data subvolume is not required for training. The algorithm was validated on a set of 238 synapses in 20 serial 7197×7351 pixel images (4.5×4.5×45 nm resolution) of mouse visual cortex, manually labeled by three independent human annotators and additionally re-verified by an expert neuroscientist. The error rate of the algorithm (12% false negative, 7% false positive detections) is better than state-of-the-art, even though, unlike the state-of-the-art method, our algorithm does not require a prior segmentation of the image volume into cells. The software is based on the ilastik learning and segmentation toolkit and the vigra image processing library and is freely available on our website, along with the test data and gold standard annotations (http://www.ilastik.org/synapse-detection/sstem).

  18. Whole-slide imaging and automated image analysis: considerations and opportunities in the practice of pathology.

    Science.gov (United States)

    Webster, J D; Dunstan, R W

    2014-01-01

    Digital pathology, the practice of pathology using digitized images of pathologic specimens, has been transformed in recent years by the development of whole-slide imaging systems, which allow for the evaluation and interpretation of digital images of entire histologic sections. Applications of whole-slide imaging include rapid transmission of pathologic data for consultations and collaborations, standardization and distribution of pathologic materials for education, tissue specimen archiving, and image analysis of histologic specimens. Histologic image analysis allows for the acquisition of objective measurements of histomorphologic, histochemical, and immunohistochemical properties of tissue sections, increasing both the quantity and quality of data obtained from histologic assessments. Currently, numerous histologic image analysis software solutions are commercially available. Choosing the appropriate solution is dependent on considerations of the investigative question, computer programming and image analysis expertise, and cost. However, all studies using histologic image analysis require careful consideration of preanalytical variables, such as tissue collection, fixation, and processing, and experimental design, including sample selection, controls, reference standards, and the variables being measured. The fields of digital pathology and histologic image analysis are continuing to evolve, and their potential impact on pathology is still growing. These methodologies will increasingly transform the practice of pathology, allowing it to mature toward a quantitative science. However, this maturation requires pathologists to be at the forefront of the process, ensuring their appropriate application and the validity of their results. Therefore, histologic image analysis and the field of pathology should co-evolve, creating a symbiotic relationship that results in high-quality reproducible, objective data.

  19. Automated Astrometric Analysis of Satellite Observations using Wide-field Imaging

    Science.gov (United States)

    Skuljan, J.; Kay, J.

    2016-09-01

    An observational trial was conducted in the South Island of New Zealand from 24 to 28 February 2015, as a collaborative effort between the United Kingdom and New Zealand in the area of space situational awareness. The aim of the trial was to observe a number of satellites in low Earth orbit using wide-field imaging from two separate locations, in order to determine the space trajectory and compare the measurements with the predictions based on the standard two-line elements. This activity was an initial step in building a space situational awareness capability at the Defence Technology Agency of the New Zealand Defence Force. New Zealand has an important strategic position as the last land mass that many satellites selected for deorbiting pass before entering the Earth's atmosphere over the dedicated disposal area in the South Pacific. A preliminary analysis of the trial data has demonstrated that relatively inexpensive equipment can be used to successfully detect satellites at moderate altitudes. A total of 60 satellite passes were observed over the five nights of observation and about 2600 images were collected. A combination of cooled CCD and standard DSLR cameras was used, with a selection of lenses between 17 mm and 50 mm in focal length, covering a relatively wide field of view of 25 to 60 degrees. The CCD cameras were equipped with custom-made GPS modules to record the time of exposure with a high accuracy of one millisecond, or better. Specialised software has been developed for automated astrometric analysis of the trial data. The astrometric solution is obtained as a two-dimensional least-squares polynomial fit to the measured pixel positions of a large number of stars (typically 1000) detected across the image. The star identification is fully automated and works well for all camera-lens combinations used in the trial. A moderate polynomial degree of 3 to 5 is selected to take into account any image distortions introduced by the lens. A typical RMS

  20. Automated detection and labeling of high-density EEG electrodes from structural MR images

    Science.gov (United States)

    Marino, Marco; Liu, Quanying; Brem, Silvia; Wenderoth, Nicole; Mantini, Dante

    2016-10-01

    Objective. Accurate knowledge about the positions of electrodes in electroencephalography (EEG) is very important for precise source localizations. Direct detection of electrodes from magnetic resonance (MR) images is particularly interesting, as it is possible to avoid errors of co-registration between electrode and head coordinate systems. In this study, we propose an automated MR-based method for electrode detection and labeling, particularly tailored to high-density montages. Approach. Anatomical MR images were processed to create an electrode-enhanced image in individual space. Image processing included intensity non-uniformity correction, background noise and goggles artifact removal. Next, we defined a search volume around the head where electrode positions were detected. Electrodes were identified as local maxima in the search volume and registered to the Montreal Neurological Institute standard space using an affine transformation. This allowed the matching of the detected points with the specific EEG montage template, as well as their labeling. Matching and labeling were performed by the coherent point drift method. Our method was assessed on 8 MR images collected in subjects wearing a 256-channel EEG net, using the displacement with respect to manually selected electrodes as a performance metric. Main results. Average displacement achieved by our method was significantly lower compared to alternative techniques, such as the photogrammetry technique. The maximum displacement was lower than 1 cm for more than 99% of the electrodes, which is typically considered an acceptable upper limit for errors in electrode positioning. Our method showed robustness and reliability, even in suboptimal conditions, such as in the case of net rotation, imprecisely gathered wires, electrode detachment from the head, and MR image ghosting. Significance. We showed that our method provides objective, repeatable and precise estimates of EEG electrode coordinates. We hope our work
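
    The core detection step, finding electrode candidates as local intensity maxima inside a search volume around the head, is simple enough to sketch. The code below is a simplified illustration under assumptions, not the published pipeline: it presumes an already electrode-enhanced 3-D volume and a precomputed search mask, and it omits the non-uniformity correction, the affine registration to MNI space, and the coherent-point-drift matching to the montage template.

        import numpy as np
        from scipy.ndimage import maximum_filter, label, center_of_mass

        def detect_electrode_candidates(vol, search_mask, size=5, min_intensity=0.5):
            """Return candidate electrode coordinates as local maxima of the
            electrode-enhanced volume restricted to the search region.

            vol         : 3-D float array (electrode-enhanced MR volume)
            search_mask : boolean 3-D array, e.g. a shell just outside the scalp
            """
            local_max = vol == maximum_filter(vol, size=size)
            candidates = local_max & search_mask & (vol > min_intensity)
            labeled, n = label(candidates)
            # one (z, y, x) centroid per detected maximum
            return np.array(center_of_mass(vol, labeled, range(1, n + 1)))

    The returned candidate set would subsequently be registered to standard space and matched against the 256-channel template to assign electrode labels.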

  1. Automated Analysis of {sup 123}I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Eo, Jae Seon; Lee, Hoyoung; Lee, Jae Sung; Kim, Yu Kyung; Jeon, Bumseok; Lee, Dong Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-03-15

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis for dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-Carbomethoxy-3β-(4-{sup 123}I-iodophenyl)tropane ({sup 123}I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15, mean age=49±12 years), 9 people with Wilson's disease (M:F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalizations of the SPECT images were successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully evaluated the regional {sup 123}I-beta-CIT distribution using the SPECT template and probabilistic map data automatically. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease.

  2. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  3. Breast Density Analysis with Automated Whole-Breast Ultrasound: Comparison with 3-D Magnetic Resonance Imaging.

    Science.gov (United States)

    Chen, Jeon-Hor; Lee, Yan-Wei; Chan, Si-Wa; Yeh, Dah-Cherng; Chang, Ruey-Feng

    2016-05-01

    In this study, a semi-automatic breast segmentation method was proposed on the basis of the rib shadow to extract breast regions from 3-D automated whole-breast ultrasound (ABUS) images. The density results were correlated with breast density values acquired with 3-D magnetic resonance imaging (MRI). MRI images of 46 breasts were collected from 23 women without a history of breast disease. Each subject also underwent ABUS. We used Otsu's thresholding method on ABUS images to obtain local rib shadow information, which was combined with the global rib shadow information (extracted from all slice projections) and integrated with the anatomy's breast tissue structure to determine the chest wall line. The fuzzy C-means classifier was used to extract the fibroglandular tissues from the acquired images. Whole-breast volume (WBV) and breast percentage density (BPD) were calculated in both modalities. Linear regression was used to compute the correlation of density results between the two modalities. The consistency of density measurement was also analyzed on the basis of intra- and inter-operator variation. There was a high correlation of density results between MRI and ABUS (R² = 0.798 for WBV, R² = 0.825 for BPD). The mean WBV from ABUS images was slightly smaller than the mean WBV from MR images (MRI: 342.24 ± 128.08 cm³, ABUS: 325.47 ± 136.16 cm³, p MRI: 24.71 ± 15.16%, ABUS: 28.90 ± 17.73%, p breast density measurement variation between the two modalities. Our results revealed a high correlation in WBV and BPD between MRI and ABUS. Our study suggests that ABUS provides breast density information useful in the assessment of breast health.
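
    As a rough illustration of how a percentage-density figure is derived once the breast region has been segmented, the sketch below applies Otsu's threshold inside a breast mask and reports the dense-pixel fraction. This is an assumption-heavy simplification: the study uses Otsu's method on rib shadows for chest-wall detection and a fuzzy C-means classifier for the fibroglandular tissue, neither of which is reproduced here.

        import numpy as np
        from skimage.filters import threshold_otsu

        def percent_density(slice_img, breast_mask):
            """Dense (fibroglandular-like) pixels as a percentage of all breast
            pixels on a single slice.

            slice_img   : 2-D grayscale image (one ABUS or MRI slice)
            breast_mask : boolean mask of the segmented breast region
            """
            t = threshold_otsu(slice_img[breast_mask])   # threshold from breast pixels only
            dense = (slice_img > t) & breast_mask
            return 100.0 * dense.sum() / breast_mask.sum()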

  4. Automated segmentation of lung airway wall area measurements from bronchoscopic optical coherence tomography imaging

    Science.gov (United States)

    Heydarian, Mohammadreza; Choy, Stephen; Wheatley, Andrew; McCormack, David; Coxson, Harvey O.; Lam, Stephen; Parraga, Grace

    2011-03-01

    Chronic Obstructive Pulmonary Disease (COPD) affects almost 600 million people and is currently the fourth leading cause of death worldwide. COPD is an umbrella term for respiratory symptoms that accompany destruction of the lung parenchyma and/or remodeling of the airway wall, the sum of which results in decreased expiratory flow, dyspnea and gas trapping. Currently, x-ray computed tomography (CT) is the main clinical method used for COPD imaging, providing excellent spatial resolution for quantitative tissue measurements, although dose limitations and the fundamental spatial resolution of CT limit the measurement of airway dimensions beyond the 5th generation. To address this limitation, we are piloting the use of bronchoscopic Optical Coherence Tomography (OCT), by exploiting its superior spatial resolution of 5-15 micrometers for in vivo airway imaging. Currently, only manual segmentation of OCT airway lumen and wall has been reported, but manual methods are time consuming and prone to observer variability. To expand the utility of bronchoscopic OCT, automatic and robust measurement methods are required. Therefore, our objective was to develop a fully automated method for segmenting OCT airway wall dimensions, and here we explore several different methods of image regeneration, voxel clustering and post-processing. Our resultant automated method used K-means or Fuzzy c-means to cluster pixel intensity and then a series of algorithms (i.e. cluster selection, artifact removal, de-noising) was applied to process the clustering results and segment airway wall dimensions. This approach provides a way to automatically and rapidly segment and reproducibly measure airway lumen and wall area.
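
    The clustering stage (K-means or fuzzy c-means on pixel intensity, followed by cluster selection and morphological clean-up) can be approximated in a few lines. The sketch below is an illustrative stand-in under assumptions, not the authors' code: it runs scikit-learn's K-means on a single OCT cross-section and keeps the brightest cluster as the wall candidate.

        import numpy as np
        from sklearn.cluster import KMeans
        from scipy.ndimage import binary_opening, binary_closing

        def segment_airway_wall(oct_img, n_clusters=3):
            """Cluster OCT pixel intensities and return a cleaned binary mask
            of the brightest cluster, taken here as the airway-wall candidate."""
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
            labels = km.fit_predict(oct_img.reshape(-1, 1)).reshape(oct_img.shape)
            wall_cluster = np.argmax(km.cluster_centers_.ravel())  # brightest centroid
            mask = labels == wall_cluster
            # crude artifact removal / de-noising in place of the paper's post-processing
            mask = binary_opening(mask, iterations=2)
            mask = binary_closing(mask, iterations=2)
            return mask

    Wall area then follows from the pixel count of the mask and the known OCT pixel spacing.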

  5. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    Directory of Open Access Journals (Sweden)

    Jens H Westhoff

    Full Text Available The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  6. Vision 20/20: perspectives on automated image segmentation for radiotherapy.

    Science.gov (United States)

    Sharp, Gregory; Fritscher, Karl D; Pekar, Vladimir; Peroni, Marta; Shusharina, Nadya; Veeraraghavan, Harini; Yang, Jinzhong

    2014-05-01

    Due to rapid advances in radiation therapy (RT), especially image guidance and treatment adaptation, a fast and accurate segmentation of medical images is a very important part of the treatment. Manual delineation of target volumes and organs at risk is still the standard routine for most clinics, even though it is time consuming and prone to intra- and interobserver variations. Automated segmentation methods seek to reduce delineation workload and unify the organ boundary definition. In this paper, the authors review the current autosegmentation methods particularly relevant for applications in RT. The authors outline the methods' strengths and limitations and propose strategies that could lead to wider acceptance of autosegmentation in routine clinical practice. The authors conclude that currently, autosegmentation technology in RT planning is an efficient tool for the clinicians to provide them with a good starting point for review and adjustment. Modern hardware platforms including GPUs allow most of the autosegmentation tasks to be done in a range of a few minutes. In the nearest future, improvements in CT-based autosegmentation tools will be achieved through standardization of imaging and contouring protocols. In the longer term, the authors expect a wider use of multimodality approaches and better understanding of correlation of imaging with biology and pathology.

  7. Vision 20/20: Perspectives on automated image segmentation for radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, Gregory, E-mail: gcsharp@partners.org; Fritscher, Karl D.; Shusharina, Nadya [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Pekar, Vladimir [Philips Healthcare, Markham, Ontario 6LC 2S3 (Canada); Peroni, Marta [Center for Proton Therapy, Paul Scherrer Institut, 5232 Villigen-PSI (Switzerland); Veeraraghavan, Harini [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States); Yang, Jinzhong [Department of Radiation Physics, MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2014-05-15

    Due to rapid advances in radiation therapy (RT), especially image guidance and treatment adaptation, a fast and accurate segmentation of medical images is a very important part of the treatment. Manual delineation of target volumes and organs at risk is still the standard routine for most clinics, even though it is time consuming and prone to intra- and interobserver variations. Automated segmentation methods seek to reduce delineation workload and unify the organ boundary definition. In this paper, the authors review the current autosegmentation methods particularly relevant for applications in RT. The authors outline the methods’ strengths and limitations and propose strategies that could lead to wider acceptance of autosegmentation in routine clinical practice. The authors conclude that currently, autosegmentation technology in RT planning is an efficient tool for the clinicians to provide them with a good starting point for review and adjustment. Modern hardware platforms including GPUs allow most of the autosegmentation tasks to be done in a range of a few minutes. In the nearest future, improvements in CT-based autosegmentation tools will be achieved through standardization of imaging and contouring protocols. In the longer term, the authors expect a wider use of multimodality approaches and better understanding of correlation of imaging with biology and pathology.

  8. Automated classification of patients with coronary artery disease using grayscale features from left ventricle echocardiographic images.

    Science.gov (United States)

    Acharya, U Rajendra; Sree, S Vinitha; Muthu Rama Krishnan, M; Krishnananda, N; Ranjan, Shetty; Umesh, Pai; Suri, Jasjit S

    2013-12-01

    Coronary Artery Disease (CAD), caused by the buildup of plaque on the inside of the coronary arteries, has a high mortality rate. To efficiently detect this condition from echocardiography images, with lesser inter-observer variability and visual interpretation errors, computer based data mining techniques may be exploited. We have developed and presented one such technique in this paper for the classification of normal and CAD affected cases. A multitude of grayscale features (fractal dimension, entropies based on the higher order spectra, features based on image texture and local binary patterns, and wavelet based features) were extracted from echocardiography images belonging to a huge database of 400 normal cases and 400 CAD patients. Only the features that had good discriminating capability were selected using t-test. Several combinations of the resultant significant features were used to evaluate many supervised classifiers to find the combination that presents a good accuracy. We observed that the Gaussian Mixture Model (GMM) classifier trained with a feature subset made up of nine significant features presented the highest accuracy, sensitivity, specificity, and positive predictive value of 100%. We have also developed a novel, highly discriminative HeartIndex, which is a single number that is calculated from the combination of the features, in order to objectively classify the images from either of the two classes. Such an index allows for an easier implementation of the technique for automated CAD detection in the computers in hospitals and clinics.

  9. Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i

    Science.gov (United States)

    Patrick, Matthew R.; Swanson, Don; Orr, Tim

    2016-01-01

    Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
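
    One of the three segmentation approaches mentioned, composite temperature thresholding, can be illustrated with a short sketch. The code below is a simplified stand-in under assumptions (a calibrated thermal frame and a user-chosen temperature threshold); it ignores the edge-detection and short-term-change components and returns only a relative, pixel-based level that would still need the laser-rangefinder calibration described above.

        import numpy as np
        from scipy.ndimage import binary_fill_holes, label

        def lake_top_row(thermal_img, temp_threshold):
            """Segment the lava lake as the largest connected region hotter than
            the threshold and return the image row of its upper edge, a proxy
            for relative lava level with a fixed camera geometry."""
            hot = binary_fill_holes(thermal_img > temp_threshold)
            labeled, n = label(hot)
            if n == 0:
                return None                                   # no lake visible in this frame
            sizes = np.bincount(labeled.ravel())[1:]          # region sizes (skip background)
            lake = labeled == (np.argmax(sizes) + 1)
            rows = np.where(lake.any(axis=1))[0]
            return int(rows.min())                            # topmost row reached by the lake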

  10. Dislocation tomography made easy: a reconstruction from ADF STEM images obtained using automated image shift correction

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, J H; Barnard, J S; Midgley, P A [Department of Materials Science, University of Cambridge, Pembroke Street, Cambridge, CB2 3QZ (United Kingdom); Kaneko, K; Higashida, K [Department of Materials Science and Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395 (Japan)], E-mail: jhd28@cam.ac.uk

    2008-08-15

    After previous work producing a successful 3D tomographic reconstruction of dislocations in GaN from conventional weak-beam dark-field (WBDF) images, we have reconstructed a cascade of dislocations in deformed and annealed silicon to a comparable standard using the more experimentally straightforward technique of STEM annular dark-field imaging (STEM ADF). In this mode, image contrast was much more consistent over the specimen tilt range than in conventional weak-beam dark-field imaging. Automatic acquisition software could thus restore the correct dislocation array to the field of view at each tilt angle, though manual focusing was still required. Reconstruction was carried out by sequential iterative reconstruction technique using FEI's Inspect3D software. Dislocations were distributed non-uniformly along cascades, with sparse areas between denser clumps in which individual dislocations of in-plane image width 24 nm could be distinguished in images and reconstruction. Denser areas showed more complicated stacking-fault contrast, hampering tomographic reconstruction. The general three-dimensional form of the denser areas was reproduced well, showing the dislocation array to be planar and not parallel to the foil surfaces.

  11. Automated multidimensional image analysis reveals a role for Abl in embryonic wound repair.

    Science.gov (United States)

    Zulueta-Coarasa, Teresa; Tamada, Masako; Lee, Eun J; Fernandez-Gonzalez, Rodrigo

    2014-07-01

    The embryonic epidermis displays a remarkable ability to repair wounds rapidly. Embryonic wound repair is driven by the evolutionary conserved redistribution of cytoskeletal and junctional proteins around the wound. Drosophila has emerged as a model to screen for factors implicated in wound closure. However, genetic screens have been limited by the use of manual analysis methods. We introduce MEDUSA, a novel image-analysis tool for the automated quantification of multicellular and molecular dynamics from time-lapse confocal microscopy data. We validate MEDUSA by quantifying wound closure in Drosophila embryos, and we show that the results of our automated analysis are comparable to analysis by manual delineation and tracking of the wounds, while significantly reducing the processing time. We demonstrate that MEDUSA can also be applied to the investigation of cellular behaviors in three and four dimensions. Using MEDUSA, we find that the conserved nonreceptor tyrosine kinase Abelson (Abl) contributes to rapid embryonic wound closure. We demonstrate that Abl plays a role in the organization of filamentous actin and the redistribution of the junctional protein β-catenin at the wound margin during embryonic wound repair. Finally, we discuss different models for the role of Abl in the regulation of actin architecture and adhesion dynamics at the wound margin.

  12. Automated detection of optic disk in retinal fundus images using intuitionistic fuzzy histon segmentation.

    Science.gov (United States)

    Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Chua, Chua Kuang; Min, Lim Choo; Ng, E Y K; Mushrif, Milind M; Laude, Augustinus

    2013-01-01

    The human eye is one of the most sophisticated organs, with perfectly interrelated retina, pupil, iris, cornea, lens, and optic nerve. Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Uncontrolled diabetic retinopathy (DR) and glaucoma may lead to blindness. The identification of retinal anatomical regions is a prerequisite for the computer-aided diagnosis of several retinal diseases. The manual examination of the optic disk (OD) is a standard procedure used for detecting different stages of DR and glaucoma. In this article, a novel automated, reliable, and efficient OD localization and segmentation method using digital fundus images is proposed. General-purpose edge detection algorithms often fail to segment the OD due to fuzzy boundaries, inconsistent image contrast, or missing edge features. This article proposes a novel and probably the first method using Atanassov intuitionistic fuzzy histon (A-IFSH)-based segmentation to detect the OD in retinal fundus images. OD pixel intensity and column-wise neighborhood operation are employed to locate and isolate the OD. The method has been evaluated on 100 images comprising 30 normal, 39 glaucomatous, and 31 DR images. Our proposed method has yielded precision of 0.93, recall of 0.91, F-score of 0.92, and mean segmentation accuracy of 93.4%. We have also compared the performance of our proposed method with the Otsu and gradient vector flow (GVF) snake methods. Overall, our result shows the superiority of the proposed fuzzy segmentation technique over the other two segmentation methods.

  13. SparkMaster: automated calcium spark analysis with ImageJ.

    Science.gov (United States)

    Picht, Eckard; Zima, Aleksey V; Blatter, Lothar A; Bers, Donald M

    2007-09-01

    Ca sparks are elementary Ca-release events from intracellular Ca stores that are observed in virtually all types of muscle. Typically, Ca sparks are measured in the line-scan mode with confocal laser-scanning microscopes, yielding two-dimensional images (distance vs. time). The manual analysis of these images is time consuming and prone to errors as well as investigator bias. Therefore, we developed SparkMaster, an automated analysis program that allows rapid and reliable spark analysis. The underlying analysis algorithm is adapted from the threshold-based standard method of spark analysis developed by Cheng et al. (Biophys J 76: 606-617, 1999) and is implemented here in the freely available image-processing software ImageJ. SparkMaster offers a graphical user interface through which all analysis parameters and output options are selected. The analysis includes general image parameters (number of detected sparks, spark frequency) and individual spark parameters (amplitude, full width at half-maximum amplitude, full duration at half-maximum amplitude, full width, full duration, time to peak, maximum steepness of spark upstroke, time constant of spark decay). We validated the algorithm using images with synthetic sparks embedded into backgrounds with different signal-to-noise ratios to determine an analysis criterion at which high sensitivity is combined with a low frequency of false-positive detections. Finally, we applied SparkMaster to analyze experimental data of sparks measured in intact and permeabilized ventricular cardiomyocytes, permeabilized mammalian skeletal muscle, and intact smooth muscle cells. We found that SparkMaster provides a reliable, easy to use, and fast way of analyzing Ca sparks in a wide variety of experimental conditions.
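
    The underlying threshold-based detection idea can be sketched outside ImageJ as well. The snippet below is a simplified illustration, not SparkMaster itself: it assumes a normalized line-scan image (distance x time) and flags connected regions that exceed the background mean by a chosen multiple of the background standard deviation, reporting a rough amplitude for each candidate spark.

        import numpy as np
        from scipy.ndimage import label, find_objects

        def detect_sparks(linescan, criteria=3.8):
            """Threshold-based spark detection on a line-scan image.
            Returns (distance_slice, time_slice, amplitude) per candidate."""
            bg_mean, bg_std = linescan.mean(), linescan.std()
            mask = linescan > bg_mean + criteria * bg_std
            labeled, _ = label(mask)
            sparks = []
            for sl in find_objects(labeled):
                region = linescan[sl]
                # amplitude relative to the mean background (rough F/F0 proxy)
                sparks.append((sl[0], sl[1], float(region.max() / bg_mean)))
            return sparks

    SparkMaster's actual algorithm estimates the background more carefully and derives the full set of spark parameters (FWHM, FDHM, time to peak, decay constant), which this sketch does not attempt.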

  14. An automated image processing method to quantify collagen fibre organization within cutaneous scar tissue.

    Science.gov (United States)

    Quinn, Kyle P; Golberg, Alexander; Broelsch, G Felix; Khan, Saiqa; Villiger, Martin; Bouma, Brett; Austen, William G; Sheridan, Robert L; Mihm, Martin C; Yarmush, Martin L; Georgakoudi, Irene

    2015-01-01

    Standard approaches to evaluate scar formation within histological sections rely on qualitative evaluations and scoring, which limits our understanding of the remodelling process. We have recently developed an image analysis technique for the rapid quantification of fibre alignment at each pixel location. The goal of this study was to evaluate its application for quantitatively mapping scar formation in histological sections of cutaneous burns. To this end, we utilized directional statistics to define maps of fibre density and directional variance from Masson's trichrome-stained sections for quantifying changes in collagen organization during scar remodelling. Significant increases in collagen fibre density are detectable soon after burn injury in a rat model. Decreased fibre directional variance in the scar was also detectable between 3 weeks and 6 months after injury, indicating increasing fibre alignment. This automated analysis of fibre organization can provide objective surrogate endpoints for evaluating cutaneous wound repair and regeneration.
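
    The mapping of fibre orientation and directional variance can be approximated with a structure-tensor calculation. The sketch below is a generic proxy under assumptions, not the authors' published pixel-wise algorithm: it smooths image-gradient products over a local window and reports one minus the tensor coherence, which behaves like a directional variance (near 0 for well-aligned fibres, approaching 1 for random orientations).

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def directional_variance_map(img, window_sigma=8):
            """Local fibre-alignment map from a smoothed structure tensor."""
            img = np.asarray(img, dtype=float)
            gx, gy = sobel(img, axis=1), sobel(img, axis=0)
            jxx = gaussian_filter(gx * gx, window_sigma)
            jyy = gaussian_filter(gy * gy, window_sigma)
            jxy = gaussian_filter(gx * gy, window_sigma)
            coherence = np.hypot(jxx - jyy, 2.0 * jxy) / (jxx + jyy + 1e-12)
            return 1.0 - coherence        # low = aligned collagen, high = disorganized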

  15. Analysis of irradiated U-7wt%Mo dispersion fuel microstructures using automated image processing

    Science.gov (United States)

    Collette, R.; King, J.; Buesch, C.; Keiser, D. D.; Williams, W.; Miller, B. D.; Schulthess, J.

    2016-07-01

    The High Performance Research Reactor Fuel Development (HPPRFD) program is responsible for developing low enriched uranium (LEU) fuel substitutes for high performance reactors fueled with highly enriched uranium (HEU) that have not yet been converted to LEU. The uranium-molybdenum (U-Mo) fuel system was selected for this effort. In this study, fission gas pore segmentation was performed on U-7wt%Mo dispersion fuel samples at three separate fission densities using an automated image processing interface developed in MATLAB. Pore size distributions were attained that showed both expected and unexpected fission gas behavior. In general, it proved challenging to identify any dominant trends when comparing fission bubble data across samples from different fuel plates due to varying compositions and fabrication techniques. The results exhibited fair agreement with the fission density vs. porosity correlation developed by the Russian reactor conversion program.
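
    A minimal version of the pore-segmentation and size-distribution step, written in Python rather than the MATLAB interface used in the study and assuming pores appear darker than the surrounding fuel in the micrographs, could look as follows.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        def pore_size_distribution(micrograph, pixel_area_um2):
            """Segment dark pores in a fuel micrograph and return their areas
            (in square micrometres) together with the areal porosity."""
            t = threshold_otsu(micrograph)
            pores = micrograph < t                          # pores assumed darker than fuel
            labeled = label(pores)
            areas = np.array([r.area for r in regionprops(labeled)]) * pixel_area_um2
            porosity = pores.sum() / pores.size
            return areas, porosity

    Comparing histograms of such pore areas across samples of different fission density is essentially the analysis the study reports.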

  16. Automated Detection of Coronal Mass Ejections in STEREO Heliospheric Imager data

    CERN Document Server

    Pant, V; Rodriguez, L; Mierla, M; Banerjee, D; Davies, J A

    2016-01-01

    We have performed, for the first time, the successful automated detection of Coronal Mass Ejections (CMEs) in data from the inner heliospheric imager (HI-1) cameras on the STEREO A spacecraft. Detection of CMEs is done in time-height maps based on the application of the Hough transform, using a modified version of the CACTus software package, conventionally applied to coronagraph data. In this paper we describe the method of detection. We present the result of the application of the technique to a few CMEs that are well detected in the HI-1 imagery, and compare these results with those based on manual cataloging methodologies. We discuss in detail the advantages and disadvantages of this method.

  17. Automated Detection of Coronal Mass Ejections in STEREO Heliospheric Imager Data

    Science.gov (United States)

    Pant, V.; Willems, S.; Rodriguez, L.; Mierla, M.; Banerjee, D.; Davies, J. A.

    2016-12-01

    We have performed, for the first time, the successful automated detection of coronal mass ejections (CMEs) in data from the inner heliospheric imager (HI-1) cameras on the STEREO-A spacecraft. Detection of CMEs is done in time-height maps based on the application of the Hough transform, using a modified version of the CACTus software package, conventionally applied to coronagraph data. In this paper, we describe the method of detection. We present the results of the application of the technique to a few CMEs, which are well detected in the HI-1 imagery, and compare these results with those based on manual-cataloging methodologies. We discuss, in detail, the advantages and disadvantages of this method.
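
    The central idea, finding inclined bright tracks in a time-height map with the Hough transform, can be sketched briefly. This is a schematic illustration under assumptions (a precomputed time-height map and a simple intensity threshold), not the modified CACTus implementation used in the paper.

        import numpy as np
        from skimage.transform import hough_line, hough_line_peaks

        def detect_cme_tracks(time_height_map, intensity_threshold, num_peaks=5):
            """Return (angle, distance) parameters of the strongest straight
            tracks in a binarized time-height map; each track is a candidate
            CME front propagating outward at roughly constant speed."""
            binary = time_height_map > intensity_threshold
            hspace, angles, dists = hough_line(binary)
            _, best_angles, best_dists = hough_line_peaks(hspace, angles, dists,
                                                          num_peaks=num_peaks)
            return list(zip(best_angles, best_dists))

    The slope of each detected line translates directly into an apparent (plane-of-sky) CME speed.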

  18. Automated analysis of images acquired with electronic portal imaging device during delivery of quality assurance plans for inversely optimized arc therapy

    DEFF Research Database (Denmark)

    Fredh, Anna; Korreman, Stine; Rosenschöld, Per Munck af

    2010-01-01

    This work presents an automated method for comprehensively analyzing EPID images acquired for quality assurance of RapidArc treatment delivery. In-house-developed software has been used for the analysis, and long-term results from measurements on three linacs are presented.

  19. Improving cervical region of interest by eliminating vaginal walls and cotton-swabs for automated image analysis

    Science.gov (United States)

    Venkataraman, Sankar; Li, Wenjing

    2008-03-01

    Image analysis for automated diagnosis of cervical cancer has attained high prominence in the last decade. Automated image analysis at all levels requires a basic segmentation of the region of interest (ROI) within a given image. The precision of the diagnosis is often reflected by the precision in detecting the initial region of interest, especially when some features outside the ROI mimic the ones within the same. Work described here discusses algorithms that are used to improve the cervical region of interest as a part of automated cervical image diagnosis. A vital visual aid in diagnosing cervical cancer is the aceto-whitening of the cervix after the application of acetic acid. Color and texture are used to segment acetowhite regions within the cervical ROI. Vaginal walls along with cotton swabs sometimes mimic these essential features, leading to several false positives. Work presented here is focused towards detecting in-focus vaginal wall boundaries and then extrapolating them to exclude vaginal walls from the cervical ROI. In addition, discussed here is a marker-controlled watershed segmentation that is used to detect cotton swabs within the cervical ROI. A dataset comprising 50 high resolution images of the cervix acquired after 60 seconds of acetic acid application was used to test the algorithm. Out of the 50 images, 27 benefited from a new cervical ROI. Significant improvement in overall diagnosis was observed in these images as false positives caused by features outside the actual ROI mimicking the acetowhite region were eliminated.

  20. Automated Transient Recovery Algorithm using Discrete Zernike Polynomials on Image-Subtracted Data

    Science.gov (United States)

    Ackley, Kendall; Eikenberry, Stephen S.; Klimenko, Sergey

    2016-01-01

    We present an unsupervised algorithm for the automated identification of astrophysical transients recovered through image subtraction techniques. We use a set of discrete Zernike polynomials to decompose and characterize residual energy discovered in the final subtracted image, identifying candidate sources which appear point-like in nature. This work is motivated for use in collaboration with Advanced gravitational wave (GW) interferometers, such as Advanced LIGO and Virgo, where multiwavelength electromagnetic (EM) emission is expected in parallel with gravitational radiation from compact binary object mergers of neutron stars (NS-NS) and stellar-mass black holes (NS-BH). Imaging an EM counterpart coincident with a GW trigger will help to constrain the multi-dimensional GW parameter space as well as aid in the resolution of long-standing astrophysical mysteries, such as the true nature of the progenitor relationship between short-duration GRBs and massive compact binary mergers. We are working on making our method an open-source package optimized for low-latency response for community use during the upcoming era of GW astronomy.

  1. Methodology for fully automated segmentation and plaque characterization in intracoronary optical coherence tomography images.

    Science.gov (United States)

    Athanasiou, Lambros S; Bourantas, Christos V; Rigas, George; Sakellarios, Antonis I; Exarchos, Themis P; Siogkas, Panagiotis K; Ricciardi, Andrea; Naka, Katerina K; Papafaklis, Michail I; Michalis, Lampros K; Prati, Francesco; Fotiadis, Dimitrios I

    2014-02-01

    Optical coherence tomography (OCT) is a light-based intracoronary imaging modality that provides high-resolution cross-sectional images of the luminal and plaque morphology. Currently, the segmentation of OCT images and identification of the composition of plaque are mainly performed manually by expert observers. However, this process is laborious and time consuming and its accuracy relies on the expertise of the observer. To address these limitations, we present a methodology that is able to process the OCT data in a fully automated fashion. The proposed methodology is able to detect the lumen borders in the OCT frames, identify the plaque region, and detect four tissue types: calcium (CA), lipid tissue (LT), fibrous tissue (FT), and mixed tissue (MT). The efficiency of the developed methodology was evaluated using annotations from 27 OCT pullbacks acquired from 22 patients. High Pearson's correlation coefficients were obtained between the output of the developed methodology and the manual annotations (from 0.96 to 0.99), while no significant bias with good limits of agreement was shown in the Bland-Altman analysis. The overlapping areas ratio between experts' annotations and methodology in detecting CA, LT, FT, and MT was 0.81, 0.71, 0.87, and 0.81, respectively.

  2. A method for automated snow avalanche debris detection through use of synthetic aperture radar (SAR) imaging

    Science.gov (United States)

    Vickers, H.; Eckerstorfer, M.; Malnes, E.; Larsen, Y.; Hindberg, H.

    2016-11-01

    Avalanches are a natural hazard that occur in mountainous regions of Troms County in northern Norway during winter and can cause loss of human life and damage to infrastructure. Knowledge of when and where they occur especially in remote, high mountain areas is often lacking due to difficult access. However, complete, spatiotemporal avalanche activity data sets are important for accurate avalanche forecasting, as well as for deeper understanding of the link between avalanche occurrences and the triggering snowpack and meteorological factors. It is therefore desirable to develop a technique that enables active mapping and monitoring of avalanches over an entire winter. Avalanche debris can be observed remotely over large spatial areas, under all weather and light conditions by synthetic aperture radar (SAR) satellites. The recently launched Sentinel-1A satellite acquires SAR images covering the entire Troms County with frequent updates. By focusing on a case study from New Year 2015 we use Sentinel-1A images to develop an automated avalanche debris detection algorithm that utilizes change detection and unsupervised object classification methods. We compare our results with manually identified avalanche debris and field-based images to quantify the algorithm accuracy. Our results indicate that a correct detection rate of over 60% can be achieved, which is sensitive to several algorithm parameters that may need revising. With further development and refinement of the algorithm, we believe that this method could play an effective role in future operational monitoring of avalanches within Troms and has potential application in avalanche forecasting areas worldwide.
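
    A minimal change-detection core for flagging candidate debris, assuming two co-registered, calibrated Sentinel-1 backscatter images in dB and a user-chosen change threshold, might look like the sketch below; the unsupervised object classification and masking steps of the actual algorithm are not reproduced.

        import numpy as np
        from scipy.ndimage import median_filter, label

        def avalanche_debris_candidates(ref_db, act_db, change_db=4.0, min_pixels=50):
            """Connected regions where backscatter increased markedly between a
            reference image and an avalanche-activity image (both in dB)."""
            diff = median_filter(act_db - ref_db, size=3)   # crude speckle suppression
            change = diff > change_db                       # debris backscatters more strongly
            labeled, _ = label(change)
            sizes = np.bincount(labeled.ravel())
            valid = np.where(sizes >= min_pixels)[0]
            valid = valid[valid != 0]                       # drop the background label
            return np.isin(labeled, valid)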

  3. Automated centreline extraction of neuronal dendrite from optical microscopy image stacks

    Science.gov (United States)

    Xiao, Liang; Zhang, Fanbiao

    2010-11-01

    In this work we present a novel vision-based pipeline for automated skeleton detection and centreline extraction of neuronal dendrites from optical microscopy image stacks. The proposed pipeline is an integrated solution that merges image stack pre-processing, seed point detection, a ridge traversal procedure, minimum spanning tree optimization and tree trimming into a unified framework to deal with this challenging problem. In image stack preprocessing, we first apply a curvelet transform based shrinkage and cycle spinning technique to remove the noise. This is followed by an adaptive threshold method to compute the neuronal object segmentation, and a 3D distance transformation is performed to get the distance map. According to the eigenvalues and eigenvectors of the Hessian matrix, the skeleton seed points are detected. Starting from the seed points, the initial centrelines are obtained using a ridge traversal procedure. After that, we use a minimum spanning tree to organize the geometrical structure of the skeleton points, and then we use graph trimming post-processing to compute the final centreline. Experimental results on different datasets demonstrate that our approach has high reliability, good robustness and requires less user interaction.
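
    The minimum-spanning-tree step that organizes the skeleton points is easy to illustrate in isolation. The sketch below makes no attempt at the curvelet denoising, Hessian-based seed detection, or ridge traversal described above; it simply assumes an array of candidate centreline points and links them by an MST over Euclidean distances.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.sparse.csgraph import minimum_spanning_tree

        def skeleton_tree(seed_points):
            """Minimum spanning tree over candidate centreline points.

            seed_points : (n, 3) array of (z, y, x) coordinates
            Returns a sparse (n, n) matrix whose nonzero entries are tree edges
            weighted by Euclidean distance.
            """
            dist = squareform(pdist(seed_points))   # dense pairwise distance matrix
            return minimum_spanning_tree(dist)

    Trimming short or spurious branches of this tree would then yield the final centreline, as in the pipeline's last stage.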

  4. Cell Image Velocimetry (CIV): boosting the automated quantification of cell migration in wound healing assays.

    Science.gov (United States)

    Milde, Florian; Franco, Davide; Ferrari, Aldo; Kurtcuoglu, Vartan; Poulikakos, Dimos; Koumoutsakos, Petros

    2012-11-01

    Cell migration is commonly quantified by tracking the speed of the cell layer interface in wound healing assays. This quantification is often hampered by low signal to noise ratio, in particular when complex substrates are employed to emulate in vivo cell migration in geometrically complex environments. Moreover, information about the cell motion, readily available inside the migrating cell layers, is not usually harvested. We introduce Cell Image Velocimetry (CIV), a combination of cell layer segmentation and image velocimetry algorithms, to drastically enhance the quantification of cell migration by wound healing assays. The resulting software analyses the speed of the interface as well as the detailed velocity field inside the cell layers in an automated fashion. CIV is shown to be highly robust for images with low signal to noise ratio, low contrast and frame shifting and it is portable across various experimental settings. The modular design and parametrization of CIV is not restricted to wound healing assays and allows for the exploration and quantification of flow phenomena in any optical microscopy dataset. Here, we demonstrate the capabilities of CIV in wound healing assays over topographically engineered surfaces and quantify the relative merits of differently aligned gratings on cell migration.
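
    The velocimetry half of such a tool reduces to window-wise cross-correlation between consecutive frames. The sketch below is a generic, assumption-laden illustration (non-overlapping interrogation windows, phase correlation, displacements in pixels per frame), not the CIV software itself, and it leaves out the cell-layer segmentation entirely.

        import numpy as np
        from skimage.registration import phase_cross_correlation

        def window_velocities(frame_a, frame_b, win=32):
            """Coarse velocity field between two frames: one (row, col, dy, dx)
            record per interrogation window."""
            vectors = []
            rows, cols = frame_a.shape
            for r in range(0, rows - win, win):
                for c in range(0, cols - win, win):
                    a = frame_a[r:r + win, c:c + win]
                    b = frame_b[r:r + win, c:c + win]
                    shift, _, _ = phase_cross_correlation(a, b, upsample_factor=10)
                    vectors.append((r + win // 2, c + win // 2, shift[0], shift[1]))
            return np.array(vectors)

    Restricting the analysis to windows inside the segmented cell layer, and averaging the vectors near the interface, recovers the two quantities the abstract mentions: the interface speed and the internal velocity field.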

  5. Automated Segmentation of in Vivo and Ex Vivo Mouse Brain Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Alize E.H. Scheenstra

    2009-01-01

    Full Text Available Segmentation of magnetic resonance imaging (MRI) data is required for many applications, such as the comparison of different structures or time points, and for annotation purposes. Currently, the gold standard for automated image segmentation is nonlinear atlas-based segmentation. However, these methods are either not sufficient or highly time consuming for mouse brains, owing to the low signal to noise ratio and low contrast between structures compared with other applications. We present a novel generic approach to reduce processing time for segmentation of various structures of mouse brains, in vivo and ex vivo. The segmentation consists of a rough affine registration to a template followed by a clustering approach to refine the rough segmentation near the edges. Compared with manual segmentations, the presented segmentation method has an average kappa index of 0.7 for 7 of 12 structures in in vivo MRI and 11 of 12 structures in ex vivo MRI. Furthermore, we found that these results were equal to the performance of a nonlinear segmentation method, but with the advantage of being 8 times faster. The presented automatic segmentation method is quick and intuitive and can be used for image registration, volume quantification of structures, and annotation.

  6. Semi-automated porosity identification from thin section images using image analysis and intelligent discriminant classifiers

    Science.gov (United States)

    Ghiasi-Freez, Javad; Soleimanpour, Iman; Kadkhodaie-Ilkhchi, Ali; Ziaii, Mansur; Sedighi, Mahdi; Hatampour, Amir

    2012-08-01

    Identification of different types of porosity within a reservoir rock is a functional parameter for reservoir characterization since various pore types play different roles in fluid transport and also, the pore spaces determine the fluid storage capacity of the reservoir. The present paper introduces a model for semi-automatic identification of porosity types within thin section images. To get this goal, a pattern recognition algorithm is followed. Firstly, six geometrical shape parameters of sixteen largest pores of each image are extracted using image analysis techniques. The extracted parameters and their corresponding pore types of 294 pores are used for training two intelligent discriminant classifiers, namely linear and quadratic discriminant analysis. The trained classifiers take the geometrical features of the pores to identify the type and percentage of five types of porosity, including interparticle, intraparticle, oomoldic, biomoldic, and vuggy in each image. The accuracy of classifiers is determined from two standpoints. Firstly, the predicted and measured percentages of each type of porosity are compared with each other. The results indicate reliable performance for predicting percentage of each type of porosity. In the second step, the precisions of classifiers for categorizing the pore spaces are analyzed. The classifiers also took a high acceptance score when used for individual recognition of pore spaces. The proposed methodology is a further promising application for petroleum geologists allowing statistical study of pore types in a rapid and accurate way.
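
    The feature-extraction and discriminant-classification steps translate naturally into a few lines of Python. The sketch below is an illustrative approximation under assumptions (a binary pore mask per thin-section image and a previously labeled training set); the six shape descriptors chosen here are generic stand-ins and not necessarily the exact parameters used in the paper.

        import numpy as np
        from skimage.measure import label, regionprops
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def pore_shape_features(pore_mask, n_largest=16):
            """Geometric descriptors for the n largest pores in a binary
            thin-section mask."""
            regions = sorted(regionprops(label(pore_mask)),
                             key=lambda r: r.area, reverse=True)[:n_largest]
            return np.array([[r.area, r.eccentricity, r.solidity, r.extent,
                              r.perimeter, r.major_axis_length] for r in regions])

        # Training on labeled pores (features X_train, porosity-type labels y_train),
        # then predicting the porosity type of each pore in a new image:
        #   clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
        #   pore_types = clf.predict(pore_shape_features(new_mask))

    Swapping LinearDiscriminantAnalysis for QuadraticDiscriminantAnalysis gives the second classifier compared in the study.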

  7. A new automated method for analysis of gated-SPECT images based on a three-dimensional heart shaped model

    DEFF Research Database (Denmark)

    Lomsky, Milan; Richter, Jens; Johansson, Lena

    2005-01-01

    A new automated method for quantification of left ventricular function from gated-single photon emission computed tomography (SPECT) images has been developed. The method for quantification of cardiac function (CAFU) is based on a heart shaped model and the active shape algorithm. The model...

  8. Automated high-throughput assessment of prostate biopsy tissue using infrared spectroscopic chemical imaging

    Science.gov (United States)

    Bassan, Paul; Sachdeva, Ashwin; Shanks, Jonathan H.; Brown, Mick D.; Clarke, Noel W.; Gardner, Peter

    2014-03-01

    Fourier transform infrared (FT-IR) chemical imaging has been demonstrated as a promising technique to complement histopathological assessment of biomedical tissue samples. Current histopathology practice involves preparing thin tissue sections and staining them using hematoxylin and eosin (H&E), after which a histopathologist manually assesses the tissue architecture under a visible microscope. Studies have shown that there is disagreement between operators viewing the same tissue, suggesting that a complementary technique for verification could improve the robustness of the evaluation, and improve patient care. FT-IR chemical imaging allows the spatial distribution of chemistry to be rapidly imaged at a high (diffraction-limited) spatial resolution where each pixel represents an area of 5.5 × 5.5 μm² and contains a full infrared spectrum providing a chemical fingerprint which studies have shown contains the diagnostic potential to discriminate between different cell-types, and even the benign or malignant state of prostatic epithelial cells. We report a label-free (i.e. no chemical de-waxing, or staining) method of imaging large pieces of prostate tissue (typically 1 cm × 2 cm) in tens of minutes (at a rate of 0.704 × 0.704 mm² every 14.5 s) yielding images containing millions of spectra. Due to refractive index matching between sample and surrounding paraffin, minimal signal processing is required to recover spectra with their natural profile as opposed to harsh baseline correction methods, paving the way for future quantitative analysis of biochemical signatures. The quality of the spectral information is demonstrated by building and testing an automated cell-type classifier based upon spectral features.

  9. Breast Imaging Reporting and Data System (BI-RADS) breast composition descriptors: Automated measurement development for full field digital mammography

    OpenAIRE

    Fowler, E. E.; Sellers, T.A.; Lu, B.; Heine, J.J.

    2013-01-01

    Purpose: The Breast Imaging Reporting and Data System (BI-RADS) breast composition descriptors are used for standardized mammographic reporting and are assessed visually. This reporting is clinically relevant because breast composition can impact mammographic sensitivity and is a breast cancer risk factor. New techniques are presented and evaluated for generating automated BI-RADS breast composition descriptors using both raw and calibrated full field digital mammography (FFDM) image data.

  10. Rapid and Semi-Automated Extraction of Neuronal Cell Bodies and Nuclei from Electron Microscopy Image Stacks

    Science.gov (United States)

    Holcomb, Paul S.; Morehead, Michael; Doretto, Gianfranco; Chen, Peter; Berg, Stuart; Plaza, Stephen; Spirou, George

    2016-01-01

    Connectomics—the study of how neurons wire together in the brain—is at the forefront of modern neuroscience research. However, many connectomics studies are limited by the time and precision needed to correctly segment large volumes of electron microscopy (EM) image data. We present here a semi-automated segmentation pipeline using freely available software that can significantly decrease segmentation time for extracting both nuclei and cell bodies from EM image volumes. PMID:27259933

  11. LOCALIZATION OF PALM DORSAL VEIN PATTERN USING IMAGE PROCESSING FOR AUTOMATED INTRA-VENOUS DRUG NEEDLE INSERTION

    OpenAIRE

    Mrs. Kavitha. R,; Tripty Singh

    2011-01-01

    Vein pattern in palms is a random mesh of interconnected and intertwining blood vessels. This project is an application of the vein detection concept to automate the drug delivery process. It deals with extracting palm dorsal vein structures, which is a key procedure for selecting the optimal drug needle insertion point. Grayscale images obtained from a low-cost IR webcam are poor in contrast and usually noisy, which makes effective vein segmentation a great challenge. Here a new vein image s...

  12. Hyper-Cam automated calibration method for continuous hyperspectral imaging measurements

    Science.gov (United States)

    Gagnon, Jean-Philippe; Habte, Zewdu; George, Jacks; Farley, Vincent; Tremblay, Pierre; Chamberland, Martin; Romano, Joao; Rosario, Dalton

    2010-04-01

    The midwave and longwave infrared regions of the electromagnetic spectrum contain rich information which can be captured by hyperspectral sensors, thus enabling enhanced detection of targets of interest. A continuous hyperspectral imaging measurement capability operated 24/7 over varying seasons and weather conditions permits the evaluation of hyperspectral imaging for detection of different types of targets in real world environments. Such a measurement site was built at Picatinny Arsenal under the Spectral and Polarimetric Imagery Collection Experiment (SPICE), where two Hyper-Cam hyperspectral imagers are installed at the Precision Armament Laboratory (PAL) and have been operated autonomously since the fall of 2009. The Hyper-Cam sensors are currently collecting a complete hyperspectral database that contains the MWIR and LWIR hyperspectral measurements of several targets under day, night, sunny, cloudy, foggy, rainy and snowy conditions. The Telops Hyper-Cam sensor is an imaging spectrometer that provides spatial and spectral analysis capabilities in a single sensor. It is based on Fourier-transform technology, yielding high spectral resolution and enabling high-accuracy radiometric calibration. It provides datacubes of up to 320 × 256 pixels at spectral resolutions of up to 0.25 cm⁻¹. The MWIR version covers the 3 to 5 μm spectral range and the LWIR version covers the 8 to 12 μm spectral range. This paper describes the automated operation of the two Hyper-Cam sensors being used in the SPICE data collection. The Reveal Automation Control Software (RACS), developed collaboratively between Telops, ARDEC, and ARL, enables flexible operating parameters and autonomous calibration. Under the RACS software, the Hyper-Cam sensors can autonomously calibrate themselves using their internal blackbody targets, with calibration events initiated at user-defined time intervals and by internal beamsplitter temperature monitoring. The RACS software is the first software developed for

  13. Boosting accuracy of automated classification of fluorescence microscope images for location proteomics

    Directory of Open Access Journals (Sweden)

    Huang Kai

    2004-06-01

    accuracy for single 2D images being higher than 90% for the first time. In particular, the classification accuracy for the easily confused endomembrane compartments (endoplasmic reticulum, Golgi, endosomes, lysosomes) was improved by 5–15%. We achieved further improvements when classification was conducted on image sets rather than on individual cell images. Conclusions The availability of accurate, fast, automated classification systems for protein location patterns, in conjunction with high-throughput fluorescence microscope imaging techniques, enables a new subfield of proteomics, location proteomics. The accuracy and sensitivity of this approach represents an important alternative to low-resolution assignments by curation or sequence-based prediction.

  14. Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells

    Science.gov (United States)

    Park, Han Sang; Rinehart, Matthew T.; Walzer, Katelyn A.; Chi, Jen-Tsan Ashley; Wax, Adam

    2016-01-01

    Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that heavily relies on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at the trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, no single descriptor separates the populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy of up to 99.7% in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity, with early trophozoites most often mistaken for the late trophozoite or schizont stage, and the late trophozoite and schizont stages most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection
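
    As a rough illustration of the classifier-combination step described in this record, the sketch below compares the three named classifier families on placeholder morphological descriptors; the synthetic data, class balance, and cross-validation setup are illustrative assumptions only:

      # Sketch: comparing LDC, LR, and NNC on placeholder phase-derived
      # descriptors (23 per cell); synthetic data, not study data.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.linear_model import LogisticRegression
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 23))    # 23 morphological descriptors per cell
      y = rng.integers(0, 2, 500)       # 0 = uninfected, 1 = infected (placeholder)

      classifiers = {
          "LDC": LinearDiscriminantAnalysis(),
          "LR": LogisticRegression(max_iter=1000),
          "NNC": KNeighborsClassifier(n_neighbors=5),
      }
      for name, clf in classifiers.items():
          scores = cross_val_score(clf, X, y, cv=5)
          print(f"{name}: mean CV accuracy = {scores.mean():.3f}")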

  15. Automated local bright feature image analysis of nuclear protein distribution identifies changes in tissue phenotype

    Energy Technology Data Exchange (ETDEWEB)

    Knowles, David; Sudar, Damir; Bator, Carol; Bissell, Mina

    2006-02-01

    The organization of nuclear proteins is linked to cell and tissue phenotypes. When cells arrest proliferation, undergo apoptosis, or differentiate, the distribution of nuclear proteins changes. Conversely, forced alteration of the distribution of nuclear proteins modifies cell phenotype. Immunostaining and fluorescence microscopy have been critical for such findings. However, there is an increasing need for quantitative analysis of nuclear protein distribution to decipher epigenetic relationships between nuclear structure and cell phenotype, and to unravel the mechanisms linking nuclear structure and function. We have developed imaging methods to quantify the distribution of fluorescently-stained nuclear protein NuMA in different mammary phenotypes obtained using three-dimensional cell culture. Automated image segmentation of DAPI-stained nuclei was generated to isolate thousands of nuclei from three-dimensional confocal images. Prominent features of fluorescently-stained NuMA were detected using a novel local bright feature analysis technique, and their normalized spatial density calculated as a function of the distance from the nuclear perimeter to its center. The results revealed marked changes in the distribution of the density of NuMA bright features as non-neoplastic cells underwent phenotypically normal acinar morphogenesis. In contrast, we did not detect any reorganization of NuMA during the formation of tumor nodules by malignant cells. Importantly, the analysis also discriminated proliferating non-neoplastic cells from proliferating malignant cells, suggesting that these imaging methods are capable of identifying alterations linked not only to the proliferation status but also to the malignant character of cells. We believe that this quantitative analysis will have additional applications for classifying normal and pathological tissues.
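
    A minimal sketch of the radial-density measurement described here, assuming a binary nucleus mask and bright-feature coordinates are already available; the toy circular nucleus and randomly placed features are placeholders, not the published local bright feature detector:

      # Sketch: normalized radial density of bright features inside a nucleus
      # mask, using a Euclidean distance transform from the nuclear perimeter.
      import numpy as np
      from scipy import ndimage

      nucleus = np.zeros((200, 200), bool)
      yy, xx = np.ogrid[:200, :200]
      nucleus[(yy - 100) ** 2 + (xx - 100) ** 2 < 80 ** 2] = True   # toy circular nucleus

      # Distance of every nuclear pixel from the perimeter, normalized so that
      # 0 = perimeter and 1 = deepest interior point (nuclear center).
      dist = ndimage.distance_transform_edt(nucleus)
      dist_norm = dist / dist.max()

      rng = np.random.default_rng(2)
      feat_r, feat_c = np.nonzero(nucleus)
      pick = rng.choice(len(feat_r), 50, replace=False)   # placeholder bright features

      density, edges = np.histogram(dist_norm[feat_r[pick], feat_c[pick]],
                                    bins=10, range=(0, 1))
      print(density / density.sum())   # fraction of features per radial shell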

  16. Content-based versus semantic-based retrieval: an LIDC case study

    Science.gov (United States)

    Jabon, Sarah A.; Raicu, Daniela S.; Furst, Jacob D.

    2009-02-01

    Content based image retrieval is an active area of medical imaging research. One use of content based image retrieval (CBIR) is presentation of known, reference images similar to an unknown case. These comparison images may reduce the radiologist's uncertainty in interpreting that case. It is, therefore, important to present radiologists with systems whose computed-similarity results correspond to human perceived-similarity. In our previous work, we developed an open-source CBIR system that inputs a computed tomography (CT) image of a lung nodule as a query and retrieves similar lung nodule images based on content-based image features. In this paper, we extend our previous work by studying the relationships between the two types of retrieval, content-based and semantic-based, with the final goal of integrating them into a system that will take advantage of both retrieval approaches. Our preliminary results on the Lung Image Database Consortium (LIDC) dataset using four types of image features, seven radiologists' rated semantic characteristics and two simple similarity measures show that a substantial number of nodules identified as similar based on image features are also identified as similar based on semantic characteristics. Furthermore, by integrating the two types of features, the similarity retrieval improves with respect to certain nodule characteristics.

  17. Primary histologic diagnosis using automated whole slide imaging: a validation study

    Directory of Open Access Journals (Sweden)

    Jukic Drazen M

    2006-04-01

    Full Text Available Abstract Background Only prototypes 5 years ago, high-speed, automated whole slide imaging (WSI) systems (also called digital slide systems, virtual microscopes or wide field imagers) are becoming increasingly capable and robust. Modern devices can capture a slide in 5 minutes at spatial sampling periods of less than 0.5 micron/pixel. The capacity to rapidly digitize large numbers of slides should eventually have a profound, positive impact on pathology. It is important, however, that pathologists validate these systems during development, not only to identify their limitations but to guide their evolution. Methods Three pathologists fully signed out 25 cases representing 31 parts. The laboratory information system was used to simulate real-world sign-out conditions, including entering a full diagnostic field and comment (when appropriate) and ordering special stains and recuts. For each case, discrepancies between diagnoses were documented by committee and a "consensus" report was formed and then compared with the microscope-based, sign-out report from the clinical archive. Results In 17 of 25 cases there were no discrepancies between the individual study pathologist reports. In 8 of the remaining cases, there were 12 discrepancies, including 3 in which image quality could be at least partially implicated. When the WSI consensus diagnoses were compared with the original sign-out diagnoses, no significant discrepancies were found. Full text of the pathologist reports, the WSI consensus diagnoses, and the original sign-out diagnoses are available as an attachment to this publication. Conclusion The results indicated that the image information contained in current whole slide images is sufficient for pathologists to make reliable diagnostic decisions and compose complex diagnostic reports. This is a very positive result; however, this does not mean that WSI is as good as a microscope. Virtually every slide had focal areas in which image quality (focus

  18. Fully automated prostate magnetic resonance imaging and transrectal ultrasound fusion via a probabilistic registration metric

    Science.gov (United States)

    Sparks, Rachel; Bloch, B. Nicholas; Feleppa, Ernest; Barratt, Dean; Madabhushi, Anant

    2013-03-01

    In this work, we present a novel, automated, registration method to fuse magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) images of the prostate. Our methodology consists of: (1) delineating the prostate on MRI, (2) building a probabilistic model of prostate location on TRUS, and (3) aligning the MRI prostate segmentation to the TRUS probabilistic model. TRUS-guided needle biopsy is the current gold standard for prostate cancer (CaP) diagnosis. Up to 40% of CaP lesions appear isoechoic on TRUS, hence TRUS-guided biopsy cannot reliably target CaP lesions and is associated with a high false negative rate. MRI is better able to distinguish CaP from benign prostatic tissue, but requires special equipment and training. MRI-TRUS fusion, whereby MRI is acquired pre-operatively and aligned to TRUS during the biopsy procedure, allows for information from both modalities to be used to help guide the biopsy. The use of MRI and TRUS in combination to guide biopsy at least doubles the yield of positive biopsies. Previous work on MRI-TRUS fusion has involved aligning manually determined fiducials or prostate surfaces to achieve image registration. The accuracy of these methods is dependent on the reader's ability to determine fiducials or prostate surfaces with minimal error, which is a difficult and time-consuming task. Our novel, fully automated MRI-TRUS fusion method represents a significant advance over the current state-of-the-art because it does not require manual intervention after TRUS acquisition. All necessary preprocessing steps (i.e. delineation of the prostate on MRI) can be performed offline prior to the biopsy procedure. We evaluated our method on seven patient studies, with B-mode TRUS and a 1.5 T surface coil MRI. Our method has a root mean square error (RMSE) for expertly selected fiducials (consisting of the urethra, calcifications, and the centroids of CaP nodules) of 3.39 +/- 0.85 mm.

  19. Automated image analysis reveals the dynamic 3-dimensional organization of multi-ciliary arrays.

    Science.gov (United States)

    Galati, Domenico F; Abuin, David S; Tauber, Gabriel A; Pham, Andrew T; Pearson, Chad G

    2015-12-23

    Multi-ciliated cells (MCCs) use polarized fields of undulating cilia (ciliary array) to produce fluid flow that is essential for many biological processes. Cilia are positioned by microtubule scaffolds called basal bodies (BBs) that are arranged within a spatially complex 3-dimensional geometry (3D). Here, we develop a robust and automated computational image analysis routine to quantify 3D BB organization in the ciliate, Tetrahymena thermophila. Using this routine, we generate the first morphologically constrained 3D reconstructions of Tetrahymena cells and elucidate rules that govern the kinetics of MCC organization. We demonstrate the interplay between BB duplication and cell size expansion through the cell cycle. In mutant cells, we identify a potential BB surveillance mechanism that balances large gaps in BB spacing by increasing the frequency of closely spaced BBs in other regions of the cell. Finally, by taking advantage of a mutant predisposed to BB disorganization, we locate the spatial domains that are most prone to disorganization by environmental stimuli. Collectively, our analyses reveal the importance of quantitative image analysis to understand the principles that guide the 3D organization of MCCs.

  20. Automated hierarchical time gain compensation for in-vivo ultrasound imaging

    Science.gov (United States)

    Moshavegh, Ramin; Hemmsen, Martin C.; Martins, Bo; Brandt, Andreas H.; Hansen, Kristoffer L.; Nielsen, Michael B.; Jensen, Jørgen A.

    2015-03-01

    Time gain compensation (TGC) is essential to ensure optimal image quality in clinical ultrasound scans. When large fluid collections are present within the scan plane, the attenuation distribution changes drastically and TGC becomes challenging. This paper presents an automated hierarchical TGC (AHTGC) algorithm that accurately adapts to the large attenuation variation between different types of tissues and structures. The algorithm relies on estimates of tissue attenuation, scattering strength, and noise level to gain a more quantitative understanding of the underlying tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences, each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC, were visualized side by side and evaluated by two radiologists in terms of image quality. The Wilcoxon signed-rank test was used to evaluate whether radiologists preferred the processed sequences or the unprocessed data. The results indicate that the average visual analogue scale (VAS) is positive (p-value: 2.34 × 10⁻¹³) and estimated to be 1.01 (95% CI: 0.85; 1.16), favoring the data processed with the proposed AHTGC algorithm.
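
    For reference, a minimal sketch of the statistical test named above, applied to placeholder per-sequence visual analogue scale scores (the synthetic scores are assumptions, not study data):

      # Sketch: Wilcoxon signed-rank test on paired VAS scores, testing whether
      # radiologists preferred processed over unprocessed sequences.
      import numpy as np
      from scipy.stats import wilcoxon

      rng = np.random.default_rng(3)
      vas = rng.normal(loc=1.0, scale=0.5, size=44)   # placeholder per-sequence VAS scores

      stat, p_value = wilcoxon(vas)                   # H0: scores symmetric about zero
      print(f"median VAS = {np.median(vas):.2f}, p = {p_value:.3g}")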

  1. Automated Waterline Detection in the Wadden Sea Using High-Resolution TerraSAR-X Images

    Directory of Open Access Journals (Sweden)

    Stefan Wiehle

    2015-01-01

    Full Text Available We present an algorithm for automatic detection of the land-water-line from TerraSAR-X images acquired over the Wadden Sea. In this coastal region of the southeastern North Sea, a strip of up to 20 km of seabed falls dry during low tide, revealing mudflats and tidal creeks. The tidal currents transport sediments and can change the coastal shape with erosion rates of several meters per month. This rate can be strongly increased by storm surges which also cause flooding of usually dry areas. Due to the high number of ships traveling through the Wadden Sea to the largest ports of Germany, frequent monitoring of the bathymetry is also an important task for maritime security. For such an extended area and the required short intervals of a few months, only remote sensing methods can perform this task efficiently. Automating the waterline detection in weather-independent radar images provides a fast and reliable way to spot changes in the coastal topography. The presented algorithm first performs smoothing, brightness thresholding, and edge detection. In the second step, edge drawing and flood filling are iteratively performed to determine optimal thresholds for the edge drawing. In the last step, small misdetections are removed.
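
    A minimal sketch of the first stage of the detection chain described in this record (smoothing, brightness thresholding, edge detection), applied to a synthetic speckled image; the iterative edge-drawing and flood-filling refinement is not reproduced, and all parameters are illustrative assumptions:

      # Sketch: waterline candidate extraction from a placeholder SAR amplitude
      # image via smoothing, Otsu brightness thresholding, and edge detection.
      import numpy as np
      from scipy import ndimage
      from skimage import filters, feature

      rng = np.random.default_rng(4)
      sar = rng.gamma(shape=2.0, scale=1.0, size=(256, 256))   # placeholder speckled image
      sar[:, 128:] *= 0.3                                      # darker "water" half

      smoothed = filters.gaussian(sar, sigma=3)                # speckle suppression
      water = smoothed < filters.threshold_otsu(smoothed)      # brightness threshold
      water = ndimage.binary_fill_holes(water)                 # remove small gaps
      edges = feature.canny(smoothed, sigma=2)                 # candidate edge pixels

      waterline = water ^ ndimage.binary_erosion(water)        # boundary of water mask
      print("waterline pixels:", int(waterline.sum()))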

  2. Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis

    Science.gov (United States)

    Chung, Howard; Cobzas, Dana; Birdsell, Laura; Lieffers, Jessica; Baracos, Vickie

    2009-02-01

    The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease associates with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle, visceral and subcutaneous adipose tissues, taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA encoded Free Form Deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) (1 - 2%), were similar to or smaller than inter- and intra-observer COVs reported for manual segmentation.

  3. Automated detection and segmentation of synaptic contacts in nearly isotropic serial electron microscopy images.

    Science.gov (United States)

    Kreshuk, Anna; Straehle, Christoph N; Sommer, Christoph; Koethe, Ullrich; Cantoni, Marco; Knott, Graham; Hamprecht, Fred A

    2011-01-01

    We describe a protocol for fully automated detection and segmentation of asymmetric, presumed excitatory, synapses in serial electron microscopy images of the adult mammalian cerebral cortex, taken with the focused ion beam, scanning electron microscope (FIB/SEM). The procedure is based on interactive machine learning and only requires a few labeled synapses for training. The statistical learning is performed on geometrical features of 3D neighborhoods of each voxel and can fully exploit the high z-resolution of the data. On a quantitative validation dataset of 111 synapses in 409 images of 1948×1342 pixels with manual annotations by three independent experts the error rate of the algorithm was found to be comparable to that of the experts (0.92 recall at 0.89 precision). Our software offers a convenient interface for labeling the training data and the possibility to visualize and proofread the results in 3D. The source code, the test dataset and the ground truth annotation are freely available on the website http://www.ilastik.org/synapse-detection.

  4. Automated detection and segmentation of synaptic contacts in nearly isotropic serial electron microscopy images.

    Directory of Open Access Journals (Sweden)

    Anna Kreshuk

    Full Text Available We describe a protocol for fully automated detection and segmentation of asymmetric, presumed excitatory, synapses in serial electron microscopy images of the adult mammalian cerebral cortex, taken with the focused ion beam, scanning electron microscope (FIB/SEM). The procedure is based on interactive machine learning and only requires a few labeled synapses for training. The statistical learning is performed on geometrical features of 3D neighborhoods of each voxel and can fully exploit the high z-resolution of the data. On a quantitative validation dataset of 111 synapses in 409 images of 1948×1342 pixels with manual annotations by three independent experts the error rate of the algorithm was found to be comparable to that of the experts (0.92 recall at 0.89 precision). Our software offers a convenient interface for labeling the training data and the possibility to visualize and proofread the results in 3D. The source code, the test dataset and the ground truth annotation are freely available on the website http://www.ilastik.org/synapse-detection.

  5. Automated initial guess in digital image correlation aided by Fourier-Mellin transform

    Science.gov (United States)

    Pan, Bing; Wang, Yuejiao; Tian, Long

    2017-01-01

    The state-of-the-art digital image correlation (DIC) method using iterative spatial-domain cross correlation, e.g., the inverse-compositional Gauss-Newton algorithm, for full-field displacement mapping requires an initial guess of deformation, which should be sufficiently close to the true value to ensure rapid and accurate convergence. Although various initial guess approaches have been proposed, automated, robust, and fast initial guessing remains a challenging task, especially when large rotation occurs in the deformed images. An integrated scheme, which combines Fourier-Mellin transform-based cross correlation (FMT-CC) for seed point initiation with a reliability-guided displacement tracking (RGDT) strategy for the remaining points, is proposed to provide an accurate initial guess for DIC calculation, even in the presence of large rotations. By using the FMT-CC algorithm, the initial guess of the seed point can be automatically and accurately determined between pairs of interrogation subsets with up to ±180 deg of rotation, even in the presence of large translation. Then the initial guess of the remaining calculation points can be accurately predicted by the robust RGDT scheme. The robustness and effectiveness of the present initial guess approach are verified by numerical simulation tests and a real experiment.
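
    The record does not give implementation details; the sketch below illustrates the common Fourier-Mellin construction for recovering a large rotation between two patches (log-polar resampling of the spectrum magnitude followed by phase correlation), using scikit-image. It is not the authors' FMT-CC code, and the sign of the recovered angle depends on the axis convention; a high-pass filter on the spectrum is often added for robustness:

      # Sketch: Fourier-Mellin style rotation recovery via log-polar resampling
      # of the FFT magnitude and phase correlation along the angle axis.
      import numpy as np
      from skimage import data, transform
      from skimage.registration import phase_cross_correlation

      ref = data.camera().astype(float)
      rotated = transform.rotate(ref, angle=35)        # simulated large rotation

      def spectrum_logpolar(img, radius=256):
          mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
          return transform.warp_polar(mag, radius=radius, scaling="log")

      lp_ref = spectrum_logpolar(ref)
      lp_rot = spectrum_logpolar(rotated)

      shift, _, _ = phase_cross_correlation(lp_ref, lp_rot)
      angle = shift[0] * 360.0 / lp_ref.shape[0]       # row shift maps to rotation angle
      print(f"estimated rotation: {angle:.1f} deg")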

  6. Development of an MRI fiducial marker prototype for automated MR-US fusion of abdominal images

    Science.gov (United States)

    Favazza, C. P.; Gorny, K. R.; Washburn, M. J.; Hangiandreou, N. J.

    2014-03-01

    External MRI fiducial marker devices are expected to facilitate robust, accurate, and efficient image fusion between MRI and other modalities. Automating this process requires careful selection of a suitable marker size and material visible across a variety of pulse sequences, design of an appropriate fiducial device, and a robust segmentation algorithm. A set of routine clinical abdominal MRI pulse sequences was used to image a variety of marker materials and a range of marker sizes. The most successfully detected marker was a 12.7 mm diameter cylindrical reservoir filled with a 1 g/L copper sulfate solution. A fiducial device was designed and fabricated from four such markers arranged in a tetrahedral orientation. MRI examinations were performed with the device attached to a phantom and to a volunteer, and a custom-developed algorithm was used to detect and segment the individual markers. The individual markers were accurately segmented in all sequences for both the phantom and the volunteer. The measured intra-marker spacings matched well with the dimensions of the fiducial device. The average deviations from the actual physical spacings were 0.45 +/- 0.40 mm and 0.52 +/- 0.36 mm for the phantom and the volunteer data, respectively. These preliminary results suggest that this general fiducial design and detection algorithm could be used for MRI multimodality fusion applications.

  7. Automated tissue classification of intracardiac optical coherence tomography images (Conference Presentation)

    Science.gov (United States)

    Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.

    2016-03-01

    Remodeling of the myocardium is associated with increased risk of arrhythmia and heart failure. Our objective is to automatically identify regions of fibrotic myocardium, dense collagen, and adipose tissue, which can serve as a way to guide radiofrequency ablation therapy or endomyocardial biopsies. Using computer vision and machine learning, we present an automated algorithm to classify tissue compositions from cardiac optical coherence tomography (OCT) images. Three-dimensional OCT volumes were obtained from 15 human hearts ex vivo within 48 hours of donor death (source, NDRI). We first segmented B-scans using a graph searching method. We estimated the boundary of each region by minimizing a cost function, which consisted of intensity, gradient, and contour smoothness terms. Then, features including texture measures, optical properties, and statistics of higher-order moments were extracted. We used a statistical model, the relevance vector machine, and trained this model with the above-mentioned features to classify tissue compositions. To validate our method, we applied our algorithm to 77 volumes. The datasets for validation were manually segmented and classified by two investigators who were blind to our algorithm results and identified the tissues based on trichrome histology and pathology. The difference between automated and manual segmentation was 51.78 +/- 50.96 μm. Experiments showed that the attenuation coefficients of dense collagen were significantly different from other tissue types (P < 0.05, ANOVA). Importantly, myocardial fibrosis tissues differed from normal myocardium in entropy and kurtosis. The tissue types were classified with an accuracy of 84%. The results show good agreement with histology.

  8. Automated coronary artery calcification detection on low-dose chest CT images

    Science.gov (United States)

    Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. Mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180 HU within the mask region are considered CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular-dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated method's risk category agreed with the manual markings of gated scans for 24 cases, while 15 cases were one category off. For low-dose scans, the automatic method agreed for 33 cases, while 7 cases were one category off.
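
    A minimal sketch of the candidate-voxel step (voxels above 180 HU inside a coronary-region mask) together with an Agatston-style score; the CT slice, mask, pixel size, and the standard Agatston density weights are assumptions for illustration, not the paper's implementation:

      # Sketch: calcification candidates inside a coronary-region mask and an
      # Agatston-style score with the conventional density weights.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(5)
      ct = rng.normal(loc=40, scale=30, size=(128, 128))   # placeholder HU slice
      ct[60:64, 60:64] = 450                               # synthetic calcified plaque
      mask = np.zeros_like(ct, bool)
      mask[30:100, 30:100] = True                          # placeholder coronary region

      pixel_area_mm2 = 0.6 * 0.6                           # assumed in-plane pixel size

      candidates = (ct > 180) & mask
      labels, n = ndimage.label(candidates)
      score = 0.0
      for lesion in range(1, n + 1):
          hu_max = ct[labels == lesion].max()
          weight = 1 + min(int(hu_max // 100) - 1, 3)      # 130-199:1 ... >=400:4
          score += (labels == lesion).sum() * pixel_area_mm2 * weight
      print(f"Agatston-style score: {score:.1f}")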

  9. An automated classification system for the differentiation of obstructive lung diseases based on the textural analysis of HRCT images

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seong Hoon; Seo, Joon Beom; Kim, Nam Kug; Lee, Young Kyung; Kim, Song Soo; Chae, Eun Jin [University of Ulsan, College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Lee, June Goo [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2007-07-15

    To develop an automated classification system for the differentiation of obstructive lung diseases based on the textural analysis of HRCT images, and to evaluate the accuracy and usefulness of the system. For textural analysis, histogram features, gradient features, run-length encoding, and a co-occurrence matrix were employed. A Bayesian classifier was used for automated classification. The images (n = 256) were selected from the HRCT images obtained from 17 healthy subjects (n = 67), 26 patients with bronchiolitis obliterans (n = 70), 28 patients with mild centrilobular emphysema (n = 65), and 21 patients with panlobular emphysema or severe centrilobular emphysema (n = 63). A five-fold cross-validation method was used to assess the performance of the system. Class-specific sensitivities were analyzed and the overall accuracy of the system was assessed with kappa statistics. The sensitivity of the system for each class was as follows: normal lung 84.9%, bronchiolitis obliterans 83.8%, mild centrilobular emphysema 77.0%, and panlobular emphysema or severe centrilobular emphysema 95.8%. The overall performance for differentiating each disease and the normal lung was satisfactory, with a kappa value of 0.779. An automated classification system for the differentiation between obstructive lung diseases based on the textural analysis of HRCT images was developed. The proposed system discriminates well between the various obstructive lung diseases and the normal lung.
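
    A rough sketch of co-occurrence-matrix texture features combined with a simple Bayesian (Gaussian naive Bayes) classifier, using recent scikit-image and scikit-learn; the patch size, feature choice, and placeholder data are assumptions rather than the published feature set:

      # Sketch: GLCM texture features per HRCT patch plus a Gaussian naive
      # Bayes classifier; patches and labels are random placeholders.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.naive_bayes import GaussianNB

      def texture_features(patch_u8):
          glcm = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          return [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]

      rng = np.random.default_rng(6)
      patches = rng.integers(0, 256, size=(256, 32, 32), dtype=np.uint8)  # placeholder ROIs
      labels = rng.integers(0, 4, 256)   # normal / BO / mild CLE / severe emphysema

      X = np.array([texture_features(p) for p in patches])
      clf = GaussianNB().fit(X, labels)
      print("training accuracy (placeholder data):", clf.score(X, labels))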

  10. Comparison of Automated Image-Based Grain Sizing to Standard Pebble Count Methods

    Science.gov (United States)

    Strom, K. B.

    2009-12-01

    This study explores the use of an automated, image-based method for characterizing grain-size distributions (GSDs) of exposed, open-framework gravel beds. This was done by comparing the GSDs measured with an image-based method to distributions obtained with two pebble-count methods. Selection of grains for the two pebble-count methods was carried out using a gridded sampling frame and the heel-to-toe Wolman walk method at six field sites. At each site, 500-particle pebble-count samples were collected with each of the two pebble-count methods, and digital images were systematically collected over the same sampling area. For the methods used, the pebble counts collected with the gridded sampling frame were assumed to be the most accurate representations of the true grain-size population, and results from the image-based method were compared to the grid-derived GSDs for accuracy estimates; comparisons between the grid and Wolman walk methods were conducted to give an indication of possible variation between commonly used methods for each particular field site. Comparisons of grain size were made at two spatial scales. At the larger scale, results from the image-based method were integrated over the sampling area required to collect the 500-particle pebble-count samples. At the smaller sampling scale, the image-derived GSDs were compared to those from 100-particle, pebble-count samples obtained with the gridded sampling frame. The comparisons show that the image-based method performed reasonably well on five of the six study sites. For those five sites, the image-based method slightly underestimated all grain-size percentiles relative to the pebble counts collected with the gridded sampling frame. The average bias for Ψ5, Ψ50, and Ψ95 between the image and grid count methods at the larger sampling scale was 0.07Ψ, 0.04Ψ, and 0.19Ψ respectively; at the smaller sampling scale the average bias was 0.004Ψ, 0.03Ψ, and 0.18Ψ respectively. The average bias between the

  11. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the availability in resource-rich regions of advanced scanning and 3-D imaging technologies in current ophthalmology practice, world-wide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup to disc diameter ratio) and CAR (cup to disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.

  12. Evaluation of a software package for automated quality assessment of contrast detail images--comparison with subjective visual assessment.

    Science.gov (United States)

    Pascoal, A; Lawinski, C P; Honey, I; Blake, P

    2005-12-07

    Contrast detail analysis is commonly used to assess image quality (IQ) associated with diagnostic imaging systems. Applications include routine assessment of equipment performance and optimization studies. Most frequently, the evaluation of contrast detail images involves human observers visually detecting the threshold contrast detail combinations in the image. However, the subjective nature of human perception and the variations in the decision threshold pose limits to the minimum image quality variations detectable with reliability. Objective methods of assessment of image quality such as automated scoring have the potential to overcome the above limitations. A software package (CDRAD analyser) developed for automated scoring of images produced with the CDRAD test object was evaluated. Its performance to assess absolute and relative IQ was compared with that of an average observer. Results show that the software does not mimic the absolute performance of the average observer. The software proved more sensitive and was able to detect smaller low-contrast variations. The observer's performance was superior to the software's in the detection of smaller details. Both scoring methods showed frequent agreement in the detection of image quality variations resulting from changes in kVp and KERMA(detector), which indicates the potential to use the software CDRAD analyser for assessment of relative IQ.

  13. Evaluation of a software package for automated quality assessment of contrast detail images-comparison with subjective visual assessment

    Energy Technology Data Exchange (ETDEWEB)

    Pascoal, A [Medical Engineering and Physics, King's College London, Faraday Building, Denmark Hill, London SE5 8RX (United Kingdom); Lawinski, C P [KCARE - King's Centre for Assessment of Radiological Equipment, King's College Hospital, Faraday Building, Denmark Hill, London SE5 8RX (United Kingdom); Honey, I [KCARE - King's Centre for Assessment of Radiological Equipment, King's College Hospital, Faraday Building, Denmark Hill, London SE5 8RX (United Kingdom); Blake, P [KCARE - King's Centre for Assessment of Radiological Equipment, King's College Hospital, Faraday Building, Denmark Hill, London SE5 8RX (United Kingdom)

    2005-12-07

    Contrast detail analysis is commonly used to assess image quality (IQ) associated with diagnostic imaging systems. Applications include routine assessment of equipment performance and optimization studies. Most frequently, the evaluation of contrast detail images involves human observers visually detecting the threshold contrast detail combinations in the image. However, the subjective nature of human perception and the variations in the decision threshold pose limits to the minimum image quality variations detectable with reliability. Objective methods of assessment of image quality such as automated scoring have the potential to overcome the above limitations. A software package (CDRAD analyser) developed for automated scoring of images produced with the CDRAD test object was evaluated. Its performance to assess absolute and relative IQ was compared with that of an average observer. Results show that the software does not mimic the absolute performance of the average observer. The software proved more sensitive and was able to detect smaller low-contrast variations. The observer's performance was superior to the software's in the detection of smaller details. Both scoring methods showed frequent agreement in the detection of image quality variations resulting from changes in kVp and KERMA(detector), which indicates the potential to use the software CDRAD analyser for assessment of relative IQ.

  14. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    Science.gov (United States)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation describes work being conducted by the Applied Sciences Program Office at NASA-Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data was developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source process code on a local prototype platform, and then transitioning this code with associated environment requirements into an analogous, but memory- and processor-enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions, and then
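
    For reference, the per-pixel index computations mentioned above reduce to simple band arithmetic; the band arrays and names in the sketch below are placeholders:

      # Sketch: NDVI and NDMI computed from placeholder reflectance bands.
      import numpy as np

      rng = np.random.default_rng(7)
      red, nir, swir = (rng.random((512, 512)) for _ in range(3))   # placeholder bands

      eps = 1e-9                                  # avoid division by zero
      ndvi = (nir - red) / (nir + red + eps)      # Normalized Difference Vegetation Index
      ndmi = (nir - swir) / (nir + swir + eps)    # Normalized Difference Moisture Index

      print("NDVI range:", float(ndvi.min()), float(ndvi.max()))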

  15. Automated sub-5 nm image registration in integrated correlative fluorescence and electron microscopy using cathodoluminescence pointers

    Science.gov (United States)

    Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.

    2017-01-01

    In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample. PMID:28252673

  16. Automated sub-5 nm image registration in integrated correlative fluorescence and electron microscopy using cathodoluminescence pointers

    Science.gov (United States)

    Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.

    2017-03-01

    In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample.

  17. Automated method for the rapid and precise estimation of adherent cell culture characteristics from phase contrast microscopy images.

    Science.gov (United States)

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-03-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times. Source code for MATLAB and ImageJ is freely available under a permissive open-source license.
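
    A minimal sketch of local-contrast segmentation and a confluency estimate in the spirit of the approach described here; the window size, threshold, and synthetic image are assumptions, and the published halo-artifact correction is not reproduced:

      # Sketch: local-contrast (local standard deviation) segmentation of a
      # phase contrast image and a simple confluency estimate.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(8)
      pcm = rng.normal(size=(256, 256)) * 0.05                     # placeholder background
      pcm[64:192, 64:192] += rng.normal(size=(128, 128)) * 0.5     # textured "cell" region

      local_mean = ndimage.uniform_filter(pcm, size=15)
      local_sqmean = ndimage.uniform_filter(pcm ** 2, size=15)
      local_std = np.sqrt(np.clip(local_sqmean - local_mean ** 2, 0, None))

      cells = local_std > 0.15                          # contrast threshold (tuned per setup)
      cells = ndimage.binary_opening(cells, iterations=2)
      confluency = 100.0 * cells.mean()
      print(f"estimated confluency: {confluency:.1f}%")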

  18. Precision automation of cell type classification and sub-cellular fluorescence quantification from laser scanning confocal images

    Directory of Open Access Journals (Sweden)

    Hardy Craig Hall

    2016-02-01

    Full Text Available While novel whole-plant phenotyping technologies have been successfully implemented into functional genomics and breeding programs, the potential of automated phenotyping with cellular resolution is largely unexploited. Laser scanning confocal microscopy has the potential to close this gap by providing spatially highly resolved images containing anatomic as well as chemical information on a subcellular basis. However, in the absence of automated methods, the assessment of the spatial patterns and abundance of fluorescent markers with subcellular resolution is still largely qualitative and time-consuming. Recent advances in image acquisition and analysis, coupled with improvements in microprocessor performance, have brought such automated methods within reach, so that information from thousands of cells per image for hundreds of images may be derived in an experimentally convenient time-frame. Here, we present a MATLAB-based analytical pipeline to (1) segment radial plant organs into individual cells, (2) classify cells into cell type categories based upon random forest classification, (3) divide each cell into sub-regions, and (4) quantify fluorescence intensity to a subcellular degree of precision for a separate fluorescence channel. In this research advance, we demonstrate the precision of this analytical process for the relatively complex tissues of Arabidopsis hypocotyls at various stages of development. High speed and robustness make our approach suitable for phenotyping of large collections of stem-like material and other tissue types.

  19. Automated gas bubble imaging at sea floor – a new method of in situ gas flux quantification

    Directory of Open Access Journals (Sweden)

    K. Thomanek

    2010-02-01

    Full Text Available Photo-optical systems are common in marine sciences and have been extensively used in coastal and deep-sea research. However, due to technical limitations in the past, photo images had to be processed manually or semi-automatically. Recent advances in technology have rapidly improved image recording, storage and processing capabilities, which are used in a new concept of automated in situ gas quantification by photo-optical detection. The design of an in situ high-speed image acquisition and automated data processing system ("Bubblemeter") is reported. New strategies have been followed with regard to back-light illumination, bubble extraction, automated image processing and data management. This paper presents the design of the novel method, its validation procedures and calibration experiments. The system will be positioned on and recovered from the sea floor using a remotely operated vehicle (ROV). It is able to measure bubble flux rates up to 10 L/min with a maximum error of 33% for worst-case conditions. The Bubblemeter has been successfully deployed at a water depth of 1023 m at the Makran accretionary prism offshore Pakistan during a research expedition with R/V Meteor in November 2007.

  20. Automated MALDI matrix deposition method with inkjet printing for imaging mass spectrometry.

    Science.gov (United States)

    Baluya, Dodge L; Garrett, Timothy J; Yost, Richard A

    2007-09-01

    Careful matrix deposition on tissue samples for matrix-assisted laser desorption/ionization (MALDI) is critical for producing reproducible analyte ion signals. Traditional methods for matrix deposition are often considered an art rather than a science, with significant sample-to-sample variability. Here we report an automated method for matrix deposition employing a desktop inkjet printer; the printer tray, designed to hold CDs and DVDs, was modified to hold microscope slides. Empty ink cartridges were filled with MALDI matrix solutions, including DHB in methanol/water (70:30) at concentrations up to 40 mg/mL. Various samples (including rat brain tissue sections and standards of small drug molecules) were prepared using three deposition methods (electrospray, airbrush, inkjet). A linear ion trap equipped with an intermediate-pressure MALDI source was used for analyses. Optical microscopic examination showed that matrix crystals were formed evenly across the sample. There was minimal background signal after storing the matrix in the cartridges over a 6-month period. Overall, the mass spectral images gathered from inkjet-printed tissue specimens were of better quality and more reproducible than those from specimens prepared by the electrospray and airbrush methods.

  1. Automated MALDI Matrix Coating System for Multiple Tissue Samples for Imaging Mass Spectrometry

    Science.gov (United States)

    Mounfield, William P.; Garrett, Timothy J.

    2012-03-01

    Uniform matrix deposition on tissue samples for matrix-assisted laser desorption/ionization (MALDI) is key for reproducible analyte ion signals. Current methods often result in nonhomogenous matrix deposition, and take time and effort to produce acceptable ion signals. Here we describe a fully-automated method for matrix deposition using an enclosed spray chamber and spray nozzle for matrix solution delivery. A commercial air-atomizing spray nozzle was modified and combined with solenoid controlled valves and a Programmable Logic Controller (PLC) to control and deliver the matrix solution. A spray chamber was employed to contain the nozzle, sample, and atomized matrix solution stream, and to prevent any interference from outside conditions as well as allow complete control of the sample environment. A gravity cup was filled with MALDI matrix solutions, including DHB in chloroform/methanol (50:50) at concentrations up to 60 mg/mL. Various samples (including rat brain tissue sections) were prepared using two deposition methods (spray chamber, inkjet). A linear ion trap equipped with an intermediate-pressure MALDI source was used for analyses. Optical microscopic examination showed a uniform coating of matrix crystals across the sample. Overall, the mass spectral images gathered from tissues coated using the spray chamber system were of better quality and more reproducible than from tissue specimens prepared by the inkjet deposition method.

  2. Automated midline shift and intracranial pressure estimation based on brain CT images.

    Science.gov (United States)

    Chen, Wenan; Belle, Ashwin; Cockrell, Charles; Ward, Kevin R; Najarian, Kayvan

    2013-04-13

    In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in the evaluation. In the second component, additional features related to ICP are extracted, such as texture information and blood amount from the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians make decisions, so as to recommend for or against invasive ICP monitoring.

  3. Automated foveola localization in retinal 3D-OCT images using structural support vector machine prediction.

    Science.gov (United States)

    Liu, Yu-Ying; Ishikawa, Hiroshi; Chen, Mei; Wollstein, Gadi; Schuman, Joel S; Rehg, James M

    2012-01-01

    We develop an automated method to determine the foveola location in macular 3D-OCT images in either healthy or pathological conditions. Structural Support Vector Machine (S-SVM) is trained to directly predict the location of the foveola, such that the score at the ground truth position is higher than that at any other position by a margin scaling with the associated localization loss. This S-SVM formulation directly minimizes the empirical risk of localization error, and makes efficient use of all available training data. It deals with the localization problem in a more principled way compared to the conventional binary classifier learning that uses zero-one loss and random sampling of negative examples. A total of 170 scans were collected for the experiment. Our method localized 95.1% of testing scans within the anatomical area of the foveola. Our experimental results show that the proposed method can effectively identify the location of the foveola, facilitating diagnosis around this important landmark.

  4. Experimental saltwater intrusion in coastal aquifers using automated image analysis: Applications to homogeneous aquifers

    Science.gov (United States)

    Robinson, G.; Ahmed, Ashraf A.; Hamill, G. A.

    2016-07-01

    This paper presents the applications of a novel methodology to quantify saltwater intrusion parameters in laboratory-scale experiments. The methodology uses an automated image analysis procedure, minimising manual inputs and the subsequent systematic errors that can be introduced. This allowed the quantification of the width of the mixing zone which is difficult to measure in experimental methods that are based on visual observations. Glass beads of different grain sizes were tested for both steady-state and transient conditions. The transient results showed good correlation between experimental and numerical intrusion rates. The experimental intrusion rates revealed that the saltwater wedge reached a steady state condition sooner while receding than advancing. The hydrodynamics of the experimental mixing zone exhibited similar traits; a greater increase in the width of the mixing zone was observed in the receding saltwater wedge, which indicates faster fluid velocities and higher dispersion. The angle of intrusion analysis revealed the formation of a volume of diluted saltwater at the toe position when the saltwater wedge is prompted to recede. In addition, results of different physical repeats of the experiment produced an average coefficient of variation less than 0.18 of the measured toe length and width of the mixing zone.

  5. Automated collection of imaging and phenotypic data to centralized and distributed data repositories

    Directory of Open Access Journals (Sweden)

    Margaret D King

    2014-06-01

    Full Text Available Accurate data collection at the ground level is vital to the integrity of neuroimaging research. Similarly important is the ability to connect and curate data in order to make it meaningful and sharable with other investigators. Collecting data, especially with several different modalities, can be time consuming and expensive. These issues have driven the development of automated collection of neuroimaging and clinical assessment data within COINS (Collaborative Informatics and Neuroimaging Suite). COINS is an end-to-end data management system. It provides a comprehensive platform for data collection, management, secure storage, and flexible data retrieval (Bockholt et al., 2010; Scott et al., 2011). Self Assessment (SA) is an application embedded in the Assessment Manager tool in COINS. It is an innovative tool that allows participants to fill out assessments via the web-based Participant Portal. It eliminates the need for paper collection and data entry by allowing participants to submit their assessments directly to COINS. After a queue has been created for the participant, they can access the Participant Portal via the internet to fill out their assessments. This allows them the flexibility to participate from home, a library, on site, etc. The collected data are stored in a PostgreSQL database at the Mind Research Network behind a firewall to protect sensitive data. An added benefit to using COINS is the ability to collect, store and share imaging data and assessment data with no interaction with outside tools or programs. All study data collected (imaging and assessment) are stored and exported with a participant's unique subject identifier, so there is no need to keep extra spreadsheets or databases to link and keep track of the data. There is a great need for data collection tools that limit human intervention and error. COINS aims to be a leader in database solutions for research studies collecting data from several different modalities

  6. Automated discrimination of dicentric and monocentric chromosomes by machine learning-based image processing.

    Science.gov (United States)

    Li, Yanxin; Knoll, Joan H; Wilkins, Ruth C; Flegal, Farrah N; Rogan, Peter K

    2016-05-01

    Dose from radiation exposure can be estimated from dicentric chromosome (DC) frequencies in metaphase cells of peripheral blood lymphocytes. We automated DC detection by extracting features in Giemsa-stained metaphase chromosome images and classifying objects by machine learning (ML). DC detection involves (i) intensity thresholded segmentation of metaphase objects, (ii) chromosome separation by watershed transformation and elimination of inseparable chromosome clusters, fragments and staining debris using a morphological decision tree filter, (iii) determination of chromosome width and centreline, (iv) derivation of centromere candidates, and (v) distinction of DCs from monocentric chromosomes (MC) by ML. Centromere candidates are inferred from 14 image features input to a Support Vector Machine (SVM). Sixteen features derived from these candidates are then supplied to a Boosting classifier and a second SVM which determines whether a chromosome is either a DC or MC. The SVM was trained with 292 DCs and 3135 MCs, and then tested with cells exposed to either low (1 Gy) or high (2-4 Gy) radiation dose. Results were then compared with those of 3 experts. True positive rates (TPR) and positive predictive values (PPV) were determined for the tuning parameter, σ. At larger σ, PPV decreases and TPR increases. At high dose, for σ = 1.3, TPR = 0.52 and PPV = 0.83, while at σ = 1.6, the TPR = 0.65 and PPV = 0.72. At low dose and σ = 1.3, TPR = 0.67 and PPV = 0.26. The algorithm differentiates DCs from MCs, overlapped chromosomes and other objects with acceptable accuracy over a wide range of radiation exposures.
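
    The reported evaluation metrics can be reproduced from detection counts as follows; the counts in the example are hypothetical, not taken from the study.

      # True positive rate (TPR) and positive predictive value (PPV) from detection
      # counts; the counts below are hypothetical, not the study's results.
      def tpr_ppv(true_pos, false_neg, false_pos):
          tpr = true_pos / (true_pos + false_neg)  # fraction of real DCs detected
          ppv = true_pos / (true_pos + false_pos)  # fraction of detections that are real DCs
          return tpr, ppv

      print(tpr_ppv(true_pos=52, false_neg=48, false_pos=11))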

  7. Quantification of Eosinophilic Granule Protein Deposition in Biopsies of Inflammatory Skin Diseases by Automated Image Analysis of Highly Sensitive Immunostaining

    Directory of Open Access Journals (Sweden)

    Peter Kiehl

    1999-01-01

    Full Text Available Eosinophilic granulocytes are major effector cells in inflammation. Extracellular deposition of toxic eosinophilic granule proteins (EGPs), but not the presence of intact eosinophils, is crucial for their functional effect in situ. As even recent morphometric approaches to quantify the involvement of eosinophils in inflammation have been based only on cell counting, we developed a new method for the cell-independent quantification of EGPs by image analysis of immunostaining. Highly sensitive, automated immunohistochemistry was done on paraffin sections of inflammatory skin diseases with 4 different primary antibodies against EGPs. Image analysis of immunostaining was performed by colour translation, linear combination and automated thresholding. Using strictly standardized protocols, the assay was proven to be specific and accurate concerning segmentation in 8916 fields of 520 sections, well reproducible in repeated measurements and reliable over a 16-week observation time. The method may be valuable for the cell-independent segmentation of immunostaining in other applications as well.
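
    As a rough illustration of the colour translation, linear combination, and automated thresholding steps, the sketch below isolates a stain by a linear combination of RGB channels followed by Otsu thresholding; the channel weights and the file name are assumptions, not the published protocol.

      # Minimal sketch (assumed weights and file name): isolate a stain by a linear
      # combination of RGB channels, then segment it by automated (Otsu) thresholding.
      import numpy as np
      from skimage import filters, io

      img = io.imread("section.png").astype(float) / 255.0  # hypothetical RGB section image
      weights = np.array([1.0, -0.5, -0.5])                 # emphasise red stain over green/blue
      stain = np.clip(img[..., :3] @ weights, 0, None)
      mask = stain > filters.threshold_otsu(stain)          # automated thresholding
      print("stained area fraction: %.3f" % mask.mean())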

  8. Content-based retrieval in videos from laparoscopic surgery

    Science.gov (United States)

    Schoeffmann, Klaus; Beecks, Christian; Lux, Mathias; Uysal, Merih Seran; Seidl, Thomas

    2016-03-01

    In the field of medical endoscopy more and more surgeons are changing over to recording and storing videos of their endoscopic procedures for long-term archival. These endoscopic videos are a good source of information for explanations to patients and follow-up operations. As the endoscope is the "eye of the surgeon", the video shows the same information the surgeon has seen during the operation, and can describe the situation inside the patient much more precisely than an operation report would. Recorded endoscopic videos can also be used for training young surgeons, and in some countries the long-term archival of video recordings from endoscopic procedures is even enforced by law. A major challenge, however, is to efficiently access these very large video archives for later purposes. One problem, for example, is to locate specific images in the videos that show important situations, which are additionally captured as static images during the procedure. This work addresses this problem and focuses on content-based video retrieval in data from laparoscopic surgery. We propose to use feature signatures, which can appropriately and concisely describe the content of laparoscopic images, and show that by using this content descriptor with an appropriate metric, we are able to efficiently perform content-based retrieval in laparoscopic videos. In a dataset with 600 captured static images from 33 hours of recordings, we are able to find the correct video segment for more than 88% of these images.
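
    A minimal sketch of content-based retrieval is shown below, using a simple colour histogram and an L1 distance ranking; the paper itself uses feature signatures with an adaptive metric, which this approximation does not reproduce. File names are hypothetical.

      # Minimal sketch of descriptor-based retrieval: rank stored frames by the L1
      # distance between colour histograms (a crude stand-in for feature signatures).
      import numpy as np
      from skimage import io

      def color_histogram(path, bins=8):
          img = io.imread(path)[..., :3]
          hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                                   range=((0, 256),) * 3)
          hist = hist.ravel()
          return hist / hist.sum()

      def rank_frames(query_path, frame_paths):
          q = color_histogram(query_path)
          dists = [(p, np.abs(q - color_histogram(p)).sum()) for p in frame_paths]
          return sorted(dists, key=lambda t: t[1])  # most similar frames first

      # hypothetical file names:
      # print(rank_frames("captured_still.png", ["frame_0001.png", "frame_0002.png"])[:5])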

  9. Immunohistochemical Ki-67/KL1 double stains increase accuracy of Ki-67 indices in breast cancer and simplify automated image analysis

    DEFF Research Database (Denmark)

    Nielsen, Patricia S; Bentzer, Nina K; Jensen, Vibeke

    2014-01-01

    observers and automated image analysis. RESULTS: Indices were predominantly higher for single stains than for double stains (P≤0.002), yet the difference between observers was statistically significant (P…). The Pearson correlation coefficient for manual and automated indices ranged from 0.69 to 0.85 (P…), correlating automated indices with tumor characteristics, for example, tumor size (P…). … stains, Ki-67 should be quantified on double stains to reach a higher accuracy. Automated indices correlated well with manual estimates and tumor characteristics, and they are thus possibly valuable tools in future exploration of Ki-67 in breast cancer.

  10. Automated quantification and sizing of unbranched filamentous cyanobacteria by model-based object-oriented image analysis.

    Science.gov (United States)

    Zeder, Michael; Van den Wyngaert, Silke; Köster, Oliver; Felder, Kathrin M; Pernthaler, Jakob

    2010-03-01

    Quantification and sizing of filamentous cyanobacteria in environmental samples or cultures are time-consuming and are often performed by using manual or semiautomated microscopic analysis. Automation of conventional image analysis is difficult because filaments may exhibit great variations in length and patchy autofluorescence. Moreover, individual filaments frequently cross each other in microscopic preparations, as deduced by modeling. This paper describes a novel approach based on object-oriented image analysis to simultaneously determine (i) filament number, (ii) individual filament lengths, and (iii) the cumulative filament length of unbranched cyanobacterial morphotypes in fluorescent microscope images in a fully automated high-throughput manner. Special emphasis was placed on correct detection of overlapping objects by image analysis and on appropriate coverage of filament length distribution by using large composite images. The method was validated with a data set for Planktothrix rubescens from field samples and was compared with manual filament tracing, the line intercept method, and the Utermöhl counting approach. The computer program described allows batch processing of large images from any appropriate source and annotation of detected filaments. It requires no user interaction, is available free, and thus might be a useful tool for basic research and drinking water quality control.
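
    A minimal sketch of filament counting and length estimation on a pre-segmented binary image is given below, using labelling and skeletonization; the pixel calibration and file name are assumptions, and the published object-oriented pipeline is considerably more elaborate.

      # Minimal sketch (assumed calibration and file name): count filaments and
      # estimate individual and cumulative lengths by labelling and skeletonization.
      import numpy as np
      from skimage import io, measure, morphology

      PIXEL_SIZE_UM = 0.6                                         # assumed microns per pixel
      mask = io.imread("filaments_mask.png", as_gray=True) > 0.5  # pre-segmented binary image
      labels = measure.label(mask)
      lengths = []
      for region in measure.regionprops(labels):
          skeleton = morphology.skeletonize(region.image)         # 1-pixel-wide centreline
          lengths.append(skeleton.sum() * PIXEL_SIZE_UM)          # crude length estimate

      print("filament count:", len(lengths))
      print("cumulative length (um): %.1f" % np.sum(lengths))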

  11. Development and application of an automated analysis method for individual cerebral perfusion single photon emission tomography images

    CERN Document Server

    Cluckie, A J

    2001-01-01

    Neurological images may be analysed by performing voxel by voxel comparisons with a group of control subject images. An automated, 3D, voxel-based method has been developed for the analysis of individual single photon emission tomography (SPET) scans. Clusters of voxels are identified that represent regions of abnormal radiopharmaceutical uptake. Morphological operators are applied to reduce noise in the clusters, then quantitative estimates of the size and degree of the radiopharmaceutical uptake abnormalities are derived. Statistical inference has been performed using a Monte Carlo method that has not previously been applied to SPET scans, or for the analysis of individual images. This has been validated for group comparisons of SPET scans and for the analysis of an individual image using comparison with a group. Accurate statistical inference was obtained independent of experimental factors such as degrees of freedom, image smoothing and voxel significance level threshold. The analysis method has been eval...
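
    The voxel-by-voxel comparison against a control group can be sketched as a z-score map with morphological noise reduction and cluster labelling, as below; array sizes, the significance threshold, and the synthetic data are illustrative assumptions, and the study's Monte Carlo inference is not reproduced.

      # Illustrative voxel-wise comparison of one scan against a control group:
      # z-score map, voxel threshold, morphological noise reduction, cluster labelling.
      import numpy as np
      from scipy import ndimage

      controls = np.random.default_rng(0).normal(size=(20, 64, 64, 32))  # 20 control scans
      subject = np.random.default_rng(1).normal(size=(64, 64, 32))

      z = (subject - controls.mean(axis=0)) / controls.std(axis=0)
      abnormal = np.abs(z) > 3.0                    # voxel significance threshold
      abnormal = ndimage.binary_opening(abnormal)   # morphological noise reduction
      labelled, n_clusters = ndimage.label(abnormal)
      print("clusters of abnormal uptake:", n_clusters)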

  12. LOCALIZATION OF PALM DORSAL VEIN PATTERN USING IMAGE PROCESSING FOR AUTOMATED INTRA-VENOUS DRUG NEEDLE INSERTION

    Directory of Open Access Journals (Sweden)

    Mrs. Kavitha. R,

    2011-06-01

    Full Text Available Vein pattern in palms is a random mesh of interconnected and intertwining blood vessels. This project is an application of the vein detection concept to automate the drug delivery process. It deals with extracting palm dorsal vein structures, which is a key procedure for selecting the optimal drug needle insertion point. Gray scale images obtained from a low-cost IR-webcam are poor in contrast and usually noisy, which makes effective vein segmentation a great challenge. Here a new vein image segmentation method is introduced, based on enhancement techniques, which resolves the conflict between poor-contrast vein images and good-quality image segmentation. A Gaussian filter is used to remove the high-frequency noise in the image. The ultimate goal is to identify venous bifurcations and determine the insertion point for the needle in between their branches.
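
    A minimal sketch of such a pipeline is shown below: Gaussian denoising, local contrast enhancement, and adaptive thresholding of an IR palm image with OpenCV; the parameter values and file names are assumptions rather than the authors' settings.

      # Minimal sketch (assumed parameters and file names): denoise an IR palm image
      # with a Gaussian filter, enhance local contrast, and segment dark vein pixels.
      import cv2

      img = cv2.imread("palm_ir.png", cv2.IMREAD_GRAYSCALE)
      img = cv2.GaussianBlur(img, (5, 5), 0)                                # remove high-frequency noise
      img = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)  # local contrast enhancement
      veins = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY_INV, 21, 5)           # dark veins -> white mask
      cv2.imwrite("vein_mask.png", veins)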

  13. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    Science.gov (United States)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
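
    The Dice score used for validation can be computed as below; the random volumes are purely for illustration.

      # Dice similarity coefficient between an automated and a manual binary segmentation.
      import numpy as np

      def dice(auto_mask, manual_mask):
          auto_mask, manual_mask = auto_mask.astype(bool), manual_mask.astype(bool)
          intersection = np.logical_and(auto_mask, manual_mask).sum()
          return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum())

      rng = np.random.default_rng(0)
      a, b = rng.random((64, 64, 32)) > 0.5, rng.random((64, 64, 32)) > 0.5
      print("Dice: %.3f" % dice(a, b))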

  14. Using dual-energy x-ray imaging to enhance automated lung tumor tracking during real-time adaptive radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Menten, Martin J., E-mail: martin.menten@icr.ac.uk; Fast, Martin F.; Nill, Simeon; Oelfke, Uwe, E-mail: uwe.oelfke@icr.ac.uk [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom)

    2015-12-15

    Purpose: Real-time, markerless localization of lung tumors with kV imaging is often inhibited by ribs obscuring the tumor and poor soft-tissue contrast. This study investigates the use of dual-energy imaging, which can generate radiographs with reduced bone visibility, to enhance automated lung tumor tracking for real-time adaptive radiotherapy. Methods: kV images of an anthropomorphic breathing chest phantom were experimentally acquired and radiographs of actual lung cancer patients were Monte-Carlo-simulated at three imaging settings: low-energy (70 kVp, 1.5 mAs), high-energy (140 kVp, 2.5 mAs, 1 mm additional tin filtration), and clinical (120 kVp, 0.25 mAs). Regular dual-energy images were calculated by weighted logarithmic subtraction of high- and low-energy images and filter-free dual-energy images were generated from clinical and low-energy radiographs. The weighting factor to calculate the dual-energy images was determined by means of a novel objective score. The usefulness of dual-energy imaging for real-time tracking with an automated template matching algorithm was investigated. Results: Regular dual-energy imaging was able to increase tracking accuracy in left–right images of the anthropomorphic phantom as well as in 7 out of 24 investigated patient cases. Tracking accuracy remained comparable in three cases and decreased in five cases. Filter-free dual-energy imaging was only able to increase accuracy in 2 out of 24 cases. In four cases no change in accuracy was observed and tracking accuracy worsened in nine cases. In 9 out of 24 cases, it was not possible to define a tracking template due to poor soft-tissue contrast regardless of input images. The mean localization errors using clinical, regular dual-energy, and filter-free dual-energy radiographs were 3.85, 3.32, and 5.24 mm, respectively. Tracking success was dependent on tumor position, tumor size, imaging beam angle, and patient size. Conclusions: This study has highlighted the influence of
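
    Weighted logarithmic subtraction of the high- and low-energy radiographs can be sketched as follows; the weighting factor and file names are placeholders, not the objective-score-optimized values from the study.

      # Dual-energy image by weighted logarithmic subtraction of high- and low-energy
      # radiographs; the weight w and file names are placeholders.
      import numpy as np
      from skimage import io

      high = io.imread("high_energy.tif").astype(float) + 1.0  # +1 avoids log(0)
      low = io.imread("low_energy.tif").astype(float) + 1.0
      w = 0.6                                                  # assumed bone-cancellation weight
      dual_energy = np.log(high) - w * np.log(low)             # bone-suppressed image
      print("dual-energy image range: %.2f to %.2f" % (dual_energy.min(), dual_energy.max()))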

  15. Automated registration of freehand B-mode ultrasound and magnetic resonance imaging of the carotid arteries based on geometric features

    DEFF Research Database (Denmark)

    Carvalho, Diego D. B.; Arias Lorza, Andres Mauricio; Niessen, Wiro J.;

    2017-01-01

    An automated method for registering B-mode ultrasound (US) and magnetic resonance imaging (MRI) of the carotid arteries is proposed. The registration uses geometric features, namely, lumen centerlines and lumen segmentations, which are extracted fully automatically from the images after manual...... annotation of three seed points in US and MRI. The registration procedure starts with alignment of the lumen centerlines using a point-based registration algorithm. The resulting rigid transformation is used to initialize a rigid and subsequent non-rigid registration procedure that jointly aligns centerlines...

  16. Automated Image Analysis for the Detection of Benthic Crustaceans and Bacterial Mat Coverage Using the VENUS Undersea Cabled Network

    Directory of Open Access Journals (Sweden)

    Jacopo Aguzzi

    2011-11-01

    Full Text Available The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. In the case of bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same time frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting incorporated the combination of both filtering and background correction (Median- and Top-Hat Filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classifications were carried out with the tools of morphometric multivariate statistic (i.e., Partial Least Square Discriminant Analysis; PLSDA) on Mean RGB (RGBv) value for each object and Fourier Descriptors (RGBv+FD) matrices plus SIFT and ED. The SIFT approach returned the better results. Higher percentages of images were correctly classified and lower misclassification errors (an animal is present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for non-present animals. Bacterial mat coverage was estimated in terms of Percent

  17. Automated image analysis for the detection of benthic crustaceans and bacterial mat coverage using the VENUS undersea cabled network.

    Science.gov (United States)

    Aguzzi, Jacopo; Costa, Corrado; Robert, Katleen; Matabos, Marjolaine; Antonucci, Francesca; Juniper, S Kim; Menesatti, Paolo

    2011-01-01

    The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. In the case of bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same time frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting incorporated the combination of both filtering and background correction (Median- and Top-Hat Filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classifications were carried out with the tools of morphometric multivariate statistic (i.e., Partial Least Square Discriminant Analysis; PLSDA) on Mean RGB (RGBv) value for each object and Fourier Descriptors (RGBv+FD) matrices plus SIFT and ED. The SIFT approach returned the better results. Higher percentages of images were correctly classified and lower misclassification errors (an animal is present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for non-present animals. Bacterial mat coverage was estimated in terms of Percent Coverage

  18. Application of automated methodologies based on digital images for phenological behaviour analysis in Mediterranean species

    Science.gov (United States)

    Cesaraccio, Carla; Piga, Alessandra; Ventura, Andrea; Arca, Angelo; Duce, Pierpaolo; Granados, Joel

    2015-04-01

    The importance of phenological research for understanding the consequences of global environmental change on vegetation is highlighted in the most recent IPCC reports. Collecting time series of phenological events appears to be of crucial importance to better understand how vegetation systems respond to climatic regime fluctuations and, consequently, to develop effective management and adaptation strategies. Vegetation monitoring based on "near-surface" remote sensing techniques has been proposed in recent research. In particular, the use of digital cameras has become more common for phenological monitoring. Digital images provide spectral information in the red, green, and blue (RGB) wavelengths. Inflection points in seasonal variations of the intensities of each color channel can be used to identify phenological events. In this research, an Automated Phenological Observation System (APOS), based on digital image sensors, was used for monitoring the phenological behavior of shrubland species at a Mediterranean site. Major species of the shrubland ecosystem that were analyzed were: Cistus monspeliensis L., Cistus incanus L., Rosmarinus officinalis L., Pistacia lentiscus L., and Pinus halepensis Mill. The system was developed under the INCREASE (an Integrated Network on Climate Change Research) EU-funded research infrastructure project, which is based upon large-scale field experiments with non-intrusive climatic manipulations. Monitoring of phenological behavior was conducted during the years 2012-2014. To retrieve phenological information from the digital images, a routine of commands to process the image files using MATLAB (R2013b, The MathWorks, Natick, Mass.) was specifically created. The images of the dataset were re-classified and renamed according to the date and time of acquisition. The analysis was focused on regions of interest (ROIs) of the acquired panoramas, defined by the presence of the most representative species of
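
    Extraction of per-channel intensities for a region of interest, from which such inflection points are derived, can be sketched as below; the file pattern, ROI coordinates, and use of the green chromatic coordinate are illustrative assumptions rather than the authors' MATLAB routine.

      # Mean RGB intensities and green chromatic coordinate (GCC) for one region of
      # interest across a time series of camera images; paths and ROI are placeholders.
      import glob
      import numpy as np
      from skimage import io

      roi = (slice(200, 400), slice(300, 600))  # rows, cols of one shrub ROI
      for path in sorted(glob.glob("apos_images/*.jpg")):
          img = io.imread(path).astype(float)
          r, g, b = (img[roi + (i,)].mean() for i in range(3))
          gcc = g / (r + g + b)                 # green chromatic coordinate
          print(path, "mean RGB = (%.1f, %.1f, %.1f), GCC = %.3f" % (r, g, b, gcc))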

  19. A Novel Automated High-Content Analysis Workflow Capturing Cell Population Dynamics from Induced Pluripotent Stem Cell Live Imaging Data

    Science.gov (United States)

    Kerz, Maximilian; Folarin, Amos; Meleckyte, Ruta; Watt, Fiona M.; Dobson, Richard J.; Danovi, Davide

    2016-01-01

    Most image analysis pipelines rely on multiple channels per image with subcellular reference points for cell segmentation. Single-channel phase-contrast images are often problematic, especially for cells with unfavorable morphology, such as induced pluripotent stem cells (iPSCs). Live imaging poses a further challenge, because of the introduction of the dimension of time. Evaluations cannot be easily integrated with other biological data sets including analysis of endpoint images. Here, we present a workflow that incorporates a novel CellProfiler-based image analysis pipeline enabling segmentation of single-channel images with a robust R-based software solution to reduce the dimension of time to a single data point. These two packages combined allow robust segmentation of iPSCs solely on phase-contrast single-channel images and enable live imaging data to be easily integrated to endpoint data sets while retaining the dynamics of cellular responses. The described workflow facilitates characterization of the response of live-imaged iPSCs to external stimuli and definition of cell line–specific, phenotypic signatures. We present an efficient tool set for automated high-content analysis suitable for cells with challenging morphology. This approach has potentially widespread applications for human pluripotent stem cells and other cell types. PMID:27256155

  20. An Improved Method for Measuring Quantitative Resistance to the Wheat Pathogen Zymoseptoria tritici Using High-Throughput Automated Image Analysis.

    Science.gov (United States)

    Stewart, Ethan L; Hagerty, Christina H; Mikaberidze, Alexey; Mundt, Christopher C; Zhong, Ziming; McDonald, Bruce A

    2016-07-01

    Zymoseptoria tritici causes Septoria tritici blotch (STB) on wheat. An improved method of quantifying STB symptoms was developed based on automated analysis of diseased leaf images made using a flatbed scanner. Naturally infected leaves (n = 949) sampled from fungicide-treated field plots comprising 39 wheat cultivars grown in Switzerland and 9 recombinant inbred lines (RIL) grown in Oregon were included in these analyses. Measures of quantitative resistance were percent leaf area covered by lesions, pycnidia size and gray value, and pycnidia density per leaf and lesion. These measures were obtained automatically with a batch-processing macro utilizing the image-processing software ImageJ. All phenotypes in both locations showed a continuous distribution, as expected for a quantitative trait. The trait distributions at both sites were largely overlapping even though the field and host environments were quite different. Cultivars and RILs could be assigned to two or more statistically different groups for each measured phenotype. Traditional visual assessments of field resistance were highly correlated with quantitative resistance measures based on image analysis for the Oregon RILs. These results show that automated image analysis provides a promising tool for assessing quantitative resistance to Z. tritici under field conditions.
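
    A rough sketch of estimating percent leaf area covered by lesions via colour thresholding is given below; the HSV thresholds and file name are assumptions, and the published ImageJ macro additionally measures traits such as pycnidia size and density.

      # Rough estimate of percent leaf area covered by lesions via HSV thresholding;
      # thresholds and file name are assumptions, not the published macro.
      from skimage import color, io

      img = io.imread("leaf_scan.png")[..., :3]
      hsv = color.rgb2hsv(img)
      leaf = hsv[..., 1] > 0.15               # leaf vs. white scanner background
      lesion = leaf & (hsv[..., 0] < 0.12)    # brownish (low-hue) lesion pixels
      print("percent leaf area with lesions: %.1f%%" % (100.0 * lesion.sum() / leaf.sum()))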

  1. Towards an automated analysis of video-microscopy images of fungal morphogenesis

    Directory of Open Access Journals (Sweden)

    Diéguez-Uribeondo, Javier

    2005-06-01

    Full Text Available Fungal morphogenesis is an exciting field of cell biology and several mathematical models have been developed to describe it. These models require experimental evidence to be corroborated and, therefore, there is a continuous search for new microscopy and image analysis techniques. In this work, we have used a Canny-edge-detector based technique to automate the generation of hyphal profiles and the calculation of morphogenetic parameters such as diameter, elongation rates and hyphoid fitness. The results show that the data obtained with this technique are similar to published data generated with manual tracing techniques carried out on the same species or genus. Thus, we show that application of an edge-detector-based technique to hyphal growth represents an efficient and accurate method to study hyphal morphogenesis. This represents the first step towards an automated analysis of video-microscopy images of fungal morphogenesis.
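
    A minimal sketch of Canny-based profile extraction and a crude diameter estimate is given below; the file name and pixel calibration are assumptions, and the published analysis derives further parameters such as elongation rate and hyphoid fitness.

      # Canny-based hyphal outline and a crude apparent-diameter estimate; the file
      # name and calibration factor are assumptions.
      import numpy as np
      from skimage import feature, io

      UM_PER_PIXEL = 0.1
      frame = io.imread("hypha_frame.png", as_gray=True)
      edges = feature.canny(frame, sigma=2.0)                  # hyphal contour
      rows = np.where(edges.any(axis=1))[0]
      widths = [np.ptp(np.where(edges[r])[0]) for r in rows]   # distance between outer edges per row
      print("mean apparent diameter: %.2f um" % (np.mean(widths) * UM_PER_PIXEL))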

  2. Application of Reflectance Transformation Imaging Technique to Improve Automated Edge Detection in a Fossilized Oyster Reef

    Science.gov (United States)

    Djuricic, Ana; Puttonen, Eetu; Harzhauser, Mathias; Dorninger, Peter; Székely, Balázs; Mandic, Oleg; Nothegger, Clemens; Molnár, Gábor; Pfeifer, Norbert

    2016-04-01

    The world's largest fossilized oyster reef is located in Stetten, Lower Austria, and was excavated during field campaigns of the Natural History Museum Vienna between 2005 and 2008. It is studied in paleontology to learn about past changes in climate. In order to support this study, a laser scanning and photogrammetric campaign was organized in 2014 for 3D documentation of the large and complex site. The 3D point clouds and high-resolution images from this field campaign are visualized by photogrammetric methods in the form of digital surface models (DSM, 1 mm resolution) and orthophotos (0.5 mm resolution) to help paleontological interpretation of the data. Due to the size of the reef, automated analysis techniques are needed to interpret all digital data obtained from the field. One of the key components in successful automation is the detection of oyster shell edges. We have tested Reflectance Transformation Imaging (RTI) to visualize the reef data sets for end-users through a cultural heritage viewing interface (RTIViewer). The implementation includes a Lambert shading method to visualize DSMs derived from terrestrial laser scanning using the scientific software OPALS. In contrast to classical RTI, no devices are needed that consist of a hardware system with LED lights or a body to rotate the light source around the object. The gray value of a given shaded pixel is related to the angle between the light source and the surface normal at that position. Brighter values correspond to slope surfaces facing the light source. Increasing the zenith angle results in internal shading all over the reef surface. In total, the oyster reef surface comprises 81 DSMs of 3 m x 2 m each. Their surface was illuminated by moving the virtual sun every 30 degrees (12 azimuth angles from 20-350) and every 20 degrees (4 zenith angles from 20-80). This technique provides paleontologists with an interactive approach to virtually inspect the oyster reef, and to interpret the shell surface by changing the light source direction
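
    The Lambert shading described above (pixel gray value tied to the angle between the light direction and the surface normal) can be sketched as follows; the DSM file, cell size, and sun angles are placeholders.

      # Lambertian shading of a digital surface model: pixel brightness is the cosine
      # of the angle between the light direction and the surface normal.
      import numpy as np

      def lambert_shade(dsm, azimuth_deg, zenith_deg, cell_size=0.001):
          az, zen = np.radians(azimuth_deg), np.radians(zenith_deg)
          light = np.array([np.sin(zen) * np.sin(az), np.sin(zen) * np.cos(az), np.cos(zen)])
          dz_dy, dz_dx = np.gradient(dsm, cell_size)
          normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(dsm)])
          normals /= np.linalg.norm(normals, axis=2, keepdims=True)
          return np.clip(normals @ light, 0.0, 1.0)  # brighter where the slope faces the light

      dsm = np.load("oyster_dsm_tile.npy")           # hypothetical 1 mm resolution tile
      shaded = lambert_shade(dsm, azimuth_deg=110, zenith_deg=40)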

  3. A rapid and automated relocation method of an AFM probe for high-resolution imaging

    Science.gov (United States)

    Zhou, Peilin; Yu, Haibo; Shi, Jialin; Jiao, Niandong; Wang, Zhidong; Wang, Yuechao; Liu, Lianqing

    2016-09-01

    The atomic force microscope (AFM) is one of the most powerful tools for high-resolution imaging and high-precision positioning for nanomanipulation. The selection of the scanning area of the AFM depends on the use of the optical microscope. However, the resolution of an optical microscope is generally limited to about 200 nm owing to wavelength limitations of visible light. Taking into consideration the two determinants of relocation, namely the relative angular rotation and positional offset between the AFM probe and the nano target, it is therefore extremely challenging to precisely relocate the AFM probe to the initial scan/manipulation area for the same nano target after the AFM probe has been replaced, or after the sample has been moved. In this paper, we investigate a rapid automated relocation method for the nano target of an AFM using a coordinate transformation. The relocation process is both simple and rapid; moreover, multiple nano targets can be relocated by identifying only a pair of reference points. It possesses a centimeter-scale location range and nano-scale precision. The main advantages of this method are that it overcomes the limitations associated with the resolution of optical microscopes, and that it is label-free on the target areas, which means that it does not require the use of special artificial markers on the target sample areas. Relocation experiments using nanospheres, DNA, SWCNTs, and nano patterns amply demonstrate the practicality and efficiency of the proposed method, which provides technical support for mass nanomanipulation and detection based on AFM for multiple nano targets that are widely distributed in a large area.
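
    The relocation by coordinate transformation from a single pair of reference points can be sketched as a rigid (rotation plus translation) transform, as below; the coordinates are made-up values for illustration only.

      # Rigid (rotation + translation) transform estimated from one pair of reference
      # points measured before and after the move, then applied to a stored target.
      import numpy as np

      def rigid_transform_from_pair(p_old, q_old, p_new, q_new):
          a_old = np.arctan2(*(np.subtract(q_old, p_old)[::-1]))
          a_new = np.arctan2(*(np.subtract(q_new, p_new)[::-1]))
          theta = a_new - a_old
          R = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
          t = np.asarray(p_new) - R @ np.asarray(p_old)
          return R, t

      R, t = rigid_transform_from_pair((0, 0), (10, 0), (2, 3), (11.93, 4.74))
      target_old = np.array([5.0, 2.0])  # stored nano-target position in old coordinates
      print("relocated target:", R @ target_old + t)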

  4. Automated integer programming based separation of arteries and veins from thoracic CT images.

    Science.gov (United States)

    Payer, Christian; Pienn, Michael; Bálint, Zoltán; Shekhovtsov, Alexander; Talakic, Emina; Nagy, Eszter; Olschewski, Andrea; Olschewski, Horst; Urschler, Martin

    2016-12-01

    Automated computer-aided analysis of lung vessels has shown to yield promising results for non-invasive diagnosis of lung diseases. To detect vascular changes which affect pulmonary arteries and veins differently, both compartments need to be identified. We present a novel, fully automatic method that separates arteries and veins in thoracic computed tomography images, by combining local as well as global properties of pulmonary vessels. We split the problem into two parts: the extraction of multiple distinct vessel subtrees, and their subsequent labeling into arteries and veins. Subtree extraction is performed with an integer program (IP), based on local vessel geometry. As naively solving this IP is time-consuming, we show how to drastically reduce computational effort by reformulating it as a Markov Random Field. Afterwards, each subtree is labeled as either arterial or venous by a second IP, using two anatomical properties of pulmonary vessels: the uniform distribution of arteries and veins, and the parallel configuration and close proximity of arteries and bronchi. We evaluate algorithm performance by comparing the results with 25 voxel-based manual reference segmentations. On this dataset, we show good performance of the subtree extraction, consisting of very few non-vascular structures (median value: 0.9%) and merged subtrees (median value: 0.6%). The resulting separation of arteries and veins achieves a median voxel-based overlap of 96.3% with the manual reference segmentations, outperforming a state-of-the-art interactive method. In conclusion, our novel approach provides an opportunity to become an integral part of computer aided pulmonary diagnosis, where artery/vein separation is important.

  5. Three-Dimensional Reconstruction of the Bony Nasolacrimal Canal by Automated Segmentation of Computed Tomography Images.

    Directory of Open Access Journals (Sweden)

    Lucia Jañez-Garcia

    Full Text Available To apply a fully automated method to quantify the 3D structure of the bony nasolacrimal canal (NLC) from CT scans, whereby the size and main morphometric characteristics of the canal can be determined. Cross-sectional study. 36 eyes of 18 healthy individuals. Using software designed to detect the boundaries of the NLC on CT images, 36 NLC reconstructions were prepared. These reconstructions were then used to calculate NLC volume. The NLC axis in each case was determined according to a polygonal model and to 2nd, 3rd and 4th degree polynomials. From these models, NLC sectional areas and length were determined. For each variable, descriptive statistics and normality tests (Kolmogorov-Smirnov and Shapiro-Wilk) were established. Time for segmentation, NLC volume, axis, sectional areas and length. Mean processing time was around 30 seconds for segmenting each canal. All the variables generated were normally distributed. Measurements obtained using the four models (polygonal and 2nd, 3rd and 4th degree polynomial), respectively, were: mean canal length 14.74, 14.3, 14.80, and 15.03 mm; mean sectional area 15.15, 11.77, 11.43, and 11.56 mm2; minimum sectional area 8.69, 7.62, 7.40, and 7.19 mm2; and mean depth of minimum sectional area (craniocaudal) 7.85, 7.71, 8.19, and 8.08 mm. The method proposed automatically reconstructs the NLC from CT scans. Using these reconstructions, morphometric measurements can be calculated from NLC axis estimates based on polygonal and 2nd, 3rd and 4th polynomial models.
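
    Fitting polynomial axes to section centroids and measuring length along them can be sketched as below; the centroid data are synthetic, and the actual method works on segmented 3D reconstructions.

      # Polynomial canal-axis fit to cross-section centroids and length along the
      # fitted axis; centroid data are synthetic placeholders.
      import numpy as np

      z = np.linspace(0.0, 15.0, 40)          # craniocaudal position (mm)
      cx = 0.02 * z**2 + 0.1 * z + 5.0        # hypothetical centroid x (mm)
      cy = -0.01 * z**2 + 0.3 * z + 2.0       # hypothetical centroid y (mm)

      def axis_length(z, cx, cy, degree):
          px, py = np.polyfit(z, cx, degree), np.polyfit(z, cy, degree)
          xs, ys = np.polyval(px, z), np.polyval(py, z)
          steps = np.sqrt(np.diff(xs) ** 2 + np.diff(ys) ** 2 + np.diff(z) ** 2)
          return steps.sum()

      for degree in (2, 3, 4):
          print("degree %d axis length: %.2f mm" % (degree, axis_length(z, cx, cy, degree)))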

  6. A rapid and automated relocation method of an AFM probe for high-resolution imaging.

    Science.gov (United States)

    Zhou, Peilin; Yu, Haibo; Shi, Jialin; Jiao, Niandong; Wang, Zhidong; Wang, Yuechao; Liu, Lianqing

    2016-09-30

    The atomic force microscope (AFM) is one of the most powerful tools for high-resolution imaging and high-precision positioning for nanomanipulation. The selection of the scanning area of the AFM depends on the use of the optical microscope. However, the resolution of an optical microscope is generally limited to about 200 nm owing to wavelength limitations of visible light. Taking into consideration the two determinants of relocation, namely the relative angular rotation and positional offset between the AFM probe and the nano target, it is therefore extremely challenging to precisely relocate the AFM probe to the initial scan/manipulation area for the same nano target after the AFM probe has been replaced, or after the sample has been moved. In this paper, we investigate a rapid automated relocation method for the nano target of an AFM using a coordinate transformation. The relocation process is both simple and rapid; moreover, multiple nano targets can be relocated by identifying only a pair of reference points. It possesses a centimeter-scale location range and nano-scale precision. The main advantages of this method are that it overcomes the limitations associated with the resolution of optical microscopes, and that it is label-free on the target areas, which means that it does not require the use of special artificial markers on the target sample areas. Relocation experiments using nanospheres, DNA, SWCNTs, and nano patterns amply demonstrate the practicality and efficiency of the proposed method, which provides technical support for mass nanomanipulation and detection based on AFM for multiple nano targets that are widely distributed in a large area.

  7. Automated Thermal Image Processing for Detection and Classification of Birds and Bats - FY2012 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Duberstein, Corey A.; Matzner, Shari; Cullinan, Valerie I.; Virden, Daniel J.; Myers, Joshua R.; Maxwell, Adam R.

    2012-09-01

    Surveying wildlife at risk from offshore wind energy development is difficult and expensive. Infrared video can be used to record birds and bats that pass through the camera view, but it is also time consuming and expensive to review video and determine what was recorded. We proposed to conduct algorithm and software development to identify and to differentiate thermally detected targets of interest that would allow automated processing of thermal image data to enumerate birds, bats, and insects. During FY2012 we developed computer code within MATLAB to identify objects recorded in video and extract attribute information that describes the objects recorded. We tested the efficiency of track identification using observer-based counts of tracks within segments of sample video. We examined object attributes, modeled the effects of random variability on attributes, and produced data smoothing techniques to limit random variation within attribute data. We also began drafting and testing methodology to identify objects recorded on video. We also recorded approximately 10 hours of infrared video of various marine birds, passerine birds, and bats near the Pacific Northwest National Laboratory (PNNL) Marine Sciences Laboratory (MSL) at Sequim, Washington. A total of 6 hours of bird video was captured overlooking Sequim Bay over a series of weeks. An additional 2 hours of video of birds was also captured during two weeks overlooking Dungeness Bay within the Strait of Juan de Fuca. Bats and passerine birds (swallows) were also recorded at dusk on the MSL campus during nine evenings. An observer noted the identity of objects viewed through the camera concurrently with recording. These video files will provide the information necessary to produce and test software developed during FY2013. The annotation will also form the basis for creation of a method to reliably identify recorded objects.

  8. LeafJ: an ImageJ plugin for semi-automated leaf shape measurement.

    Science.gov (United States)

    Maloof, Julin N; Nozue, Kazunari; Mumbach, Maxwell R; Palmer, Christine M

    2013-01-21

    High throughput phenotyping (phenomics) is a powerful tool for linking genes to their functions (see review and recent examples). Leaves are the primary photosynthetic organ, and their size and shape vary developmentally and environmentally within a plant. For these reasons, studies on leaf morphology require measurement of multiple parameters from numerous leaves, which is best done by semi-automated phenomics tools. Canopy shade is an important environmental cue that affects plant architecture and life history; the suite of responses is collectively called the shade avoidance syndrome (SAS). Among SAS responses, shade-induced leaf petiole elongation and changes in blade area are particularly useful as indices. To date, leaf shape programs (e.g. SHAPE, LAMINA, LeafAnalyzer, LEAFPROCESSOR) can measure leaf outlines and categorize leaf shapes, but cannot output petiole length. The lack of a large-scale measurement system for leaf petioles has inhibited phenomics approaches to SAS research. In this paper, we describe a newly developed ImageJ plugin, called LeafJ, which can rapidly measure petiole length and leaf blade parameters of the model plant Arabidopsis thaliana. For the occasional leaf that required manual correction of the petiole/leaf blade boundary we used a touch-screen tablet. Further, leaf cell shape and leaf cell numbers are important determinants of leaf size. Separately from LeafJ, we also present a protocol for using a touch-screen tablet for measuring cell shape, area, and size. Our leaf trait measurement system is not limited to shade-avoidance research and will accelerate leaf phenotyping of many mutants and screening of plants by leaf phenotyping.

  9. SU-C-304-04: A Compact Modular Computational Platform for Automated On-Board Imager Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    Dolly, S [Washington University School of Medicine, Saint Louis, MO (United States); University of Missouri, Columbia, MO (United States); Cai, B; Chen, H; Anastasio, M; Sun, B; Yaddanapudi, S; Noel, C; Goddu, S; Mutic, S; Li, H [Washington University School of Medicine, Saint Louis, MO (United States); Tan, J [UTSouthwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: Traditionally, the assessment of X-ray tube output and detector positioning accuracy of on-board imagers (OBI) has been performed manually and subjectively with rulers and dosimeters, and typically takes hours to complete. In this study, we have designed a compact modular computational platform to automatically analyze OBI images acquired with in-house designed phantoms as an efficient and robust surrogate. Methods: The platform was developed as an integrated and automated image analysis-based platform using MATLAB for easy modification and maintenance. Given a set of images acquired with the in-house designed phantoms, the X-ray output accuracy was examined via cross-validation of the uniqueness and integration minimization of important image quality assessment metrics, while machine geometric and positioning accuracy were validated by utilizing pattern-recognition based image analysis techniques. Results: The platform input was a set of images of an in-house designed phantom. The total processing time is about 1–2 minutes. Based on the data acquired from three Varian Truebeam machines over the course of 3 months, the designed test validation strategy achieved higher accuracy than traditional methods. The kVp output accuracy can be verified within +/−2 kVp, the exposure accuracy within 2%, and exposure linearity with a coefficient of variation (CV) of 0.1. Sub-millimeter position accuracy was achieved for the lateral and longitudinal positioning tests, while vertical positioning accuracy within +/−2 mm was achieved. Conclusion: This new platform delivers to the radiotherapy field an automated, efficient, and stable image analysis-based procedure, for the first time, acting as a surrogate for traditional tests for LINAC OBI systems. It has great potential to facilitate OBI quality assurance (QA) with the assistance of advanced image processing techniques. In addition, it provides flexible integration of additional tests for expediting other OBI

  10. Prediction method of single wheat grain protein content based on hyperspectral image

    Institute of Scientific and Technical Information of China (English)

    吴静珠; 刘倩; 陈岩; 刘翠玲

    2016-01-01

    Wheat protein content has high heritability, so fine-quality breeding can be achieved by selecting high-protein wheat seeds. Hyperspectral imaging combined with chemometric methods was used to build an average model to achieve fast prediction of single wheat seed protein content. In the experiment, hyperspectral images of 47 wheat seed samples (100 grains each) were collected with the GaiaChem-NIR system, and the average spectra were obtained by image processing. Then, synergy interval partial least squares was applied to select the characteristic spectral regions and optimize the prediction model of wheat seed protein content. The optimal model's determination coefficient is 0.94, the root mean square error of prediction is 0.28%, and the residual predictive deviation (RPD) is 3.30. Finally, the average model was applied to predict the protein content at each pixel of a single wheat seed, and the mean of these predictions was taken as the single wheat grain protein content. The experimental results showed that the protein contents predicted by the optimal model differed between grains, while the predicted values varied around the average protein content of the corresponding sample, indicating that the average model is accurate and feasible for predicting single wheat grain protein content. Therefore, the studied method provides a new way to select high-protein wheat seeds in the breeding process, which can promote the development of wheat fine-quality breeding.
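
    A simplified illustration of spectra-to-protein regression is given below using ordinary partial least squares; the synergy interval variable selection of the study is not reproduced, and the spectra and reference values are synthetic.

      # Partial least squares regression from averaged spectra to protein content with
      # RMSEP on a held-out split; spectra and reference values are synthetic.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(47, 256))                              # averaged spectra (47 samples, 256 bands)
      y = 11.0 + 0.8 * X[:, 60] + rng.normal(scale=0.2, size=47)  # synthetic protein content (%)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      pls = PLSRegression(n_components=6).fit(X_tr, y_tr)
      rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
      print("RMSEP: %.2f%% protein" % rmsep)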

  11. Automation of a high-speed imaging setup for differential viscosity measurements

    Science.gov (United States)

    Hurth, C.; Duane, B.; Whitfield, D.; Smith, S.; Nordquist, A.; Zenhausern, F.

    2013-12-01

    We present the automation of a setup previously used to assess the viscosity of pleural effusion samples and discriminate between transudates and exudates, an important first step in clinical diagnostics. The presented automation includes the design, testing, and characterization of a vacuum-actuated loading station that handles the 2 mm glass spheres used as sensors, as well as the engineering of an electronic Printed Circuit Board (PCB) incorporating a microcontroller and their synchronization with a commercial high-speed camera operating at 10 000 fps. The present work therefore focuses on the instrumentation-related automation efforts, as the general method and clinical application have been reported earlier [Hurth et al., J. Appl. Phys. 110, 034701 (2011)]. In addition, we validate the performance of the automated setup with the calibration for viscosity measurements using water/glycerol standard solutions and the determination of the viscosity of an "unknown" solution of hydroxyethyl cellulose.

  12. Automation of a high-speed imaging setup for differential viscosity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hurth, C.; Duane, B.; Whitfield, D.; Smith, S.; Nordquist, A.; Zenhausern, F. [Center for Applied Nanobioscience and Medicine, The University of Arizona College of Medicine, 425 N 5th Street, Phoenix, Arizona 85004 (United States)

    2013-12-28

    We present the automation of a setup previously used to assess the viscosity of pleural effusion samples and discriminate between transudates and exudates, an important first step in clinical diagnostics. The presented automation includes the design, testing, and characterization of a vacuum-actuated loading station that handles the 2 mm glass spheres used as sensors, as well as the engineering of an electronic Printed Circuit Board (PCB) incorporating a microcontroller and their synchronization with a commercial high-speed camera operating at 10 000 fps. The present work therefore focuses on the instrumentation-related automation efforts, as the general method and clinical application have been reported earlier [Hurth et al., J. Appl. Phys. 110, 034701 (2011)]. In addition, we validate the performance of the automated setup with the calibration for viscosity measurements using water/glycerol standard solutions and the determination of the viscosity of an “unknown” solution of hydroxyethyl cellulose.

  13. Optimized and Automated Radiosynthesis of [18F]DHMT for Translational Imaging of Reactive Oxygen Species with Positron Emission Tomography

    Directory of Open Access Journals (Sweden)

    Wenjie Zhang

    2016-12-01

    Full Text Available Reactive oxygen species (ROS) play important roles in cell signaling and homeostasis. However, an abnormally high level of ROS is toxic, and is implicated in a number of diseases. Positron emission tomography (PET) imaging of ROS can assist in the detection of these diseases. For the purpose of clinical translation of [18F]6-(4-((1-(2-fluoroethyl)-1H-1,2,3-triazol-4-yl)methoxy)phenyl)-5-methyl-5,6-dihydrophenanthridine-3,8-diamine ([18F]DHMT), a promising ROS PET radiotracer, we first manually optimized the large-scale radiosynthesis conditions and then implemented them in an automated synthesis module. Our manual synthesis procedure afforded [18F]DHMT in 120 min with overall radiochemical yield (RCY) of 31.6% ± 9.3% (n = 2, decay-uncorrected) and specific activity of 426 ± 272 GBq/µmol (n = 2). Fully automated radiosynthesis of [18F]DHMT was achieved within 77 min with overall isolated RCY of 6.9% ± 2.8% (n = 7, decay-uncorrected) and specific activity of 155 ± 153 GBq/µmol (n = 7) at the end of synthesis. This study is the first demonstration of producing 2-[18F]fluoroethyl azide by an automated module, which can be used for a variety of PET tracers through click chemistry. It is also the first time that [18F]DHMT was successfully tested for PET imaging in a healthy beagle dog.

  14. Simplified automated image analysis for detection and phenotyping of Mycobacterium tuberculosis on porous supports by monitoring growing microcolonies.

    Directory of Open Access Journals (Sweden)

    Alice L den Hertog

    Full Text Available BACKGROUND: Even with the advent of nucleic acid (NA) amplification technologies, the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably, microscopic observed drug susceptibility testing (MODS), as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high-burden settings. METHODS: Here we explore the growth of Mycobacterium tuberculosis microcolonies, imaged by automated digital microscopy, cultured on porous aluminium oxide (PAO) supports. Repeated imaging during colony growth greatly simplifies "computer vision", and presumptive identification of microcolonies was achieved here using existing publicly available algorithms. Our system thus allows the growth of individual microcolonies to be monitored and, critically, also allows the media to be changed during the growth phase without disrupting the microcolonies. Transfer of identified microcolonies onto selective media allowed us, within 1-2 bacterial generations, to rapidly detect the drug susceptibility of individual microcolonies, eliminating the need for time-consuming subculturing or the inoculation of multiple parallel cultures. SIGNIFICANCE: Monitoring the phenotype of individual microcolonies as they grow has immense potential for research, screening, and ultimately M. tuberculosis diagnostic applications. The method described is particularly appealing with respect to speed and automation.