WorldWideScience

Sample records for content-based image analysis

  1. AN INTELLIGENT CONTENT BASED IMAGE RETRIEVAL SYSTEM FOR MAMMOGRAM IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. VAIDEHI

    2015-11-01

    An automated segmentation method is proposed which dynamically selects the parenchymal region of interest (ROI) based on the patient's breast size, and from which statistical features are derived. An SVM classifier is used to model the derived features to classify the breast tissue as dense, glandular, or fatty. Then k-NN with different distance metrics, namely city-block, Euclidean, and Chebyshev, is used to retrieve the first k similar images closest to the given query image. The proposed method was tested on the MIAS database and achieves an average precision of 86.15%. The results reveal that the proposed method could be employed for effective content-based mammogram retrieval.
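
The k-NN retrieval step described in this record can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration in numpy (the SVM stage, the statistical features, and the MIAS data are not reproduced); only the three distance metrics named in the abstract are implemented.

```python
import numpy as np

def knn_retrieve(query, database, k=3, metric="euclidean"):
    """Indices of the k database rows closest to the query vector."""
    diff = database - query
    if metric == "cityblock":        # L1 (Manhattan) distance
        d = np.abs(diff).sum(axis=1)
    elif metric == "euclidean":      # L2 distance
        d = np.sqrt((diff ** 2).sum(axis=1))
    elif metric == "chebyshev":      # L-infinity distance
        d = np.abs(diff).max(axis=1)
    else:
        raise ValueError(f"unknown metric: {metric}")
    return np.argsort(d)[:k]
```

Each database row is a feature vector for one image; the function returns the indices of the k nearest images under the chosen metric.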

  2. Metadata for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Adrian Sterca

    2010-12-01

    This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.
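
The combination described above (a fast metadata filter followed by content-based ranking of the survivors) can be sketched roughly as follows; the record layout and the tag-matching rule are illustrative assumptions, not the authors' design.

```python
import numpy as np

def hybrid_search(query_vec, query_tags, records, k=3):
    """Metadata filter first (fast), then content-based ranking of the survivors.

    records: list of (image_id, tag_set, feature_vector) tuples.
    """
    candidates = [r for r in records if query_tags & r[1]]  # metadata stage
    if not candidates:
        candidates = records                                # fall back to full scan
    ranked = sorted(candidates,
                    key=lambda r: float(np.linalg.norm(query_vec - r[2])))
    return [r[0] for r in ranked[:k]]
```

The metadata stage prunes the candidate set cheaply, so the (expensive) content-based distance is computed only for the images that pass the filter.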

  3. Material Recognition for Content Based Image Retrieval

    NARCIS (Netherlands)

    Geusebroek, J.M.

    2002-01-01

    One of the open problems in content-based Image Retrieval is the recognition of material present in an image. Knowledge about the set of materials present gives important semantic information about the scene under consideration. For example, detecting sand, sky, and water certainly classifies the

  4. Enhancing Image Retrieval System Using Content Based Search ...

    African Journals Online (AJOL)

    ... performing the search on the entire image database, the image category option directs the retrieval engine to the specified category. Also, there is provision to update or modify the different image categories in the image database as need arise. Keywords: Content-based, Multimedia, Search Engine, Image-based, Texture ...

  5. Teleconsultations using content-based retrieval of parametric images.

    Science.gov (United States)

    Ruminski, J

    2004-01-01

    The problem of medical teleconsultations with an intelligent computer system rather than with a human expert is analyzed. A system for content-based retrieval of images is described and presented as a use case of a passive teleconsultation. Selected features crucial for retrieval quality are introduced, including: synthesis of parametric images, region-of-interest detection and extraction, definition of content-based features, generation of descriptors, query algebra, system architecture, and performance. Additionally, an electronic business pattern is proposed to generalize teleconsultation services such as content-based retrieval systems.

  6. Human-Centered Content-Based Image Retrieval

    NARCIS (Netherlands)

    van den Broek, Egon

    2005-01-01

    Retrieval of images that lack a (suitable) annotation cannot be achieved through (traditional) Information Retrieval (IR) techniques. Access to such collections can be achieved through the application of computer vision techniques to the IR problem, which is baptized Content-Based Image

  7. IMAGE RETRIEVAL COLOR, SHAPE AND TEXTURE FEATURES USING CONTENT BASED

    OpenAIRE

    K. NARESH BABU,; SAKE. POTHALAIAH; Dr.K ASHOK BABU

    2010-01-01

    Content-based image retrieval (CBIR) is an important research area for manipulating large image databases and archives. Extraction of invariant features is the basis of CBIR. This paper focuses on the problem of texture, color, and shape feature extraction. Using just one kind of feature information for comparing images may cause more inaccuracy than using several features, so many image retrieval systems use multiple kinds of feature information, such as color, shape, and other features. W...

  8. Content-based histopathology image retrieval using CometCloud.

    Science.gov (United States)

    Qi, Xin; Wang, Daihou; Rodero, Ivan; Diaz-Montes, Javier; Gensure, Rebekah H; Xing, Fuyong; Zhong, Hua; Goodell, Lauri; Parashar, Manish; Foran, David J; Yang, Lin

    2014-08-26

    The development of digital imaging technology is creating extraordinary levels of accuracy that provide support for improved reliability in different aspects of image analysis, such as content-based image retrieval, image segmentation, and classification. This has dramatically increased the volume and rate at which data are generated. Together these facts make querying and sharing non-trivial and render centralized solutions unfeasible. Moreover, in many cases these data are distributed and must be shared across multiple institutions, requiring decentralized solutions. In this context, a new generation of data/information-driven applications must be developed to take advantage of the national advanced cyber-infrastructure (ACI), which enables investigators to seamlessly and securely interact with information/data distributed across geographically disparate resources. This paper presents the development and evaluation of a novel content-based image retrieval (CBIR) framework. The methods were tested extensively using both peripheral blood smears and renal glomeruli specimens. The datasets and performance were evaluated by two pathologists to determine the concordance. The CBIR algorithms that were developed can reliably retrieve the candidate image patches exhibiting intensity and morphological characteristics that are most similar to a given query image. The methods described in this paper are able to reliably discriminate among subtle staining differences and spatial pattern distributions. By integrating a newly developed dual-similarity relevance feedback module into the CBIR framework, the CBIR results were improved substantially. By aggregating the computational power of high performance computing (HPC) and cloud resources, we demonstrated that the method can be successfully executed in minutes on the Cloud compared to weeks using standard computers. In this paper, we present a set of newly developed CBIR algorithms and validate them using two

  9. Human-Centered Content-Based Image Retrieval

    NARCIS (Netherlands)

    van den Broek, Egon; Kok, Thijs; Schouten, Theo E.; Vuurpijl, Louis G.; Rogowitz, Bernice E.; Pappas, Thrasyvoulos N.

    2008-01-01

    A breakthrough is needed in order to achieve a substantial progress in the field of Content-Based Image Retrieval (CBIR). This breakthrough can be enforced by: 1) optimizing user-system interaction, 2) combining the wealth of techniques from text-based Information Retrieval with CBIR techniques, 3)

  10. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval approach based on the Hadoop distributed framework is proposed. First, a database of image features is built using the Speeded Up Robust Features (SURF) algorithm and Locality-Sensitive Hashing, and the search is then performed on the Hadoop platform in a specially designed parallel way. Considerable experimental results show that the approach is able to retrieve images by content effectively on large-scale clusters and image sets.
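
One common way to realize the Locality-Sensitive Hashing stage mentioned above is random-hyperplane hashing; the sketch below is a single-table toy version in numpy (the SURF descriptors and the Hadoop parallelization are assumed to exist elsewhere and are not shown).

```python
import numpy as np

rng = np.random.default_rng(0)
PLANES = rng.normal(size=(16, 64))   # 16 random hyperplanes for 64-d descriptors

def lsh_key(descriptor):
    """Hash a 64-d descriptor (e.g. one SURF vector) to a 16-bit bucket key."""
    return tuple(int(b) for b in (PLANES @ descriptor) > 0)

class LSHIndex:
    """Toy single-table LSH index: similar descriptors tend to share a bucket."""

    def __init__(self):
        self.buckets = {}            # bucket key -> list of image ids

    def insert(self, image_id, descriptor):
        self.buckets.setdefault(lsh_key(descriptor), []).append(image_id)

    def query(self, descriptor):
        return self.buckets.get(lsh_key(descriptor), [])
```

A production system would use several hash tables and verify candidates with exact distances; here a query simply returns whatever landed in the same bucket.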

  11. Content-based analysis and indexing of sports video

    Science.gov (United States)

    Luo, Ming; Bai, Xuesheng; Xu, Guang-you

    2001-12-01

    An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web, the major inhibitors of rapid access to on-line video data are the management of capture and storage, and content-based intelligent search and indexing techniques. This paper proposes an approach for content-based analysis and event-based indexing of sports video. It includes a novel method to organize shots - classifying shots as close shots and far shots, an original idea of blur extent-based event detection, and an innovative local mutation-based algorithm for caption detection and retrieval. Results on extensive real TV programs demonstrate the applicability of our approach.

  12. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Consumer behavior has been observed to be largely influenced by image data with the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has been gradually replaced by easily accessible image data. The importance of image data has portrayed a steady growth in application orientation for the business domain with the advent of different image capturing devices and social media. The paper has described a methodology of feature extraction by an image binarization technique for enhancing identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely, the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in all. It has outclassed the state-of-the-art techniques in performance measure and has shown statistical significance.

  13. Content-based image retrieval for brain MRI: An image-searching engine and population-based analysis to utilize past clinical data for future diagnosis

    Directory of Open Access Journals (Sweden)

    Andreia V. Faria

    2015-01-01

    Radiological diagnosis is based on subjective judgment by radiologists. The reasoning behind this process is difficult to document and share, which is a major obstacle in adopting evidence-based medicine in radiology. We report our attempt to use a comprehensive brain parcellation tool to systematically capture image features and use them to record, search, and evaluate anatomical phenotypes. Anatomical images (T1-weighted MRI) were converted to a standardized index by using a high-dimensional image transformation method followed by atlas-based parcellation of the entire brain. We investigated how the indexed anatomical data captured the anatomical features of healthy controls and a population with Primary Progressive Aphasia (PPA). PPA was chosen because patients have apparent atrophy at different degrees and locations, thus the automated quantitative results can be compared with trained clinicians' qualitative evaluations. We explored and tested the power of individual classifications and of performing a search for images with similar anatomical features in a database using partial least squares-discriminant analysis (PLS-DA) and principal component analysis (PCA). The agreement between the automated z-score and the averaged visual scores for atrophy (r = 0.8) was virtually the same as the inter-evaluator agreement. The PCA plot distribution correlated with the anatomical phenotypes and the PLS-DA resulted in a model with an accuracy of 88% for distinguishing PPA variants. The quantitative indices captured the main anatomical features. The indexing of image data has the potential to be an effective, comprehensive, and easily translatable tool for clinical practice, providing new opportunities to mine clinical databases for medical decision support.
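
The automated atrophy z-score mentioned in this abstract is, at its core, a comparison of a subject's parcel volumes against a control population; a minimal numpy sketch of that idea follows (the atlas-based parcellation itself is far beyond a few lines and is assumed to have already produced the volume vectors).

```python
import numpy as np

def regional_zscores(subject_vols, control_vols):
    """z-score of each brain-parcel volume against a healthy-control sample.

    subject_vols: (P,) parcel volumes for one subject.
    control_vols: (N, P) parcel volumes for N healthy controls.
    """
    mu = control_vols.mean(axis=0)
    sd = control_vols.std(axis=0, ddof=1)   # sample standard deviation
    return (subject_vols - mu) / sd
```

A strongly negative z-score in a parcel then flags volume loss (atrophy) relative to controls, which is what the study correlates with visual ratings.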

  14. Content-based image retrieval for brain MRI: An image-searching engine and population-based analysis to utilize past clinical data for future diagnosis

    Science.gov (United States)

    Faria, Andreia V.; Oishi, Kenichi; Yoshida, Shoko; Hillis, Argye; Miller, Michael I.; Mori, Susumu

    2015-01-01

    Radiological diagnosis is based on subjective judgment by radiologists. The reasoning behind this process is difficult to document and share, which is a major obstacle in adopting evidence-based medicine in radiology. We report our attempt to use a comprehensive brain parcellation tool to systematically capture image features and use them to record, search, and evaluate anatomical phenotypes. Anatomical images (T1-weighted MRI) were converted to a standardized index by using a high-dimensional image transformation method followed by atlas-based parcellation of the entire brain. We investigated how the indexed anatomical data captured the anatomical features of healthy controls and a population with Primary Progressive Aphasia (PPA). PPA was chosen because patients have apparent atrophy at different degrees and locations, thus the automated quantitative results can be compared with trained clinicians' qualitative evaluations. We explored and tested the power of individual classifications and of performing a search for images with similar anatomical features in a database using partial least squares-discriminant analysis (PLS-DA) and principal component analysis (PCA). The agreement between the automated z-score and the averaged visual scores for atrophy (r = 0.8) was virtually the same as the inter-evaluator agreement. The PCA plot distribution correlated with the anatomical phenotypes and the PLS-DA resulted in a model with an accuracy of 88% for distinguishing PPA variants. The quantitative indices captured the main anatomical features. The indexing of image data has a potential to be an effective, comprehensive, and easily translatable tool for clinical practice, providing new opportunities to mine clinical databases for medical decision support. PMID:25685706

  15. The Use of QBIC Content-Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Ching-Yi Wu

    2004-03-01

    The fast increase in digital images has drawn increasing attention to the development of image retrieval technologies. Content-based image retrieval (CBIR) has become an important approach to retrieving image data from a large collection. This article reports our results on the use and user study of a CBIR system. Thirty-eight students majoring in art and design were invited to use IBM's QBIC (Query by Image Content) system through the Internet. Data on their information needs, behaviors, and retrieval strategies were collected through in-depth interviews, observation, and a self-described think-aloud process. Important conclusions are: (1) There are four types of information needs for image data: implicit, inspirational, ever-changing, and purposive. The types of needs may change during the retrieval process. (2) CBIR is suitable for example-type queries, text retrieval is suitable for scenario-type queries, and image browsing is suitable for symbolic queries. (3) Unlike in text retrieval, a detailed description of the query condition may more easily lead to retrieval failure. (4) CBIR is suitable for domain-specific image collections, not for images on the World-Wide Web. [Article content in Chinese]

  16. Toward Content Based Image Retrieval with Deep Convolutional Neural Networks.

    Science.gov (United States)

    Sklan, Judah E S; Plassard, Andrew J; Fabbri, Daniel; Landman, Bennett A

    2015-03-19

    Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep Convolutional Neural Networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing dimensionality of an input scaled to 128×128 to an output encoded layer of 4×384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques.

  17. Content based image retrieval based on wavelet transform coefficients distribution.

    Science.gov (United States)

    Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice

    2007-01-01

    In this paper we propose a content-based image retrieval method for diagnosis aid in medical fields. We characterize images without extracting significant features: signatures are built from the distribution of wavelet transform coefficients. The search is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of the wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet base is proposed. Retrieval efficiency is given for different databases, including a diabetic retinopathy, a mammography, and a face database. Results are promising: the retrieval efficiency is higher than 95% in some cases using an optimization process.
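
A signature built from the distribution of wavelet coefficients, compared with a weighted distance, can be illustrated with a toy one-level Haar transform; this is a hedged sketch of the general idea, not the authors' signature model.

```python
import numpy as np

def haar_detail(signal):
    """Detail coefficients of a one-level 1-D Haar wavelet transform."""
    s = np.asarray(signal, dtype=float)
    return (s[0::2] - s[1::2]) / np.sqrt(2.0)

def wavelet_signature(image, bins=8, value_range=(-4.0, 4.0)):
    """Normalized histogram of Haar detail coefficients taken along each row."""
    details = np.concatenate([haar_detail(row) for row in image])
    hist, _ = np.histogram(details, bins=bins, range=value_range)
    return hist / hist.sum()

def signature_distance(sig_a, sig_b, weights=None):
    """Weighted L1 distance between two signatures."""
    w = np.ones_like(sig_a) if weights is None else weights
    return float(np.sum(w * np.abs(sig_a - sig_b)))
```

Retrieval would then rank database images by `signature_distance` to the query's signature; the bin count and weights here are arbitrary illustrative choices.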

  18. Content Based Image Retrieval based on Wavelet Transform coefficients distribution

    Science.gov (United States)

    Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice

    2007-01-01

    In this paper we propose a content based image retrieval method for diagnosis aid in medical fields. We characterize images without extracting significant features by using distribution of coefficients obtained by building signatures from the distribution of wavelet transform. The research is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet base is proposed. Retrieval efficiency is given for different databases including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% for some cases using an optimization process. PMID:18003013

  19. Content-Based Image Retrieval for Semiconductor Process Characterization

    Directory of Open Access Journals (Sweden)

    Kenneth W. Tobin

    2002-07-01

    Image data management in the semiconductor manufacturing environment is becoming more problematic as the size of silicon wafers continues to increase, while the dimension of critical features continues to shrink. Fabricators rely on a growing host of image-generating inspection tools to monitor complex device manufacturing processes. These inspection tools include optical and laser scattering microscopy, confocal microscopy, scanning electron microscopy, and atomic force microscopy. The number of images being generated is on the order of 20,000 to 30,000 each week in some fabrication facilities today. Manufacturers currently maintain on the order of 500,000 images in their data management systems for extended periods of time. Gleaning the historical value from these large image repositories for yield improvement is difficult to accomplish using the standard database methods currently associated with these data sets (e.g., performing queries based on time and date, lot numbers, wafer identification numbers, etc.). Researchers at the Oak Ridge National Laboratory have developed and tested a content-based image retrieval technology that is specific to manufacturing environments. In this paper, we describe the feature representation of semiconductor defect images along with methods of indexing and retrieval, and results from initial field-testing in the semiconductor manufacturing environment.

  20. Biased discriminant euclidean embedding for content-based image retrieval.

    Science.gov (United States)

    Bian, Wei; Tao, Dacheng

    2010-02-01

    With many potential multimedia applications, content-based image retrieval (CBIR) has recently gained more attention for image management and web search. A wide variety of relevance feedback (RF) algorithms have been developed in recent years to improve the performance of CBIR systems. These RF algorithms capture the user's preferences and bridge the semantic gap. However, there is still considerable room to improve RF performance, because the popular RF algorithms ignore the manifold structure of image low-level visual features. In this paper, we propose the biased discriminative Euclidean embedding (BDEE), which parameterises samples in the original high-dimensional ambient space to discover the intrinsic coordinates of image low-level visual features. BDEE precisely models both the intraclass geometry and interclass discrimination and never suffers from the undersampled problem. To consider unlabelled samples, a manifold regularization-based term is introduced and combined with BDEE to form the semi-supervised BDEE, or semi-BDEE for short. To justify the effectiveness of the proposed BDEE and semi-BDEE, we compare them against conventional RF algorithms and show a significant improvement in terms of accuracy and stability on a subset of the Corel image gallery.

  1. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we try to build a CBIR system based on learning a distance/similarity function using Linear Discriminant Analysis (LDA) and Histogram of Oriented Gradients (HoG) features. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA can decrease execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. Inaccuracy in our experiments arose because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.
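
Learning a discriminative distance with LDA, as described above, can be illustrated with a two-class Fisher discriminant over feature vectors; the HoG extraction is not shown, and the data and distance definition below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher LDA direction separating two classes of feature vectors (rows)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = (np.cov(X0.T, ddof=1) * (len(X0) - 1)
          + np.cov(X1.T, ddof=1) * (len(X1) - 1))
    Sw += 1e-6 * np.eye(Sw.shape[0])        # small ridge for numerical stability
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

def lda_distance(a, b, w):
    """Distance between two descriptors measured along the learned direction."""
    return abs(float(w @ (a - b)))
```

The learned direction emphasizes the feature components that separate the classes, so distances along it are cheap to evaluate at query time.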

  2. Using deep learning for content-based medical image retrieval

    Science.gov (United States)

    Sun, Qinpei; Yang, Yuanyuan; Sun, Jianyong; Yang, Zhiming; Zhang, Jianguo

    2017-03-01

    Content-based medical image retrieval (CBMIR) has been a highly active research area for the past few years. The retrieval performance of a CBMIR system crucially depends on the feature representation, which has been extensively studied by researchers for decades. Although a variety of techniques have been proposed, feature representation remains one of the most challenging problems in current CBMIR research, mainly due to the well-known "semantic gap" between low-level image pixels captured by machines and high-level semantic concepts perceived by humans [1]. Recent years have witnessed important advances in machine learning. One breakthrough technique is known as "deep learning". Unlike conventional machine learning methods, which often use "shallow" architectures, deep learning mimics the human brain, which is organized in a deep architecture and processes information through multiple stages of transformation and representation. This means that we do not need to spend enormous effort extracting features manually. In this presentation, we propose a novel framework which uses deep learning to retrieve medical images, improving the accuracy and speed of CBIR in an integrated RIS/PACS.

  3. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    Science.gov (United States)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in the offshore industry. Even during single surveys by ROVs or towed systems, large amounts of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives, more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane, characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats, an automated image analysis technique was developed which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed similarities with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for the computation of geochemical budgets and the identification of habitats at the seafloor.

  4. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    Science.gov (United States)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation for colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearances of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearances of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
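
The color-histogram component of the proposed method can be sketched as a joint RGB histogram compared by histogram intersection; the HLAC features and the mucosa-enhancement step are not reproduced here, and the bin count below is an arbitrary choice.

```python
import numpy as np

def color_histogram(img, bins=4):
    """Joint RGB histogram (bins**3 cells), normalized; img is HxWx3, uint8."""
    hist, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                             bins=(bins,) * 3, range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Histogram intersection similarity in [0, 1] (1 = identical histograms)."""
    return float(np.minimum(h1, h2).sum())
```

A retrieval system would rank database images by this similarity (possibly combined with a geometric feature such as HLAC) against the query image's histogram.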

  5. Incorporating Semantics into Data Driven Workflows for Content Based Analysis

    Science.gov (United States)

    Argüello, M.; Fernandez-Prieto, M. J.

    Finding meaningful associations between text elements and knowledge structures within clinical narratives in a highly verbal domain, such as psychiatry, is a challenging goal. The research presented here uses a small corpus of case histories and brings into play pre-existing knowledge, and therefore complements other approaches that use large corpora (millions of words) and no pre-existing knowledge. The paper describes a variety of experiments for content-based analysis: Linguistic Analysis using NLP-oriented approaches, Sentiment Analysis, and Semantically Meaningful Analysis. Although it is not standard practice, the paper advocates providing automatic support to annotate the functionality as well as the data for each experiment by performing semantic annotation that uses OWL and OWL-S. Lessons learnt can be transmitted to legacy clinical databases facing the conversion of clinical narratives according to prominent Electronic Health Records standards.

  6. Evaluation of shape indexing methods for content-based retrieval of x-ray images

    Science.gov (United States)

    Antani, Sameer; Long, L. Rodney; Thoma, George R.; Lee, Dah-Jye

    2003-01-01

    Efficient content-based image retrieval of biomedical images is a challenging problem of growing research interest. Feature representation algorithms used in indexing medical images on the pathology of interest have to address conflicting goals of reducing feature dimensionality while retaining important and often subtle biomedical features. At the Lister Hill National Center for Biomedical Communications, an R&D division of the National Library of Medicine, we are developing a content-based image retrieval system for digitized images of a collection of 17,000 cervical and lumbar x-rays taken as a part of the second National Health and Nutrition Examination Survey (NHANES II). Shape is the only feature that effectively describes various pathologies identified by medical experts as being consistently and reliably found in the image collection. In order to determine if the state of the art in shape representation methods is suitable for this application, we have evaluated representative algorithms selected from the literature. The algorithms were tested on a subset of 250 vertebral shapes. In this paper we present the requirements of an ideal algorithm, define the evaluation criteria, and present the results and our analysis of the evaluation. We observe that while the shape methods perform well on visual inspection of the overall shape boundaries, they fall short in meeting the needs of determining similarity between the vertebral shapes based on the pathology.
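
One classical shape representation of the kind evaluated in such studies is the Fourier descriptor of a closed boundary; the sketch below is not necessarily one of the algorithms the authors tested, but shows how translation and scale invariance are obtained.

```python
import numpy as np

def fourier_descriptor(boundary, n_coeffs=8):
    """Translation- and scale-invariant Fourier descriptor of a closed boundary.

    boundary: (N, 2) array of ordered (x, y) points along the shape outline.
    """
    z = boundary[:, 0] + 1j * boundary[:, 1]  # encode points as complex numbers
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                           # drop DC term -> translation invariance
    mags = np.abs(coeffs)
    scale = mags[1] if mags[1] > 0 else 1.0   # normalize by first harmonic -> scale invariance
    return (mags / scale)[:n_coeffs]
```

Two vertebral outlines would then be compared by a distance between their descriptor vectors; keeping only a few low-order coefficients reduces feature dimensionality while preserving the gross boundary shape.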

  7. Indexing, learning and content-based retrieval for special purpose image databases

    NARCIS (Netherlands)

    M.J. Huiskes (Mark); E.J. Pauwels (Eric)

    2005-01-01

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current

  8. Content-Based Analysis of Bumper Stickers in Jordan

    Directory of Open Access Journals (Sweden)

    Abdullah A. Jaradat

    2016-12-01

    This study has set out to investigate bumper stickers in Jordan, focusing mainly on the themes of the stickers. The study hypothesized that bumper stickers in Jordan reflect a wide range of topics, including social, economic, and political ones. As the first study of this phenomenon, it adopted content-based analysis to determine the basic topics. The study has found that the purpose of most bumper stickers is fun and humor; most of them are not serious and do not carry any biting messages. They do not present any criticism of the most dominant problems at the level of society, including racism, nepotism, anti-feminism, inflation, high taxes, and refugees. Another finding is that politics is still a taboo; no political bumper stickers were found in Jordan. Finally, the themes the stickers targeted are: lessons of life 28.85%; challenging or warning other drivers 16%; funny notes about social issues 12%; religious sayings 8%; treating the car as a female 7%; the low economic status of the driver 6%; love and treachery 5.5%; the prestigious status of the car 5%; envy 4%; nicknames for the car or the driver 4%; irony 3%; and English sayings 1.5%. Keywords: bumper stickers, themes, politics

  9. Design Guidelines for a Content-Based Image Retrieval Color-Selection Interface

    NARCIS (Netherlands)

    Eggen, Berry; van den Broek, Egon; van der Veer, Gerrit C.; Kisters, Peter M.F.; Willems, Rob; Vuurpijl, Louis G.

    2004-01-01

    In Content-Based Image Retrieval (CBIR) two query-methods exist: query-by-example and query-by-memory. The user either selects an example image or selects image features retrieved from memory (such as color, texture, spatial attributes, and shape) to define his query. Hitherto, research on CBIR

  10. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    Science.gov (United States)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-02-01

    The medical field has seen phenomenal improvement over recent years. The advent of computers, with corresponding increases in processing power and internet speed, has changed the face of medical technology. However, there is still scope for improvement of the technologies in use today. One of the many such technologies of medical aid is the detection of afflictions of the eye. Although a considerable body of research has been produced in this field, most of it fails to address how to take detection forward to a stage where it will benefit society at large. An automated system that can predict the current medical condition of a patient from a fundus image of the eye is yet to see the light of day. Such a system is explored in this paper by summarizing a number of techniques for fundus image feature extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval to develop an automation tool. Knowledge of the same would bring about worthy changes in the domain of exudate extraction from the eye. This is essential in cases where patients may not have access to the best technologies. This paper attempts a comprehensive summary of techniques for Content Based Image Retrieval (CBIR) and fundus image feature extraction, reviews a few choice methods of each, and explores ways to combine the two to the benefit of all.

  11. Automated and effective content-based image retrieval for digital mammography.

    Science.gov (United States)

    Singh, Vibhav Prakash; Srivastava, Subodh; Srivastava, Rajeev

    2018-01-01

    Nowadays, huge numbers of mammograms are generated in hospitals for the diagnosis of breast cancer. Content-based image retrieval (CBIR) can contribute to more reliable diagnosis by classifying query mammograms and retrieving similar mammograms already annotated with diagnostic descriptions and treatment results. Since labels, artifacts, and pectoral muscles present in mammograms can bias the retrieval procedures, automated detection and exclusion of these image noise patterns and/or non-breast regions is an essential pre-processing step. In this study, an efficient and automated CBIR system for mammograms was developed and tested. First, the pre-processing steps, including automatic label and artifact suppression, automatic pectoral muscle removal, and image enhancement using the adaptive median filter, were applied. Next, pre-processed images were segmented using the co-occurrence thresholds based seeded region growing algorithm. Furthermore, a set of image features, including shape, histogram-based statistical, Gabor, wavelet, and Gray Level Co-occurrence Matrix (GLCM) features, was computed from the segmented region. In order to select the optimal features, a minimum redundancy maximum relevance (mRMR) feature selection method was then applied. Finally, similar images were retrieved using the Euclidean distance similarity measure. Comparative experiments conducted on the benchmark Mammographic Image Analysis Society (MIAS) database confirmed the effectiveness of the proposed work, which achieved an average precision of 72% and 61.30% for the normal and abnormal classes of mammograms, respectively.
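The final step described above, ranking database images by Euclidean distance between feature vectors, can be sketched in a few lines. This is a generic illustration rather than the authors' implementation; the toy three-dimensional feature vectors and image IDs are invented for the example.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, database, k=3):
    # Rank (image_id, feature_vector) pairs by distance to the query
    ranked = sorted(database, key=lambda item: euclidean(query, item[1]))
    return [item[0] for item in ranked[:k]]

# Toy 3-D feature vectors standing in for the shape/texture descriptors
db = [("m1", [0.1, 0.9, 0.3]),
      ("m2", [0.8, 0.2, 0.5]),
      ("m3", [0.2, 0.8, 0.45])]
print(retrieve([0.15, 0.85, 0.35], db, k=2))  # → ['m1', 'm3']
```

In a real system the vectors would be the mRMR-selected shape, statistical, Gabor, wavelet and GLCM features computed from the segmented breast region.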

  12. Comparing features sets for content-based image retrieval in a medical-case database

    Science.gov (United States)

    Muller, Henning; Rosset, Antoine; Vallee, Jean-Paul; Geissbuhler, Antoine

    2004-04-01

    Content-based image retrieval systems (CBIRSs) have frequently been proposed for use in medical image databases and PACS. Still, only a few systems have been developed and used in a real clinical environment. It rather seems that medical professionals define their needs and computer scientists develop systems based on data sets they receive, with little or no interaction between the two groups. A first study on the diagnostic use of medical image retrieval also shows an improvement in diagnostics when using CBIRSs, which underlines the potential importance of this technique. This article explains the use of an open source image retrieval system (GIFT - GNU Image Finding Tool) for the retrieval of medical images in the medical case database system CasImage, which is used in daily clinical routine in the university hospitals of Geneva. Although the base system of GIFT shows unsatisfactory performance, even small changes in the feature space are shown to significantly improve the retrieval results. The performance of variations in feature space with respect to color (gray level) quantizations and changes in texture analysis (Gabor filters) is compared. Whereas stock photography relies mainly on colors for retrieval, medical images need a large number of gray levels for successful retrieval, especially when executing feedback queries. The results also show that too fine a granularity in the gray levels lowers the retrieval quality, especially with single-image queries. For the evaluation of the retrieval performance, a subset of the entire case database of more than 40,000 images is taken, with a total of 3752 images. Ground truth was generated by a user who defined the expected query result of a perfect system by selecting images relevant to a given query image. The results show that a smaller number of gray levels (32 - 64) leads to better retrieval performance, especially when using relevance feedback. 
The use of more scales and directions for the Gabor filters in the
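The gray-level quantization whose granularity the study varies can be illustrated with a minimal histogram feature: 8-bit pixel values are binned into a configurable number of gray levels (e.g. 32) and the normalized histograms compared. This is an illustrative sketch, not GIFT's actual feature code.

```python
def gray_histogram(pixels, levels=32):
    # Quantize 8-bit gray values (0..255) into `levels` bins and
    # return the normalized histogram as a feature vector.
    hist = [0] * levels
    for p in pixels:
        hist[p * levels // 256] += 1
    total = len(pixels)
    return [h / total for h in hist]

def l1_distance(h1, h2):
    # Simple L1 (city-block) distance between two histograms
    return sum(abs(a - b) for a, b in zip(h1, h2))

img = [0, 10, 200, 255, 128, 130]  # toy "image" as a flat pixel list
h32 = gray_histogram(img, levels=32)
print(l1_distance(h32, gray_histogram(img, levels=32)))  # → 0.0
```

Varying `levels` (e.g. 32, 64, 256) mimics the quantization experiments: coarser bins generalize better, while too many bins make single-image queries brittle.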

  13. Adapting content-based image retrieval techniques for the semantic annotation of medical images.

    Science.gov (United States)

    Kumar, Ashnil; Dyer, Shane; Kim, Jinman; Li, Changyang; Leong, Philip H W; Fulham, Michael; Feng, Dagan

    2016-04-01

    The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty in discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images.
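The retrieval-then-annotation idea, where the labels of the most similar retrieved images determine the annotation, can be sketched as a weighted nearest-neighbour vote. The inverse-distance weighting and the example labels below are illustrative assumptions, not the paper's exact scheme.

```python
from collections import defaultdict

def weighted_knn_annotate(dist_labels, k=3):
    # dist_labels: list of (distance, label) pairs for retrieved images.
    # Each of the k nearest neighbours votes for its label with weight
    # 1/(distance + eps), so closer images influence the annotation more.
    eps = 1e-6
    nearest = sorted(dist_labels)[:k]
    votes = defaultdict(float)
    for dist, label in nearest:
        votes[label] += 1.0 / (dist + eps)
    return max(votes, key=votes.get)

# Hypothetical retrieval result for a query CT slice
neighbours = [(0.2, "calcification"), (0.5, "normal"),
              (0.6, "calcification"), (1.2, "normal")]
print(weighted_knn_annotate(neighbours, k=3))  # → calcification
```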

  14. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly. Various techniques are being developed to retrieve/search digital information or data contained in the image. Traditional Text Based Image Retrieval System is not plentiful. Since it is time consuming as it require manual image annotation. Also, the image annotation differs with different peoples. An alternate to this is Content Based Image Retrieval (CBIR system. It retrieves/search for image using its contents rather the text, keywords etc. A lot of exploration has been compassed in the range of Content Based Image Retrieval (CBIR with various feature extraction techniques. Shape is a significant image feature as it reflects the human perception. Moreover, Shape is quite simple to use by the user to define object in an image as compared to other features such as Color, texture etc. Over and above, if applied alone, no descriptor will give fruitful results. Further, by combining it with an improved classifier, one can use the positive features of both the descriptor and classifier. So, a tryout will be made to establish an algorithm for accurate feature (Shape extraction in Content Based Image Retrieval (CBIR. The main objectives of this project are: (a To propose an algorithm for shape feature extraction using CBIR, (b To evaluate the performance of proposed algorithm and (c To compare the proposed algorithm with state of art techniques.

  15. Use of a JPEG-2000 Wavelet Compression Scheme for Content-Based Ophthalmologic Retinal Image Retrieval.

    Science.gov (United States)

    Lamard, Mathieu; Daccache, Wissam; Cazuguel, Guy; Roux, Christian; Cochener, Beatrice

    2005-01-01

    In this paper we propose a content-based image retrieval method for diagnosis aid in diabetic retinopathy. We characterize images without extracting significant features, using histograms obtained from images compressed with the JPEG-2000 wavelet scheme to build signatures. Retrieval is carried out by calculating signature distances between the query and database images, using a weighted distance between histograms. Retrieval efficiency is given for different standard types of JPEG-2000 wavelets and for different values of the histogram weights. A classified diabetic retinopathy image database was built to allow algorithm testing. On this image database, results are promising: the retrieval efficiency is higher than 70% for some lesion types.
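The weighted distance between histograms used to compare signatures can be sketched as follows. The weight values and signature lengths here are invented for illustration; in the paper the components would be histograms of JPEG-2000 wavelet coefficients, with weights tuned per subband.

```python
def weighted_histogram_distance(h1, h2, weights):
    # Weighted L1 distance between two signatures (e.g. per-subband
    # histograms); `weights` lets some components count more towards
    # similarity than others.
    assert len(h1) == len(h2) == len(weights)
    return sum(w * abs(a - b) for w, a, b in zip(weights, h1, h2))

sig_query = [0.4, 0.3, 0.3]  # toy normalized signature
sig_db    = [0.5, 0.2, 0.3]
print(weighted_histogram_distance(sig_query, sig_db, [2.0, 1.0, 1.0]))  # ≈ 0.3
```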

  16. Content-Based Image Retrieval Benchmarking: Utilizing color categories and color distributions

    NARCIS (Netherlands)

    van den Broek, Egon; Kisters, Peter M.F.; Vuurpijl, Louis G.

    From a human centered perspective three ingredients for Content-Based Image Retrieval (CBIR) were developed. First, with their existence confirmed by experimental data, 11 color categories were utilized for CBIR and used as input for a new color space segmentation technique. The complete HSI color

  17. Content based image retrieval using local binary pattern operator and data mining techniques.

    Science.gov (United States)

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases using feature vectors extracted from the images. These feature vectors globally describe the visual content of an image, e.g. its texture, colour, shape, and spatial relations. Herein, we propose defining feature vectors using the Local Binary Pattern (LBP) operator. A study was performed to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant was then used to build an ultrasound image database and a database of images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
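The basic LBP operator underlying such feature vectors can be sketched directly: each pixel receives an 8-bit code obtained by thresholding its 3x3 neighbourhood against the centre value. The paper evaluates several LBP variants; this is only the classic formulation, and a feature vector would then typically be the histogram of these codes over the image.

```python
def lbp_code(patch):
    # Classic 3x3 LBP: threshold the 8 neighbours against the centre
    # pixel and read them off as an 8-bit code (clockwise from top-left).
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # → 241
```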

  18. Content-based image retrieval of digitized histopathology in boosted spectrally embedded spaces

    Directory of Open Access Journals (Sweden)

    Akshay Sridhar

    2015-01-01

    Context: Content-based image retrieval (CBIR) systems allow for retrieval of images from within a database that are similar in visual content to a query image. This is useful for digital pathology, where text-based descriptors alone might be inadequate to accurately describe image content. By representing images via a set of quantitative image descriptors, the similarity between a query image and archived, annotated images in a database can be computed and the most similar images retrieved. Recently, non-linear dimensionality reduction methods have become popular for embedding high-dimensional data into a reduced-dimensional space while preserving local object adjacencies, thereby allowing for object similarity to be determined more accurately in the reduced-dimensional space. However, most dimensionality reduction methods implicitly assume, in computing the reduced-dimensional representation, that all features are equally important. Aims: In this paper we present boosted spectral embedding (BoSE), which utilizes a boosted distance metric to selectively weight individual features (based on training data) to subsequently map the data into a reduced-dimensional space. Settings and Design: BoSE is evaluated against spectral embedding (SE), which employs equal feature weighting, in the context of CBIR of digitized prostate and breast cancer histopathology images. Materials and Methods: The following datasets, comprising a total of 154 hematoxylin and eosin stained histopathology images, were used: (1) prostate cancer histopathology (benign vs. malignant), (2) estrogen receptor positive (ER+) breast cancer histopathology (low vs. high grade), and (3) HER2+ breast cancer histopathology (low vs. high levels of lymphocytic infiltration). Statistical Analysis Used: We plotted and calculated the area under precision-recall curves (AUPRC) and calculated classification accuracy using the Random Forest classifier. 
Results: BoSE outperformed SE both

  19. Quantifying the margin sharpness of lesions on radiological images for content-based image retrieval

    International Nuclear Information System (INIS)

    Xu Jiajing; Napel, Sandy; Greenspan, Hayit; Beaulieu, Christopher F.; Agrawal, Neeraj; Rubin, Daniel

    2012-01-01

    Equivalence across deformations was assessed using Schuirmann's paired two one-sided tests. Results: In simulated images, the concordance correlation between measured gradient and actual gradient was 0.994. The mean (s.d.) NDCG scores for the retrieval of K images, K = 5, 10, and 15, were 84% (8%), 85% (7%), and 85% (7%) for CT images containing liver lesions, and 82% (7%), 84% (6%), and 85% (4%) for CT images containing lung nodules, respectively. The authors' proposed method outperformed the two existing margin characterization methods in average NDCG scores over all K, by 1.5% and 3% in datasets containing liver lesions, and 4.5% and 5% in datasets containing lung nodules. Equivalence testing showed that the authors' feature is more robust across all margin deformations (p < 0.05) than the two existing methods for margin sharpness characterization in both simulated and clinical datasets. Conclusions: The authors have described a new image feature to quantify the margin sharpness of lesions. It has strong correlation with known margin sharpness in simulated images and in clinical CT images containing liver lesions and lung nodules. This image feature has excellent performance for retrieving images with similar margin characteristics, suggesting potential utility, in conjunction with other lesion features, for content-based image retrieval applications.

  20. Multiscale Distance Coherence Vector Algorithm for Content-Based Image Retrieval

    Science.gov (United States)

    Jiexian, Zeng; Xiupeng, Liu

    2014-01-01

    A multiscale distance coherence vector algorithm for content-based image retrieval (CBIR) is proposed to address two shortcomings of the distance coherence vector algorithm: different shapes can yield the same descriptor, and its noise robustness is poor. In this algorithm, the image contour curve is first evolved with a Gaussian function, and the distance coherence vector is then extracted from the contours of both the original image and the evolved images. The multiscale distance coherence vector is obtained by a weighted combination of the distance coherence vectors of the evolved image contours. The algorithm is not only invariant under translation, rotation, and scaling transformations but also robust to noise. The experimental results show that the algorithm achieves higher recall and precision rates when retrieving images polluted by noise. PMID:24883416

  1. Efficient content-based low-altitude images correlated network and strips reconstruction

    Science.gov (United States)

    He, Haiqing; You, Qi; Chen, Xiaoyong

    2017-01-01

    Manual intervention is widely used to reconstruct strips for subsequent aerial triangulation in low-altitude photogrammetry, but such intervention is clearly unsuitable for fully automatic photogrammetric data processing. In this paper, we explore a content-based approach for strip reconstruction that requires no manual intervention or external information. Feature descriptors of local spatial patterns are extracted with SIFT to construct a vocabulary tree, in which the features are encoded with the TF-IDF numerical statistic to generate a new representation for each low-altitude image. The image correlation network is then reconstructed via similarity measures, image matching, and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide a rough relative orientation for further aerial triangulation.
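The TF-IDF encoding applied to quantized SIFT descriptors (visual words) can be sketched by treating each image as a "document" of word IDs. The toy word lists below are invented for illustration; a real pipeline would first quantize SIFT descriptors against the vocabulary tree to obtain these IDs.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    # docs: one list of visual-word IDs per image.
    # Returns one {word: tf-idf weight} dict per image.
    n = len(docs)
    df = Counter()                     # document frequency per word
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        vectors.append({w: (tf[w] / total) * math.log(n / df[w]) for w in tf})
    return vectors

images = [["w1", "w2", "w2"], ["w1", "w3"], ["w3", "w3", "w2"]]
vecs = tf_idf_vectors(images)
# e.g. "w2" in the first image: tf = 2/3, idf = log(3/2)
```

Cosine similarity between these sparse vectors would then drive the image-correlation step.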

  2. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Manyu Xiao

    2013-01-01

    Content-based image retrieval is nowadays one of the possible and promising solutions for managing image databases effectively. However, with the large number of images, there still exists a great discrepancy between users' expectations (accuracy and efficiency) and the real performance of image retrieval. In this work, new optimization strategies are proposed for vocabulary tree building, retrieval, and matching methods. More precisely, a new clustering strategy combining classification and the conventional K-means method is first defined. Then a new matching technique is built to eliminate the error caused by large-scale scale-invariant feature transform (SIFT). Additionally, a new unit mechanism is proposed to reduce the cost of indexing time. Finally, the numerical results show that excellent performance is obtained in both accuracy and efficiency based on the proposed improvements for image retrieval.

  3. Content Based medical image retrieval based on BEMD: optimization of a similarity metric.

    Science.gov (United States)

    Jai-Andaloussi, Said; Lamard, Mathieu; Cazuguel, Guy; Tairi, Hamid; Meknassi, Mohamed; Cochener, Beatrice; Roux, Christian

    2010-01-01

    Most medical images are now digitized and stored in patient file databases. The challenge is how to use them for acquiring knowledge and/or for diagnosis aid. In this paper, we address the challenge of diagnosis aid by Content Based Image Retrieval (CBIR). We propose to characterize images using the Bidimensional Empirical Mode Decomposition (BEMD), by which images are decomposed into a set of functions named Bidimensional Intrinsic Mode Functions (BIMFs). Two methods are used to characterize the information content of the BIMFs: Generalized Gaussian density functions (GGD) and the Huang-Hilbert transform (HHT). In order to enhance results, we introduce a similarity metric optimization process: weighted distances between BIMFs are adapted for each image in the database. Retrieval efficiency is given for different databases (DB), including a diabetic retinopathy DB, a mammography DB and a faces DB. Results are promising: the retrieval efficiency is higher than 95% for some cases.

  4. Parallel content-based sub-image retrieval using hierarchical searching.

    Science.gov (United States)

    Yang, Lin; Qi, Xin; Xing, Fuyong; Kurc, Tahsin; Saltz, Joel; Foran, David J

    2014-04-01

    The capacity to systematically search through large image collections and ensembles and detect regions exhibiting similar morphological characteristics is central to pathology diagnosis. Unfortunately, the primary methods used to search digitized, whole-slide histopathology specimens are slow and prone to inter- and intra-observer variability. The central objective of this research was to design, develop, and evaluate a content-based image retrieval system to assist doctors in quick and reliable comparative search for similar prostate image patches. Given a representative image patch (sub-image), the algorithm returns a ranked ensemble of image patches throughout the entire whole-slide histology section that exhibit the most similar morphologic characteristics. This is accomplished by first performing hierarchical searching based on a newly developed hierarchical annular histogram (HAH). The set of candidates is then further refined in a second stage of processing by computing a color histogram from eight equally divided segments within each square annular bin defined in the original HAH. A demand-driven master-worker parallelization approach is employed to speed up the searching procedure. Using this strategy, the query patch is broadcast to all worker processes; each worker process is dynamically assigned an image by the master process to search for, and return, a ranked list of similar patches in that image. The algorithm was tested using digitized hematoxylin and eosin (H&E) stained prostate cancer specimens and achieved excellent image retrieval performance: the recall rate within the first 40 ranked retrieved image patches is ∼90%. Both the testing data and source code can be downloaded from http://pleiad.umdnj.edu/CBII/Bioinformatics/.
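The annular-binning idea behind the HAH can be sketched by splitting a square patch into concentric square rings and computing a coarse gray histogram per ring. This single-level sketch is a simplification of the paper's hierarchical, two-stage color descriptor, invented here for illustration.

```python
def annular_histograms(patch, levels=4):
    # Partition a square patch into concentric square "rings" around the
    # centre and compute a coarse gray histogram per ring.
    n = len(patch)
    rings = (n + 1) // 2
    hists = [[0] * levels for _ in range(rings)]
    for i, row in enumerate(patch):
        for j, p in enumerate(row):
            edge = min(i, j, n - 1 - i, n - 1 - j)  # distance to nearest edge
            ring = rings - 1 - edge                 # relabel: ring 0 = centre
            hists[ring][p * levels // 256] += 1
    return hists

patch = [[0, 0, 0],
         [0, 255, 0],
         [0, 0, 0]]
print(annular_histograms(patch))  # → [[0, 0, 0, 1], [8, 0, 0, 0]]
```

Because the rings are concentric, the resulting descriptor is insensitive to rotation of the patch about its centre, which is part of the appeal of annular binning.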

  5. System for accessing a collection of histology images using content-based strategies

    International Nuclear Information System (INIS)

    Gonzalez F; Caicedo J C; Cruz Roa A; Camargo, J; Spinel, C

    2010-01-01

    Histology images are an important resource for research, education and medical practice. The availability of image collections for reference purposes is limited to printed formats such as books and specialized journals. When histology image sets are published in digital formats, they are composed of some tens of images that do not represent the wide diversity of biological structures found in the fundamental tissues; making a complete histology image collection available to the general public would thus have a great impact on research and education in areas such as medicine, biology and the natural sciences. This work presents the acquisition process of a histology image collection with 20,000 samples in digital format, from tissue processing to digital image capture. The main purpose of collecting these images is to make them available as reference material to the academic community. In addition, this paper presents the design and architecture of a system to query and explore the image collection, using content-based image retrieval tools and text-based search on the annotations provided by experts. The system also offers novel image visualization methods to allow easy identification of interesting images among hundreds of possible pictures. The system has been developed using a service-oriented architecture and allows web-based access at http://www.informed.unal.edu.co

  7. Implementation and evaluation of a medical image management system with content-based retrieval support

    International Nuclear Information System (INIS)

    Carita, Edilson Carlos; Seraphim, Enzo; Honda, Marcelo Ossamu; Azevedo-Marques, Paulo Mazzoncini de

    2008-01-01

    Objective: the present paper describes the implementation and evaluation of a medical image management system with content-based retrieval support (PACS-CBIR), integrating modules for image acquisition, storage and distribution, text retrieval by keyword, and image retrieval by similarity. Materials and methods: internet-compatible technologies were utilized for the system implementation, with freeware and the C++, PHP and Java languages on a Linux platform. There is a DICOM-compatible image management module and two query modules, one of them based on text and the other on similarity of image texture attributes. Results: the results demonstrate appropriate image management and storage, and the image retrieval time, always < 15 sec, was found to be good by users. The evaluation of retrieval by similarity demonstrated that the selected image feature extractor allowed the sorting of images according to anatomical areas. Conclusion: based on these results, one can conclude that the PACS-CBIR implementation is feasible. The system has demonstrated to be DICOM-compatible and can be integrated with the local information system. The similarity-based image retrieval functionality can be enhanced by the introduction of further descriptors. (author)

  8. Content Based Radiographic Images Indexing and Retrieval Using Pattern Orientation Histogram

    Directory of Open Access Journals (Sweden)

    Abolfazl Lakdashti

    2008-06-01

    Introduction: Content Based Image Retrieval (CBIR) is a method for searching and retrieving images in a database. In medical applications, CBIR is a tool used by physicians to compare previous and current medical images associated with patients' pathological conditions. As the volume of pictorial information stored in medical image databases grows, efficient image indexing and retrieval is increasingly becoming a necessity. Materials and Methods: This paper presents a new content based radiographic image retrieval approach based on a histogram of pattern orientations, namely the pattern orientation histogram (POH). POH represents the spatial distribution of five different pattern orientations: vertical, horizontal, diagonal down/left, diagonal down/right and non-orientation. In this method, a given image is first divided into image-blocks and the frequency of each type of pattern is determined in each image-block. Then, local pattern histograms for each of these image-blocks are computed. Results: The method was compared to two well-known texture-based image retrieval methods: Tamura and the Edge Histogram Descriptors (EHD) in the MPEG-7 standard. Experimental results based on the 10000-image IRMA radiography dataset demonstrate that POH provides better precision and recall rates than Tamura and EHD. For some images, the recall and precision rates obtained by POH are, respectively, 48% and 18% better than the best of the two above mentioned methods. Discussion and Conclusion: Since we exploit the absolute location of the pattern in the image as well as its global composition, the proposed matching method can retrieve semantically similar medical images.
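The five-way orientation classification behind a POH-style descriptor can be sketched from per-block gradients. The angle ranges, the magnitude threshold, and the use of a single (gx, gy) pair per block are illustrative assumptions; the paper defines the patterns over image-blocks rather than raw gradients.

```python
import math

def pattern_orientation(gx, gy, threshold=1.0):
    # Classify a block's dominant gradient into one of the five POH
    # categories. An edge's orientation is perpendicular to its gradient,
    # so a mostly-horizontal gradient means a vertical pattern.
    if math.hypot(gx, gy) < threshold:
        return "non-orientation"
    angle = math.degrees(math.atan2(gy, gx)) % 180
    if angle < 22.5 or angle >= 157.5:
        return "vertical"
    if 67.5 <= angle < 112.5:
        return "horizontal"
    if 22.5 <= angle < 67.5:
        return "diagonal down/left"
    return "diagonal down/right"

def poh(blocks):
    # Histogram of pattern orientations over all image-blocks,
    # given one (gx, gy) gradient estimate per block.
    categories = ["vertical", "horizontal", "diagonal down/left",
                  "diagonal down/right", "non-orientation"]
    hist = dict.fromkeys(categories, 0)
    for gx, gy in blocks:
        hist[pattern_orientation(gx, gy)] += 1
    return hist

print(poh([(5, 0), (0, 5), (0.1, 0.2)]))
```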

  9. Improving performance of content based image retrieval system with color features

    Directory of Open Access Journals (Sweden)

    Aleš Hladnik

    2017-04-01

    Content based image retrieval (CBIR) encompasses a variety of techniques with the goal of solving the problem of searching for digital images in a large database by their visual content. Applications where the retrieval of similar images plays a crucial role include personal photo and art collections, medical imaging, multimedia publications and video surveillance. The main objective of our study was to try to improve the performance of a query-by-example image retrieval system based on texture features – Gabor wavelet and wavelet transform – by augmenting it with color information about the images, in particular the color histogram, color autocorrelogram and color moments. The Wang image database, comprising 1000 natural color images grouped into 10 categories of 100 images, was used for testing the individual algorithms. Each image in the database served as a query image and the retrieval performance was evaluated by means of precision and recall. The number of retrieved images ranged from 10 to 80. The best CBIR performance was obtained when implementing a combination of all 190 texture and color features. Only slightly worse were the average precision and recall for the texture- and color-histogram-based system. This result was somewhat surprising, since color histogram features provide no color spatial information. We observed a 23% increase in average precision when comparing the system containing a combination of texture and all color features with the one consisting exclusively of texture descriptors, when using the Euclidean distance measure and 20 retrieved images. Addition of the color autocorrelogram features to the texture descriptors had virtually no effect on the performance, while only minor improvement was detected when adding the first two color moments – the mean and the standard deviation. Similar to what was found in previous studies with the same image database, average precision was very high in the case of dinosaurs and flowers and very low
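The precision and recall used to evaluate retrieval in studies like this are simple to compute: precision is the fraction of retrieved images that are relevant, recall the fraction of relevant images that were retrieved (on the Wang database, the relevant set for a query is its 100-image category). The image IDs below are invented for illustration.

```python
def precision_recall(retrieved, relevant):
    # Precision: fraction of retrieved images that are relevant.
    # Recall: fraction of all relevant images that were retrieved.
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

retrieved = [f"img{i}" for i in range(20)]         # 20 retrieved images
relevant = [f"img{i}" for i in range(0, 40, 2)]    # 20 relevant, half retrieved
p, r = precision_recall(retrieved, relevant)
print(p, r)  # → 0.5 0.5
```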

  10. Toward content-based image retrieval with deep convolutional neural networks

    Science.gov (United States)

    Sklan, Judah E. S.; Plassard, Andrew J.; Fabbri, Daniel; Landman, Bennett A.

    2015-03-01

    Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep Convolutional Neural Networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing dimensionality of an input scaled to 128x128 to an output encoded layer of 4x384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques.
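The retrieval step in systems like this one amounts to ranking database images by similarity of their encoded vectors to the query's encoding. A minimal sketch follows; the encoder itself is omitted (the encoded vectors are taken as given) and cosine similarity is an assumed choice of ranking function, not necessarily the paper's:

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine of the angle between two encoded vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query_code, database_codes, k=5):
    """Rank database images (name -> encoded vector) by similarity
    to the query's encoded vector; return the top-k names."""
    ranked = sorted(database_codes.items(),
                    key=lambda kv: cosine_similarity(query_code, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```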

  11. Combining semantic technologies with a content-based image retrieval system - Preliminary considerations

    Science.gov (United States)

    Chmiel, P.; Ganzha, M.; Jaworska, T.; Paprzycki, M.

    2017-10-01

    Nowadays, as part of the systematic growth in the volume and variety of information that can be found on the Internet, we also observe a dramatic increase in the sizes of available image collections. There are many ways to help users browse and select images of interest. One popular approach is Content-Based Image Retrieval (CBIR) systems, which allow users to search for images that match their interests, expressed in the form of images (query by example). However, we believe that image search and retrieval could take advantage of semantic technologies. We have decided to test this hypothesis. Specifically, on the basis of knowledge captured in the CBIR, we have developed a domain ontology of residential real estate (detached houses, in particular). This allows us to semantically represent each image (and its constitutive architectural elements) represented within the CBIR. The proposed ontology was extended to capture not only the elements resulting from image segmentation, but also "spatial relations" between them. As a result, a new approach to querying the image database (semantic querying) has materialized, thus extending the capabilities of the developed system.

  12. Wavelet optimization for content-based image retrieval in medical databases.

    Science.gov (United States)

    Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C

    2010-04-01

    We propose in this article a content-based image retrieval (CBIR) method for diagnosis aid in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the most similar images in a database when a query image is submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases: one for diabetic retinopathy follow up and one for screening mammography, as well as a general purpose database. Results are promising: a mean precision of 56.50%, 70.91% and 96.10% is achieved for these three databases, when five images are returned by the system. Copyright 2009 Elsevier B.V. All rights reserved.
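The signature-and-distance scheme described in this record (per-subband statistics of wavelet coefficients, compared with tunable inter-subband weights) can be sketched as follows. The mean-absolute-value/energy pair and the weighted L1 combination are illustrative stand-ins for the paper's distribution model and optimized distance, not its exact formulas:

```python
def subband_signature(coeffs):
    """Characterize each subband (a list of wavelet coefficients) by the
    mean of |c| and the mean energy -- a simple distribution summary."""
    sig = []
    for band in coeffs:
        n = len(band)
        m = sum(abs(c) for c in band) / n
        e = sum(c * c for c in band) / n
        sig.append((m, e))
    return sig

def weighted_distance(sig_a, sig_b, weights):
    """Compare two signatures subband by subband; the per-subband weights
    are the tunable parameters that an optimization procedure would fit."""
    return sum(w * (abs(a[0] - b[0]) + abs(a[1] - b[1]))
               for w, a, b in zip(weights, sig_a, sig_b))
```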

  13. An efficient similarity measure for content based image retrieval using memetic algorithm

    Directory of Open Access Journals (Sweden)

    Mutasem K. Alsmadi

    2017-06-01

    Full Text Available Content-based image retrieval (CBIR) systems work by retrieving images that are related to the query image (QI) from huge databases. The available CBIR systems extract limited feature sets, which confines their retrieval efficacy. In this work, an extensive set of robust and important features was extracted from the image database and stored in a feature repository. This feature set is composed of a color signature together with shape and color-texture features. Features are extracted from the given QI in the same fashion. Consequently, a novel similarity evaluation is performed between the features of the QI and the features of the database images, using a meta-heuristic called a memetic algorithm (a genetic algorithm with great deluge). Our proposed CBIR system is assessed by querying a number of images from the test dataset, and the efficiency of the system is evaluated by calculating precision-recall values for the results. The results were superior to other state-of-the-art CBIR systems with regard to precision.

  14. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze

    2017-04-24

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, up to now, there is no existing similarity learning method proposed to optimize the top precision measure. To fill this gap, in this paper, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem with an objective function as the combination of the losses of the relevant images ranked behind the top-ranked irrelevant image, and the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. The experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.
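The objective described in this record combines losses of relevant images ranked behind the top-ranked irrelevant image with a squared Frobenius norm regularizer. A sketch of evaluating such an objective is below; the unit hinge margin and the regularization weight `lam` are assumptions for illustration, not values from the paper:

```python
def top_precision_loss(scores_rel, scores_irr, W, lam=0.1):
    """Hinge-style loss: penalize each relevant image whose similarity
    score falls below (a margin above) the top-ranked irrelevant image,
    plus lam times the squared Frobenius norm of the parameter matrix W."""
    top_irr = max(scores_irr)
    hinge = sum(max(0.0, 1.0 + top_irr - s) for s in scores_rel)
    frob2 = sum(w * w for row in W for w in row)
    return hinge + lam * frob2
```

Minimizing this over the similarity parameters, as the paper does via quadratic programming, pushes relevant images above the best irrelevant one, which is exactly what the top precision measure rewards.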

  15. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

    Science.gov (United States)

    Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo

    2010-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. Computed Tomography (CT) scanning is widely used in the diagnosis of TBI. Nowadays, a large amount of TBI CT data is stored in hospital radiology departments. Such data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to search for cases relevant to the current study case. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system which works on TBI CT images. In this web-based system, users can query by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. Specifically, cases of TBI CT images often present diffuse or focal lesions. In the TBIdoc system, these pathological image features are represented as bin-based binary feature vectors. We use the Jaccard-Needham measure as the similarity measurement. Based on these, we propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, which shows that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support to the clinical decision-making process. It may also contribute to computer-aided education in TBI.
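The Jaccard similarity on binary bin vectors mentioned here, plus one plausible way to aggregate slice-level scores into a 3D series score, can be sketched as follows. The aggregation (best match per query slice, averaged) is an assumption; the paper's exact 3D measure is not given above:

```python
def jaccard_similarity(a, b):
    """Jaccard coefficient between two binary bin vectors:
    |intersection| / |union| of the set bits."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0

def series_similarity(query_slices, case_slices):
    """Score two CT series: for each query slice take its best-matching
    case slice, then average (an illustrative 3D aggregation)."""
    scores = [max(jaccard_similarity(q, c) for c in case_slices)
              for q in query_slices]
    return sum(scores) / len(scores)
```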

  16. A content-based digital image watermarking scheme resistant to local geometric distortions

    International Nuclear Information System (INIS)

    Yang, Hong-ying; Chen, Li-li; Wang, Xiang-yang

    2011-01-01

    Geometric distortion is known as one of the most difficult attacks to resist, as it can desynchronize the location of the watermark and hence cause incorrect watermark detection. Geometric distortion can be decomposed into two classes: global affine transforms and local geometric distortions. Most countermeasures proposed in the literature only address the problem of global affine transforms. It is a challenging problem to design a robust image watermarking scheme against local geometric distortions. In this paper, we propose a new content-based digital image watermarking scheme with good visual quality and reasonable resistance against local geometric distortions. Firstly, the robust feature points, which can survive various common image processing and global affine transforms, are extracted by using a multi-scale SIFT (scale invariant feature transform) detector. Then, the affine covariant local feature regions (LFRs) are constructed adaptively according to the feature scale and local invariant centroid. Finally, the digital watermark is embedded into the affine covariant LFRs by modulating the magnitudes of discrete Fourier transform (DFT) coefficients. By binding the watermark with the affine covariant LFRs, the watermark detection can be done without synchronization error. Experimental results show that the proposed image watermarking scheme is not only invisible and robust against common image processing operations such as sharpening, noise addition, and JPEG compression, etc., but also robust against global affine transforms and local geometric distortions.

  17. Adaptive image content-based exposure control for scanning applications in radiography

    NARCIS (Netherlands)

    Schulerud, H.; Thielemann, J.; Kirkhus, T.; Kaspersen, K.; Østby, J.M.; Metaxas, M.G.; Royle, G.J.; Griffiths, J.; Cook, E.; Esbrand, C.; Pani, S.; Venanzi, C.; van der Stelt, P.F.; Li, G.; Turchetta, R.; Fant, A.; Theodoridis, S.; Georgiou, H.; Hall, G.; Noy, M.; Jones, J.; Leaver, J.; Triantis, F.; Asimidis, A.; Manthos, N.; Longo, R.; Bergamaschi, A.; Speller, R.D.

    2007-01-01

    I-ImaS (Intelligent Imaging Sensors) is a European project which has designed and developed a new adaptive X-ray imaging system using on-line exposure control, to create locally optimized images. The I-ImaS system allows for real-time image analysis during acquisition, thus enabling real-time

  18. Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching

    Directory of Open Access Journals (Sweden)

    Tianyang Cao

    2017-01-01

    Full Text Available Self-localization and mapping are important for indoor mobile robots. We report a robust algorithm for map building and subsequent localization especially suited for indoor floor-cleaning robots. Common methods such as SLAM can easily be thrown off by collisions (the kidnapped-robot problem) or disturbed by similar-looking objects. Therefore, a keyframe-based global map building method for robot localization in multiple rooms and corridors is needed. Content-based image matching is the core of this method. It is designed for this situation by establishing keyframes containing both floor and distorted wall images. Image distortion, caused by the robot's view angle and movement, is analyzed and modeled. An image matching solution is then presented, consisting of extraction of the overlap regions of keyframes and rebuilding of the overlap regions through subblock matching. To improve accuracy, ceiling point detection and mismatched-subblock checking methods are incorporated. This matching method can process environment video effectively. In experiments, fewer than 5% of frames are extracted as keyframes to build the global map; these are widely spaced and overlap each other. Through this method, the robot can localize itself by matching its real-time vision frames against our keyframe map. Even with many similar objects or similar background in the environment, or when the robot is kidnapped, localization is achieved with a position RMSE <0.5 m.

  19. Content-Based Image Retrieval by Metric Learning From Radiology Reports: Application to Interstitial Lung Diseases.

    Science.gov (United States)

    Ramos, José; Kockelkorn, Thessa T J P; Ramos, Isabel; Ramos, Rui; Grutters, Jan; Viergever, Max A; van Ginneken, Bram; Campilho, Aurélio

    2016-01-01

    Content-based image retrieval (CBIR) is a search technology that could aid medical diagnosis by retrieving and presenting earlier reported cases that are related to the one being diagnosed. To retrieve relevant cases, CBIR systems depend on supervised learning to map low-level image contents to high-level diagnostic concepts. However, the annotation by medical doctors for training and evaluation purposes is a difficult and time-consuming task, which restricts the supervised learning phase to specific CBIR problems of well-defined clinical applications. This paper proposes a new technique that automatically learns the similarity between exams from textual distances extracted from radiology reports, thereby successfully reducing the number of annotations needed. Our method first infers the relation between patients by using information retrieval techniques to determine the textual distances between patient radiology reports. These distances are subsequently used to supervise a metric learning algorithm that transforms the image space according to the textual distances. CBIR systems with different image descriptions and different levels of medical annotations were evaluated, with and without supervision from textual distances, using a database of computed tomography scans of patients with interstitial lung diseases. The proposed method consistently improves CBIR mean average precision, with improvements that can reach 38%, and more marked gains for small annotation sets. Given the overall availability of radiology reports in picture archiving and communication systems, the proposed approach can be broadly applied to CBIR systems in different medical problems, and may facilitate the introduction of CBIR in clinical practice.
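The idea of supervising a metric with textual distances can be illustrated with a much-simplified stand-in: fit per-feature weights so that the weighted squared image distance for each pair of exams approximates the textual distance between their reports. This least-squares formulation is an assumption for illustration only; the paper's metric learning algorithm is more sophisticated:

```python
import numpy as np

def fit_text_supervised_metric(img_pairs_sq_diffs, text_dists):
    """Least-squares fit of nonnegative per-feature weights w so that
    sum_j w_j * (x_i - y_i)_j^2 matches the textual report distance for
    each image pair. Rows of img_pairs_sq_diffs are the per-feature
    squared differences for one pair."""
    A = np.asarray(img_pairs_sq_diffs, dtype=float)
    b = np.asarray(text_dists, dtype=float)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, None)  # keep the metric's weights nonnegative
```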

  20. PROTOTYPE CONTENT BASED IMAGE RETRIEVAL UNTUK DETEKSI PENYAKIT KULIT DENGAN METODE EDGE DETECTION

    Directory of Open Access Journals (Sweden)

    Erick Fernando

    2016-05-01

    Full Text Available Dermatologists visually examine the affected area, capture it with a digital camera, and ask about the history of the patient's disease, without making comparisons against previously available signs and symptoms when examining and estimating the type of skin disease. Processing of image data in digital form, especially medical images, is greatly needed, along with pre-processing. Many patients served in hospitals still have analog image data. This analog data requires special storage rooms to avoid mechanical damage. To overcome this problem, medical images are produced in digital form and stored in a database system, so that similarities to a new skin image can be examined. Images can be displayed after pre-processing, with similarity identified through Content-Based Image Retrieval (CBIR), which works by measuring the similarity of the query image to all images in the database, so that the query cost is directly proportional to the number of images in the database.

  1. Endowing a Content-Based Medical Image Retrieval System with Perceptual Similarity Using Ensemble Strategy.

    Science.gov (United States)

    Bedo, Marcos Vinicius Naves; Pereira Dos Santos, Davi; Ponciano-Silva, Marcelo; de Azevedo-Marques, Paulo Mazzoncini; Ferreira de Carvalho, André Ponce de León; Traina, Caetano

    2016-02-01

    Content-based medical image retrieval (CBMIR) is a powerful resource to improve differential computer-aided diagnosis. The major problem with CBMIR applications is the semantic gap, a situation in which the system does not follow the users' sense of similarity. This gap can be bridged by the adequate modeling of similarity queries, which ultimately depends on the combination of feature extractor methods and distance functions. In this study, such combinations are referred to as perceptual parameters, as they impact on how images are compared. In a CBMIR, the perceptual parameters must be manually set by the users, which imposes a heavy burden on the specialists; otherwise, the system will follow a predefined sense of similarity. This paper presents a novel approach to endow a CBMIR with a proper sense of similarity, in which the system defines the perceptual parameter depending on the query element. The method employs an ensemble strategy, where an extreme learning machine acts as a meta-learner and identifies the most suitable perceptual parameter according to a given query image. This parameter defines the search space for the similarity query that retrieves the most similar images. An instance-based learning classifier labels the query image following the query result set. As a proof of concept, we integrated the approach into a mammogram CBMIR. For each query image, the resulting tool provided a complete second opinion, including lesion class, system certainty degree, and set of most similar images. Extensive experiments on a large mammogram dataset showed that our proposal achieved a hit ratio up to 10% higher than the traditional CBMIR approach without requiring external parameters from the users. Our database-driven solution was also up to 25% faster than traditional content retrieval approaches.

  2. Content-based image retrieval of digitized histopathology in boosted spectrally embedded spaces.

    Science.gov (United States)

    Sridhar, Akshay; Doyle, Scott; Madabhushi, Anant

    2015-01-01

    Content-based image retrieval (CBIR) systems allow for retrieval of images from within a database that are similar in visual content to a query image. This is useful for digital pathology, where text-based descriptors alone might be inadequate to accurately describe image content. By representing images via a set of quantitative image descriptors, the similarity between a query image with respect to archived, annotated images in a database can be computed and the most similar images retrieved. Recently, non-linear dimensionality reduction methods have become popular for embedding high-dimensional data into a reduced-dimensional space while preserving local object adjacencies, thereby allowing for object similarity to be determined more accurately in the reduced-dimensional space. However, most dimensionality reduction methods implicitly assume, in computing the reduced-dimensional representation, that all features are equally important. In this paper we present boosted spectral embedding (BoSE), which utilizes a boosted distance metric to selectively weight individual features (based on training data) to subsequently map the data into a reduced-dimensional space. BoSE is evaluated against spectral embedding (SE) (which employs equal feature weighting) in the context of CBIR of digitized prostate and breast cancer histopathology images. The following datasets, which were comprised of a total of 154 hematoxylin and eosin stained histopathology images, were used: (1) prostate cancer histopathology (benign vs. malignant), (2) estrogen receptor (ER)+ breast cancer histopathology (low vs. high grade), and (3) HER2+ breast cancer histopathology (low vs. high levels of lymphocytic infiltration). We plotted and calculated the area under precision-recall curves (AUPRC) and calculated classification accuracy using the Random Forest classifier. BoSE outperformed SE both in terms of CBIR-based (area under the precision-recall curve) and classifier-based (classification accuracy) performance measures.

  3. Adaptive nonseparable wavelet transform via lifting and its application to content-based image retrieval.

    Science.gov (United States)

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Cochener, Béatrice; Roux, Christian

    2010-01-01

    We present in this paper a novel way to adapt a multidimensional wavelet filter bank, based on the nonseparable lifting scheme framework, to any specific problem. It allows the design of filter banks with a desired number of degrees of freedom, while controlling the number of vanishing moments of the primal wavelet (Ñ moments) and of the dual wavelet (N moments). The prediction and update filters, in the lifting scheme based filter banks, are defined as Neville filters of order Ñ and N, respectively. However, in order to introduce some degrees of freedom in the design, these filters are not defined as the simplest Neville filters. The proposed method is convenient: the same algorithm is used whatever the dimensionality of the signal, and whatever the lattice used. The method is applied to content-based image retrieval (CBIR): an image signature is derived from this new adaptive nonseparable wavelet transform. The method is evaluated on four image databases and compared to a similar CBIR system, based on an adaptive separable wavelet transform. The mean precision at five of the nonseparable wavelet based system is notably higher on three out of the four databases, and comparable on the other one. The proposed method also compares favorably with the dual-tree complex wavelet transform, an overcomplete nonseparable wavelet transform.
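The predict/update structure of the lifting scheme mentioned here can be sketched in one dimension with the simplest (Haar-like) filters. The paper's contribution is precisely to replace these with higher-order, multidimensional Neville filters, so this is only the structural skeleton:

```python
def lifting_forward(signal):
    """One level of a lifting-scheme wavelet transform.
    Split into even/odd samples, predict each odd sample from its even
    neighbor (detail = prediction error), then update the even samples
    with the details so the approximation preserves the local average."""
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]           # predict step
    approx = [e + d / 2.0 for e, d in zip(even, detail)]  # update step
    return approx, detail
```

Adapting the transform, as the paper does, amounts to parameterizing the predict and update filters and tuning those parameters against a retrieval performance measure.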

  4. Content-based Image Hiding Method for Secure Network Biometric Verification

    Directory of Open Access Journals (Sweden)

    Xiangjiu Che

    2011-08-01

    Full Text Available For secure biometric verification, most existing methods embed biometric information directly into the cover image, but content correlation analysis between the biometric image and the cover image is often ignored. In this paper, we propose a novel biometric image hiding approach based on content correlation analysis to protect the network-transmitted image. Using principal component analysis (PCA), the content correlation between the biometric image and the cover image is first analyzed. Then, based on a particle swarm optimization (PSO) algorithm, some regions of the cover image are selected to represent the biometric image, so that the cover image can carry partial content of the biometric image. As a result of the correlation analysis, the unrepresented part of the biometric image is embedded into the cover image by using the discrete wavelet transform (DWT). Combined with a human visual system (HVS) model, this approach makes the hiding result perceptually invisible. The extensive experimental results demonstrate that the proposed hiding approach is robust against some common frequency and geometric attacks; it also provides effective protection for secure biometric verification.

  5. Design and development of a content-based medical image retrieval system for spine vertebrae irregularity.

    Science.gov (United States)

    Mustapha, Aouache; Hussain, Aini; Samad, Salina Abdul; Zulkifley, Mohd Asyraf; Diyana Wan Zaki, Wan Mimi; Hamid, Hamzaini Abdul

    2015-01-16

    A content-based medical image retrieval (CBMIR) system enables medical practitioners to perform fast diagnosis through quantitative assessment of the visual information of various modalities. In this paper, a more robust CBMIR system that deals with both cervical and lumbar vertebrae irregularity is presented. It comprises three main phases, namely modelling, indexing and retrieval of the vertebrae image. The main tasks in the modelling phase are to improve and enhance the visibility of the x-ray image for better segmentation results using an active shape model (ASM). The segmented vertebral fractures are then characterized in the indexing phase using region-based fracture characterization (RB-FC) and contour-based fracture characterization (CB-FC). Upon a query, the characterized features are compared to the query image. Effectiveness of the retrieval phase is determined by its retrieval accuracy; thus, we propose an integration of the predictor-model-based cross-validation neural network (PMCVNN) and similarity matching (SM) in this stage. The PMCVNN task is to identify the correct vertebral irregularity class through classification, allowing the SM process to be more efficient. Retrieval performance between the proposed and the standard retrieval architectures are then compared using retrieval precision (Pr@M) and average group score (AGS) measures. Experimental results show that the new integrated retrieval architecture performs better than the standard CBMIR architecture, with retrieval results of cervical (AGS > 87%) and lumbar (AGS > 82%) datasets. The proposed CBMIR architecture shows encouraging results with high Pr@M accuracy. As a result, images from the same visualization class are returned for further use by medical personnel.

  6. Optimization of reference library used in content-based medical image retrieval scheme

    International Nuclear Information System (INIS)

    Park, Sang Cheol; Sukthankar, Rahul; Mummert, Lily; Satyanarayanan, Mahadev; Zheng Bin

    2007-01-01

    Building an optimal image reference library is a critical step in developing the interactive computer-aided detection and diagnosis (I-CAD) systems of medical images using content-based image retrieval (CBIR) schemes. In this study, the authors conducted two experiments to investigate (1) the relationship between I-CAD performance and size of reference library and (2) a new reference selection strategy to optimize the library and improve I-CAD performance. The authors assembled a reference library that includes 3153 regions of interest (ROI) depicting either malignant masses (1592) or CAD-cued false-positive regions (1561) and an independent testing data set including 200 masses and 200 false-positive regions. A CBIR scheme using a distance-weighted K-nearest neighbor algorithm is applied to retrieve references that are considered similar to the testing sample from the library. The area under the receiver operating characteristic curve (Az) is used as an index to evaluate the I-CAD performance. In the first experiment, the authors systematically increased reference library size and tested I-CAD performance. The result indicates that scheme performance improves initially from Az = 0.715 to 0.874 and then plateaus when the library size reaches approximately half of its maximum capacity. In the second experiment, based on the hypothesis that a ROI should be removed if it performs poorly compared to a group of similar ROIs in a large and diverse reference library, the authors applied a new strategy to identify 'poorly effective' references. By removing 174 identified ROIs from the reference library, I-CAD performance significantly increases to Az = 0.914 (p < 0.01). The study demonstrates that increasing reference library size and removing poorly effective references can significantly improve I-CAD performance.
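The distance-weighted K-nearest-neighbor retrieval used in this record can be sketched as below: similar references vote with weight inversely proportional to their distance, and the weighted fraction of malignant neighbors serves as the detection score. The inverse-distance weighting and the small epsilon are common conventions assumed here, not necessarily the authors' exact formula:

```python
def dwknn_score(query, library, k=5):
    """Distance-weighted k-NN: library is a list of (feature_vector, label)
    pairs with label 1 for malignant mass and 0 for false-positive region.
    Returns the weighted fraction of malignant neighbors among the top k."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    neighbors = sorted(library, key=lambda item: dist(query, item[0]))[:k]
    eps = 1e-6  # avoid division by zero for an exact match
    weights = [1.0 / (dist(query, f) + eps) for f, _ in neighbors]
    positive = sum(w for w, (_, label) in zip(weights, neighbors) if label == 1)
    return positive / sum(weights)
```

Sweeping a threshold over this score across the test set is what produces the ROC curve whose area (Az) the study reports.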

  7. Using an image-extended relational database to support content-based image retrieval in a PACS.

    Science.gov (United States)

    Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M

    2005-12-01

    This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. The images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed with the aim of efficiently answering similarity queries by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. Currently, the implemented system works on features based on the color distribution of the images, through normalized histograms as well as metric histograms. Metric histograms are invariant to scale, translation and rotation of images, and also to brightness transformations. The cbPACS is prepared to integrate new image features, based on texture and on the shape of the main objects in the image.

  8. Automated assessment of diabetic retinopathy severity using content-based image retrieval in multimodal fundus photographs.

    Science.gov (United States)

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Bekri, Lynda; Daccache, Wissam; Roux, Christian; Cochener, Béatrice

    2011-10-21

    Recent studies on diabetic retinopathy (DR) screening in fundus photographs suggest that disagreements between algorithms and clinicians are now comparable to disagreements among clinicians. The purpose of this study is to (1) determine whether this observation also holds for automated DR severity assessment algorithms, and (2) show the interest of such algorithms in clinical practice. A dataset of 85 consecutive DR examinations (168 eyes, 1176 multimodal eye fundus photographs) was collected at Brest University Hospital (Brest, France). Two clinicians with different experience levels determined DR severity in each eye, according to the International Clinical Diabetic Retinopathy Disease Severity (ICDRS) scale. Based on Cohen's kappa (κ) measurements, the performance of clinicians at assessing DR severity was compared to the performance of state-of-the-art content-based image retrieval (CBIR) algorithms from our group. At assessing DR severity in each patient, intraobserver agreement was κ = 0.769 for the most experienced clinician. Interobserver agreement between clinicians was κ = 0.526. Interobserver agreement between the most experienced clinicians and the most advanced algorithm was κ = 0.592. Besides, the most advanced algorithm was often able to predict agreements and disagreements between clinicians. Automated DR severity assessment algorithms, trained to imitate experienced clinicians, can be used to predict when young clinicians would agree or disagree with their more experienced fellow members. Such algorithms may thus be used in clinical practice to help validate or invalidate their diagnoses. CBIR algorithms, in particular, may also be used for pooling diagnostic knowledge among peers, with applications in training and coordination of clinicians' prescriptions.
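The agreement statistic used throughout this record, Cohen's kappa, corrects the observed agreement between two raters for the agreement expected by chance. A minimal implementation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two raters' label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(1 for x, y in zip(rater_a, rater_b) if x == y) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # chance agreement: product of each rater's marginal label frequencies
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

Applied to per-eye DR severity grades, this is how the intraobserver (κ = 0.769), interobserver (κ = 0.526), and clinician-algorithm (κ = 0.592) figures above would be computed.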

  9. Out-of-Sample Extrapolation utilizing Semi-Supervised Manifold Learning (OSE-SSL): Content Based Image Retrieval for Histopathology Images.

    Science.gov (United States)

    Sparks, Rachel; Madabhushi, Anant

    2016-06-06

    Content-based image retrieval (CBIR) retrieves database images most similar to the query image by (1) extracting quantitative image descriptors and (2) calculating similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low dimensional representation of the high dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, in the form of partial class labels, into a ML scheme such that the low dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score, which grades prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision recall curve (AUPRC) of 0.53 ± 0.03, compared with an AUPRC of 0.44 ± 0.01 for CBIR with Principal Component Analysis (PCA) used to learn the low dimensional space.
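    The PCA baseline mentioned above, learning a low dimensional space and retrieving nearest neighbors in it, can be sketched with a plain SVD. The OSE-SSL scheme itself involves semi-supervised manifold learning and is not reproduced here:

```python
import numpy as np

def pca_fit(X, k):
    """Learn a k-dimensional linear embedding (the PCA baseline)."""
    mean = X.mean(axis=0)
    # right singular vectors of the centered data give principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k].T              # (d,), (d, k) projection matrix

def retrieve(X_db, query, mean, W, top_k=3):
    """Rank database images by Euclidean distance in the embedded space."""
    Z_db = (X_db - mean) @ W
    z_q = (query - mean) @ W
    d = np.linalg.norm(Z_db - z_q, axis=1)
    return np.argsort(d)[:top_k]       # indices of the most similar images
```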

  10. Complex event processing for content-based text, image, and video retrieval

    NARCIS (Netherlands)

    Bowman, E.K.; Broome, B.D.; Holland, V.M.; Summers-Stay, D.; Rao, R.M.; Duselis, J.; Howe, J.; Madahar, B.K.; Boury-Brisset, A.C.; Forrester, B.; Kwantes, P.; Burghouts, G.; Huis, J. van; Mulayim, A.Y.

    2016-01-01

    This report summarizes the findings of an exploratory team of the North Atlantic Treaty Organization (NATO) Information Systems Technology panel into Content-Based Analytics (CBA). The team carried out a technical review into the current status of theoretical and practical developments of methods,

  11. A review of content-based image retrieval systems in medical applications-clinical benefits and future directions.

    Science.gov (United States)

    Müller, Henning; Michoux, Nicolas; Bandon, David; Geissbuhler, Antoine

    2004-02-01

    Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on the visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors, and objective image interpretation remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second largest producer of digital images, especially with videos of cardiac catheterization (approximately 1,800 exams per year containing almost 2,000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set and patient information can be stored with the actual image(s), although a few standardization problems remain. 
In several articles, content-based access to medical images for supporting clinical decision-making has been proposed that would ease the management of clinical data and scenarios for the integration of

  12. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Directory of Open Access Journals (Sweden)

    Meiyan Huang

    Full Text Available This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called the Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases beyond 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.

  13. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Science.gov (United States)

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Gao, Yang; Chen, Yang; Feng, Qianjin; Chen, Wufan; Lu, Zhentai

    2014-01-01

    This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called the Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases beyond 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.
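    As background, the bag-of-visual-words representation assigns each local descriptor to its nearest codebook word and histograms the counts. A generic sketch, assuming a precomputed codebook (the paper's partition learning and REML components are not reproduced):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantize local descriptors against a visual codebook and return
    the normalized visual-word-count histogram for one image."""
    # pairwise Euclidean distance: each descriptor to each visual word
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)                       # hard assignment
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```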

  14. A Review of Content Based Image Classification using Machine Learning Approach

    OpenAIRE

    Sandeep Kumar; Zeeshan Khan; Anurag jain

    2012-01-01

    Image classification is a vital field of research in computer vision. The increasing volume of multimedia data, remote sensing imagery, and web photo galleries requires the categorization of different images for proper user retrieval. Various researchers apply different approaches to image classification, such as segmentation, clustering, and machine learning methods. Image content such as color, texture, shape, and size plays an important role in semantic image classification....

  15. Microcalcification classification assisted by content-based image retrieval for breast cancer diagnosis.

    Science.gov (United States)

    Wei, Liyang; Yang, Yongyi; Nishikawa, Robert M

    2009-06-01

    In this paper we propose a microcalcification classification scheme, assisted by content-based mammogram retrieval, for breast cancer diagnosis. We recently developed a machine learning approach for mammogram retrieval where the similarity measure between two lesion mammograms was modeled after expert observers. In this work we investigate how to use retrieved similar cases as references to improve the performance of a numerical classifier. Our rationale is that by adaptively incorporating local proximity information into a classifier, it can help to improve its classification accuracy, thereby leading to an improved "second opinion" to radiologists. Our experimental results on a mammogram database demonstrate that the proposed retrieval-driven approach with an adaptive support vector machine (SVM) could improve the classification performance from 0.78 to 0.82 in terms of the area under the ROC curve.
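    The areas under the ROC curve reported above (0.78 vs. 0.82) can be computed from classifier scores with the rank-sum (Mann-Whitney) formulation. A generic sketch, not the authors' evaluation code:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum statistic."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    # average ranks over tied scores
    for s in np.unique(scores):
        m = scores == s
        ranks[m] = ranks[m].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```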

  16. Content-based image retrieval using a signature graph and a self-organizing map

    Directory of Open Access Journals (Sweden)

    Van Thanh The

    2016-06-01

    Full Text Available In order to effectively retrieve images from a large database, a method of creating a content-based image retrieval (CBIR) system is applied based on a binary index which aims to describe features of an image object of interest. This index is called the binary signature and builds input data for the problem of matching similar images. To extract the object of interest, we propose an image segmentation method on the basis of low-level visual features including the color and texture of the image. These features are extracted at each block of the image by the discrete wavelet frame transform and the appropriate color space. On the basis of a segmented image, we create a binary signature to describe the location, color and shape of the objects of interest. In order to match similar images, we provide a similarity measure between the images based on binary signatures. Then, we present a CBIR model which combines a signature graph and a self-organizing map to cluster and store similar images. To illustrate the proposed method, experiments on image databases are reported, including COREL, Wang and MSRDI.
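    A similarity measure between binary signatures can be as simple as the fraction of matching bits (one minus the normalized Hamming distance). This is an illustrative sketch, not the paper's exact measure:

```python
import numpy as np

def signature_similarity(sig_a, sig_b):
    """Similarity of two binary signatures: fraction of matching bits."""
    a, b = np.asarray(sig_a, bool), np.asarray(sig_b, bool)
    return float(np.mean(a == b))

def rank_by_signature(query_sig, db_sigs):
    """Order database images by decreasing signature similarity."""
    sims = np.array([signature_similarity(query_sig, s) for s in db_sigs])
    return np.argsort(-sims)
```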

  17. A classification framework for content-based extraction of biomedical objects from hierarchically decomposed images

    Science.gov (United States)

    Thies, Christian; Schmidt Borreda, Marcel; Seidl, Thomas; Lehmann, Thomas M.

    2006-03-01

    Multiscale analysis provides a complete hierarchical partitioning of images into visually plausible regions. Each of them is formally characterized by a feature vector describing shape, texture and scale properties. Consequently, object extraction becomes a classification of the feature vectors. Classifiers are trained by relevant and irrelevant regions labeled as object and remaining partitions, respectively. A trained classifier is applicable to yet uncategorized partitionings to identify the corresponding region's classes. Such an approach enables retrieval of a-priori unknown objects within a point-and-click interface. In this work, the classification pipeline consists of a framework for data selection, feature selection, classifier training, classification of testing data, and evaluation. According to the no-free-lunch theorem of supervised learning, the appropriate classification pipeline is determined experimentally. Therefore, each of the steps is varied by state-of-the-art methods and the respective classification quality is measured. Selection of training data from the ground truth is supported by bootstrapping, variance pooling, virtual training data, and cross validation. Feature selection for dimension reduction is performed by linear discriminant analysis, principal component analysis, and greedy selection. Competing classifiers are k-nearest-neighbor, Bayesian classifier, and the support vector machine. Quality is measured by precision and recall to reflect the retrieval task. A set of 105 hand radiographs from clinical routine serves as ground truth, where the metacarpal bones have been labeled manually. In total, 368 out of 39,017 regions are identified as relevant. Initial experiments on feature selection with the support vector machine obtained recall, precision, and F-measure of 0.58, 0.67, and 0.62, respectively.
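    Precision, recall, and F-measure as used in the retrieval evaluation above follow the standard set-based definitions (note that 2 · 0.67 · 0.58 / (0.67 + 0.58) ≈ 0.62, matching the reported F-measure). A minimal sketch:

```python
def retrieval_scores(retrieved, relevant):
    """Precision, recall, and F-measure for a set-based retrieval task."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)            # true positives
    precision = tp / len(retrieved)
    recall = tp / len(relevant)
    f = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f
```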

  18. Predicting apple tree leaf nitrogen content based on hyperspectral applying wavelet and wavelet packet analysis

    Science.gov (United States)

    Zhang, Yao; Zheng, Lihua; Li, Minzan; Deng, Xiaolei; Sun, Hong

    2012-11-01

    The visible and NIR spectral reflectance of apple leaves was measured with a spectrophotometer in the fruit-bearing, fruit-falling and fruit-maturing periods respectively, and the nitrogen content of each sample was measured in the lab. The correlation between the nitrogen content of apple tree leaves and their hyperspectral data was analyzed. Then the low frequency signal and the high frequency noise-reduction signal were extracted by using a wavelet packet decomposition algorithm. At the same time, the original spectral reflectance was denoised taking advantage of wavelet filtering technology. The principal component spectra were then collected after PCA (Principal Component Analysis). The model built on the noise-reduction principal component spectra reached higher accuracy than the other three models in the fruit-bearing and physiological fruit-maturing periods. Their calibration R2 reached 0.9529 and 0.9501, and validation R2 reached 0.7285 and 0.7303 respectively. In the fruit-falling period the model based on the low frequency principal component spectra reached the highest accuracy, with a calibration R2 of 0.9921 and a validation R2 of 0.6234. The results showed that using the wavelet packet algorithm is an effective way to improve the prediction of apple tree nitrogen content based on hyperspectral analysis.
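    The wavelet filtering step can be illustrated with a one-level Haar decomposition that soft-thresholds the high frequency detail coefficients before reconstruction. The paper uses wavelet packet decomposition; this sketch is a simplified stand-in (assumes an even-length 1-D spectrum):

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising of a 1-D reflectance spectrum:
    decompose, soft-threshold the detail coefficients, reconstruct."""
    x = np.asarray(signal, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)     # low frequency part
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)     # high frequency part
    # soft thresholding shrinks noise-dominated detail coefficients
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0)
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```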

  19. A Novel Feature Extraction Technique Using Binarization of Bit Planes for Content Based Image Classification

    Directory of Open Access Journals (Sweden)

    Sudeep Thepade

    2014-01-01

    Full Text Available A number of techniques have been proposed earlier for feature extraction using image binarization. Efficiency of the techniques was dependent on proper threshold selection for the binarization method. In this paper, a new feature extraction technique using image binarization has been proposed. The technique has binarized the significant bit planes of an image by selecting local thresholds. The proposed algorithm has been tested on a public dataset and has been compared with existing widely used techniques using binarization for extraction of features. It has been inferred that the proposed method has outclassed all the existing techniques and has shown consistent classification performance.
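    Extracting bit planes from an 8-bit image is a bit-shift operation. The sketch below uses simple per-plane statistics as stand-in features for classification; the paper's local threshold selection is not reproduced:

```python
import numpy as np

def bitplane_features(img, planes=(7, 6, 5)):
    """Extract the most significant bit planes of an 8-bit image and
    use per-plane mean/std statistics as the feature vector."""
    img = np.asarray(img, np.uint8)
    feats = []
    for b in planes:
        plane = ((img >> b) & 1).astype(float)    # binary plane for bit b
        feats.extend([plane.mean(), plane.std()])
    return np.array(feats)
```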

  20. Content-Based High-Resolution Remote Sensing Image Retrieval via Unsupervised Feature Learning and Collaborative Affinity Metric Fusion

    Directory of Open Access Journals (Sweden)

    Yansheng Li

    2016-08-01

    Full Text Available With the urgent demand for automatic management of large numbers of high-resolution remote sensing images, content-based high-resolution remote sensing image retrieval (CB-HRRS-IR) has attracted much research interest. Accordingly, this paper proposes a novel high-resolution remote sensing image retrieval approach via multiple feature representation and collaborative affinity metric fusion (IRMFRCAMF). In IRMFRCAMF, we design four unsupervised convolutional neural networks with different layers to generate four types of unsupervised features from the fine level to the coarse level. In addition to these four types of unsupervised features, we also implement four traditional feature descriptors, including local binary pattern (LBP), gray-level co-occurrence matrix (GLCM), maximal response 8 (MR8), and scale-invariant feature transform (SIFT). In order to fully incorporate the complementary information among multiple features of one image and the mutual information across auxiliary images in the image dataset, this paper advocates collaborative affinity metric fusion to measure the similarity between images. The performance evaluation of high-resolution remote sensing image retrieval is implemented on two public datasets, the UC Merced (UCM) dataset and the Wuhan University (WH) dataset. Large numbers of experiments show that our proposed IRMFRCAMF can significantly outperform the state-of-the-art approaches.

  1. Ad-hoc Content-based Queries and Data Analysis for Virtual Observatories, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Aquilent, Inc. proposes to support ad-hoc, content-based query and data retrieval from virtual observatories (VxO) by developing 1) Higher Order Query Services that...

  2. A rapid automatic analyzer and its methodology for effective bentonite content based on image recognition technology

    Directory of Open Access Journals (Sweden)

    Wei Long

    2016-09-01

    Full Text Available Fast and accurate determination of effective bentonite content in used clay bonded sand is very important for selecting the correct mixing ratio and mixing process to obtain high-performance molding sand. Currently, the effective bentonite content is determined by testing the methylene blue absorbed in used clay bonded sand, which is usually a manual operation with some disadvantages including a complicated process, long testing time and low accuracy. A rapid automatic analyzer of the effective bentonite content in used clay bonded sand was developed based on image recognition technology. The instrument consists of auto stirring, auto liquid removal, auto titration, step-rotation and image acquisition components, and a processor. The principle of the image recognition method is first to decompose the color images into three-channel gray images based on the photosensitive degree difference of the light blue and dark blue in the three channels of red, green and blue, then to make the gray values subtraction calculation and gray level transformation of the gray images, and finally, to extract the outer circle light blue halo and the inner circle blue spot and calculate their area ratio. The titration process can be judged to reach the end-point while the area ratio is higher than the setting value.
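    The end-point test above compares the pixel area of the light-blue outer halo against the dark-blue inner spot. A hypothetical sketch of such an area-ratio check (the channel logic, the brightness threshold of 150, and the end-point ratio are illustrative assumptions, not the instrument's calibrated values):

```python
import numpy as np

def endpoint_reached(rgb, end_ratio=0.5):
    """Judge the titration end-point from a drop-test photo by the
    area ratio of the light-blue halo to the dark-blue spot."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    blue = (b > r) & (b > g)          # blue channel dominates
    bright = b > 150                  # halo is brighter than the spot
    halo = blue & bright
    spot = blue & ~bright
    ratio = halo.sum() / max(spot.sum(), 1)
    return ratio > end_ratio
```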

  3. Developing a comprehensive system for content-based retrieval of image and text data from a national survey

    Science.gov (United States)

    Antani, Sameer K.; Natarajan, Mukil; Long, Jonathan L.; Long, L. Rodney; Thoma, George R.

    2005-04-01

    The article describes the status of our ongoing R&D at the U.S. National Library of Medicine (NLM) towards the development of an advanced multimedia database biomedical information system that supports content-based image retrieval (CBIR). NLM maintains a collection of 17,000 digitized spinal X-rays along with text survey data from the Second National Health and Nutritional Examination Survey (NHANES II). These data serve as a rich data source for epidemiologists and researchers of osteoarthritis and musculoskeletal diseases. It is currently possible to access these through text keyword queries using our Web-based Medical Information Retrieval System (WebMIRS). CBIR methods developed specifically for biomedical images could offer direct visual searching of these images by means of example image or user sketch. We are building a system which supports hybrid queries that have text and image-content components. R&D goals include developing algorithms for robust image segmentation for localizing and identifying relevant anatomy, labeling the segmented anatomy based on its pathology, developing suitable indexing and similarity matching methods for images and image features, and associating the survey text information for query and retrieval along with the image data. Some highlights of the system developed in MATLAB and Java are: use of a networked or local centralized database for text and image data; flexibility to incorporate new research work; provides a means to control access to system components under development; and use of XML for structured reporting. The article details the design, features, and algorithms in this third revision of this prototype system, CBIR3.

  4. Stochastic Optimized Relevance Feedback Particle Swarm Optimization for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    Full Text Available One of the major challenges for CBIR is to bridge the gap between low level features and high level semantics according to the need of the user. To overcome this gap, relevance feedback (RF) coupled with support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of the SVM based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely, PSO-SVM-RF, which combines SVM based RF with particle swarm optimization (PSO). The aims of this proposed technique are to enhance the performance of SVM based RF and also to minimize the user interaction with the system by minimizing the number of RF iterations. PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The results obtained from the experiments showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that with the PSO-SVM-RF technique a high accuracy rate is achieved in a small number of iterations.
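    The PSO component updates each particle's velocity from three terms: inertia, attraction to its personal best, and attraction to the global best. A canonical update step (parameter values and the fixed random seed are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update: inertia plus cognitive (personal best)
    and social (global best) attraction terms."""
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```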

  5. İçerik Tabanlı Görüntü Erişimi / Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    İrem Soydal

    2005-10-01

    Full Text Available Digital image collections are expanding day by day, and image retrieval becomes even harder. Both individuals and institutions encounter serious problems when building their image archives and later when retrieving the archived images. Visual information cannot be fully expressed in words and normally depends on intuitive human perception. Consequently, this causes us to find the plain text-based information inadequate, and as a result, increases the value of the visual content. However describing, storing and retrieving the visual content is not simple. The research activities in this area, which escalated in the 90’s, have brought several solutions to the understanding, design and development of the image retrieval systems. This article reviews the studies on image retrieval systems in general, and content-based image retrieval systems specifically. The article also examines the features of content-based image retrieval systems.

  6. A Key Gene, PLIN1, Can Affect Porcine Intramuscular Fat Content Based on Transcriptome Analysis

    Directory of Open Access Journals (Sweden)

    Bojiang Li

    2018-04-01

    Full Text Available Intramuscular fat (IMF content is an important indicator for meat quality evaluation. However, the key genes and molecular regulatory mechanisms affecting IMF deposition remain unclear. In the present study, we identified 75 differentially expressed genes (DEGs between the higher (H and lower (L IMF content of pigs using transcriptome analysis, of which 27 were upregulated and 48 were downregulated. Notably, Kyoto Encyclopedia of Genes and Genomes (KEGG enrichment analysis indicated that the DEG perilipin-1 (PLIN1 was significantly enriched in the fat metabolism-related peroxisome proliferator-activated receptor (PPAR signaling pathway. Furthermore, we determined the expression patterns and functional role of porcine PLIN1. Our results indicate that PLIN1 was highly expressed in porcine adipose tissue, and its expression level was significantly higher in the H IMF content group when compared with the L IMF content group, and expression was increased during adipocyte differentiation. Additionally, our results confirm that PLIN1 knockdown decreases the triglyceride (TG level and lipid droplet (LD size in porcine adipocytes. Overall, our data identify novel candidate genes affecting IMF content and provide new insight into PLIN1 in porcine IMF deposition and adipocyte differentiation.

  7. Content-based image retrieval using scale invariant feature transform and gray level co-occurrence matrix

    Science.gov (United States)

    Srivastava, Prashant; Khare, Manish; Khare, Ashish

    2017-06-01

    The rapid growth of different types of images has posed a great challenge to the scientific fraternity. As the images are increasing everyday, it is becoming a challenging task to organize the images for efficient and easy access. The field of image retrieval attempts to solve this problem through various techniques. This paper proposes a novel technique of image retrieval by combining Scale Invariant Feature Transform (SIFT) and Co-occurrence matrix. For construction of feature vector, SIFT descriptors of gray scale images are computed and normalized using z-score normalization followed by construction of Gray-Level Co-occurrence Matrix (GLCM) of normalized SIFT keypoints. The constructed feature vector is matched with those of images in database to retrieve visually similar images. The proposed method is tested on Corel-1K dataset and the performance is measured in terms of precision and recall. The experimental results demonstrate that the proposed method outperforms some of the other state-of-the-art methods.
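    A gray-level co-occurrence matrix counts how often pairs of gray levels occur at a fixed pixel displacement, normalized to a joint probability table. A minimal sketch for one displacement (the paper builds the GLCM over normalized SIFT keypoints rather than raw pixels):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for displacement (dx, dy),
    normalized so the entries sum to 1."""
    img = np.asarray(img)
    h, w = img.shape
    cm = np.zeros((levels, levels))
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            cm[img[y, x], img[y + dy, x + dx]] += 1   # count the pair
    return cm / cm.sum()
```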

  8. Facilitating medical information search using Google Glass connected to a content-based medical image retrieval system.

    Science.gov (United States)

    Widmer, Antoine; Schaer, Roger; Markonis, Dimitrios; Muller, Henning

    2014-01-01

    Wearable computing devices are starting to change the way users interact with computers and the Internet. Among them, Google Glass includes a small screen located in front of the right eye, a camera filming in front of the user and a small computing unit. Google Glass has the advantage to provide online services while allowing the user to perform tasks with his/her hands. These augmented glasses uncover many useful applications, also in the medical domain. For example, Google Glass can easily provide video conference between medical doctors to discuss a live case. Using these glasses can also facilitate medical information search by allowing the access of a large amount of annotated medical cases during a consultation in a non-disruptive fashion for medical staff. In this paper, we developed a Google Glass application able to take a photo and send it to a medical image retrieval system along with keywords in order to retrieve similar cases. As a preliminary assessment of the usability of the application, we tested the application under three conditions (images of the skin; printed CT scans and MRI images; and CT and MRI images acquired directly from an LCD screen) to explore whether using Google Glass affects the accuracy of the results returned by the medical image retrieval system. The preliminary results show that despite minor problems due to the relative stability of the Google Glass, images can be sent to and processed by the medical image retrieval system and similar images are returned to the user, potentially helping in the decision making process.

  9. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biannual event for more than 30 years and over the years it has nurtured a world-class regional research and development... The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, 3D vision, to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries...

  10. Content-based retrieval of brain tumor in contrast-enhanced MRI images using tumor margin information and learned distance metric.

    Science.gov (United States)

    Yang, Wei; Feng, Qianjin; Yu, Mei; Lu, Zhentai; Gao, Yang; Xu, Yikai; Chen, Wufan

    2012-11-01

    A content-based image retrieval (CBIR) method for T1-weighted contrast-enhanced MRI (CE-MRI) images of brain tumors is presented for diagnosis aid. The method is thoroughly evaluated on a large image dataset. Using the tumor region as a query, the authors' CBIR system attempts to retrieve tumors of the same pathological category. Aside from commonly used features such as intensity, texture, and shape features, the authors use a margin information descriptor (MID), which is capable of describing the characteristics of tissue surrounding a tumor, for representing image contents. In addition, the authors designed a distance metric learning algorithm called Maximum mean average Precision Projection (MPP) to maximize the smooth approximated mean average precision (mAP) to optimize retrieval performance. The effectiveness of MID and MPP algorithms was evaluated using a brain CE-MRI dataset consisting of 3108 2D scans acquired from 235 patients with three categories of brain tumors (meningioma, glioma, and pituitary tumor). By combining MID and other features, the mAP of retrieval increased by more than 6% with the learned distance metrics. The distance metric learned by MPP significantly outperformed the other two existing distance metric learning methods in terms of mAP. The CBIR system using the proposed strategies achieved a mAP of 87.3% and a precision of 89.3% when top 10 images were returned by the system. Compared with scale-invariant feature transform, the MID, which uses the intensity profile as descriptor, achieves better retrieval performance. Incorporating tumor margin information represented by MID with the distance metric learned by the MPP algorithm can substantially improve the retrieval performance for brain tumors in CE-MRI.
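    Mean average precision, the headline retrieval metric above, averages precision at each rank where a relevant image appears, then averages over queries. A generic sketch, not the authors' MPP optimization code:

```python
import numpy as np

def average_precision(ranked_relevance):
    """AP for one query: mean of precision@k over ranks of relevant items.
    Input is a 0/1 relevance vector in retrieval order."""
    rel = np.asarray(ranked_relevance, float)
    if rel.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

def mean_average_precision(all_rankings):
    """mAP: average of per-query AP values."""
    return float(np.mean([average_precision(r) for r in all_rankings]))
```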

  11. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.

  12. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental to building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications.  Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience as  contributions were written by both clinicians and researchers, which reflects the inte...

  13. Advanced biomedical image analysis

    CERN Document Server

    Haidekker, Mark A

    2010-01-01

    "This book covers the four major areas of image processing: Image enhancement and restoration, image segmentation, image quantification and classification, and image visualization. Image registration, storage, and compression are also covered. The text focuses on recently developed image processing and analysis operators and covers topical research"--Provided by publisher.

  14. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward

  15. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides...... reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques....

  16. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    In contrast to classical Fourier analysis, time–frequency analysis is concerned with localized Fourier transforms. Gabor analysis is an important branch of time–frequency analysis. Although significantly different, it shares with the wavelet transform methods the ability to describe the smoothness......, it characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase space domain introduces an enormous amount of data...

  17. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing...... to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case....

  18. Content Based Searching for INIS

    International Nuclear Information System (INIS)

    Jain, V.; Jain, R.K.

    2016-01-01

Full text: Whatever a user wants is available on the internet, but to retrieve the information efficiently, a multilingual and most-relevant document search engine is a must. Most current search engines are word based or pattern based: they do not consider the meaning of the query posed to them, rely purely on its keywords, offer no support for multilingual queries, and do not dismiss non-relevant results. Current information-retrieval techniques either rely on an encoding process, using a certain perspective or classification scheme, to describe a given item, or perform a full-text analysis, searching for user-specified words. Neither case guarantees content matching, because an encoded description might reflect only part of the content, and the mere occurrence of a word does not necessarily reflect the document’s content. For general documents, there doesn’t yet seem to be a much better option than lazy full-text analysis, followed by manually going through those endless results pages. In contrast, a new search engine should extract the meaning of the query and then perform the search based on this extracted meaning. It should also employ Interlingua-based machine translation technology to present information in the language of choice of the user. (author)

  19. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applica­ tions including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Anal­ ysis and Machine Intelligence was published in November 1980. The IEEE Com­ puter magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  20. A Probabilistic Framework for Content-Based Diagnosis of Retinal Disease

    Energy Technology Data Exchange (ETDEWEB)

    Tobin Jr, Kenneth William [ORNL; Abdelrahman, Mohamed A [ORNL; Chaum, Edward [ORNL; Muthusamy Govindasamy, Vijaya Priya [ORNL; Karnowski, Thomas Paul [ORNL

    2007-01-01

    Diabetic retinopathy is the leading cause of blindness in the working age population around the world. Computer assisted analysis has the potential to assist in the early detection of diabetes by regular screening of large populations. The widespread availability of digital fundus cameras today is resulting in the accumulation of large image archives of diagnosed patient data that captures historical knowledge of retinal pathology. Through this research we are developing a content-based image retrieval method to verify our hypothesis that retinal pathology can be identified and quantified from visually similar retinal images in an image archive. We will present diagnostic results for specificity and sensitivity on a population of 395 fundus images representing the normal fundus and 14 stratified disease states.

  1. Statistical Image Analysis of Longitudinal RAVENS Images

    Directory of Open Access Journals (Sweden)

Seonjoo Lee

    2015-10-01

Full Text Available Regional analysis of volumes examined in normalized space (RAVENS) are transformation images used in the study of brain morphometry. In this paper, RAVENS images are analyzed using a longitudinal variant of voxel-based morphometry (VBM) and longitudinal functional principal component analysis (LFPCA) for high-dimensional images. We demonstrate that the latter overcomes the limitations of standard longitudinal VBM analyses, which do not separate registration errors from other longitudinal changes and baseline patterns. This is especially important in contexts where longitudinal changes are only a small fraction of the overall observed variability, which is typical in normal aging and many chronic diseases. Our simulation study shows that LFPCA effectively separates registration error from baseline and longitudinal signals of interest by decomposing RAVENS images measured at multiple visits into three components: a subject-specific imaging random intercept that quantifies the cross-sectional variability, a subject-specific imaging slope that quantifies the irreversible changes over multiple visits, and a subject-visit specific imaging deviation. We describe strategies to identify baseline/longitudinal variation and registration errors combined with covariates of interest. Our analysis suggests that specific regional brain atrophy and ventricular enlargement are associated with multiple sclerosis (MS) disease progression.
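LFPCA itself decomposes jointly across voxels and subjects; as a deliberately simplified, per-voxel caricature of the intercept/slope/deviation split described above (toy data and ordinary least squares standing in for the paper's actual method):

```python
def fit_line(times, values):
    """Ordinary least-squares fit of value ~ intercept + slope * time."""
    n = len(times)
    tbar = sum(times) / n
    vbar = sum(values) / n
    sxx = sum((t - tbar) ** 2 for t in times)
    slope = sum((t - tbar) * (v - vbar) for t, v in zip(times, values)) / sxx
    return vbar - slope * tbar, slope

def decompose_voxel(visits):
    """Split one voxel's longitudinal (time, value) pairs into an intercept
    (cross-sectional level), a slope (longitudinal change), and per-visit
    deviations (everything else, including registration error)."""
    times = [t for t, _ in visits]
    values = [v for _, v in visits]
    b0, b1 = fit_line(times, values)
    deviations = [v - (b0 + b1 * t) for t, v in zip(times, values)]
    return b0, b1, deviations

# Toy voxel shrinking linearly over three visits: slope -0.1, no deviation.
b0, b1, dev = decompose_voxel([(0, 1.0), (1, 0.9), (2, 0.8)])
```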

  2. Document image analysis: A primer

    Indian Academy of Sciences (India)


    Abstract. Document image analysis refers to algorithms and techniques that are applied to images of documents to obtain a computer-readable description from pixel data. A well-known document image analysis product is the Optical Character. Recognition (OCR) software that recognizes characters in a scanned document ...

  3. Medical image registration for analysis

    International Nuclear Information System (INIS)

    Petrovic, V.

    2006-01-01

    Full text: Image registration techniques represent a rich family of image processing and analysis tools that aim to provide spatial correspondences across sets of medical images of similar and disparate anatomies and modalities. Image registration is a fundamental and usually the first step in medical image analysis and this paper presents a number of advanced techniques as well as demonstrates some of the advanced medical image analysis techniques they make possible. A number of both rigid and non-rigid medical image alignment algorithms of equivalent and merely consistent anatomical structures respectively are presented. The algorithms are compared in terms of their practical aims, inputs, computational complexity and level of operator (e.g. diagnostician) interaction. In particular, the focus of the methods discussion is placed on the applications and practical benefits of medical image registration. Results of medical image registration on a number of different imaging modalities and anatomies are presented demonstrating the accuracy and robustness of their application. Medical image registration is quickly becoming ubiquitous in medical imaging departments with the results of such algorithms increasingly used in complex medical image analysis and diagnostics. This paper aims to demonstrate at least part of the reason why

  4. Feature representation and compression for content-based retrieval

    Science.gov (United States)

    Xie, Hua; Ortega, Antonio

    2000-12-01

In semantic content-based image/video browsing and navigation systems, efficient mechanisms to represent and manage a large collection of digital images/videos are needed. Traditional keyword-based indexing describes the content of multimedia data through annotations such as text or keywords extracted manually by the user from a controlled vocabulary. This textual indexing technique lacks the flexibility of satisfying various kinds of queries requested by database users and also requires a huge amount of work for updating the information. Current content-based retrieval systems often extract a set of features such as color, texture, shape, motion, speed, and position from the raw multimedia data automatically and store them as content descriptors. This content-based metadata differs from text-based metadata in that it supports wider varieties of queries and can be extracted automatically, thus providing a promising approach for efficient database access and management. When the raw data volume grows very large, explicitly extracting the content information and storing it as metadata along with the images will improve querying performance, since metadata requires much less storage than the raw image data and thus will be easier to manipulate. In this paper we maintain that storing metadata together with images will enable effective information management and efficient remote query. We also show, using a texture classification example, that this side information can be compressed while guaranteeing that the desired query accuracy is satisfied. We argue that the compact representation of the image contents not only reduces significantly the storage and transmission rate requirement, but also facilitates certain types of queries. Algorithms are developed for optimized compression of this texture feature metadata given that the goal is to maximize the classification performance for a given rate budget.
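As a toy illustration of storing compact content descriptors as metadata, the sketch below computes a normalized grey-level histogram for an 8-bit image and quantizes it to small integer codes; the bin and level counts are arbitrary choices, not values from the paper:

```python
def histogram_descriptor(pixels, bins=8):
    """Normalized grey-level histogram of an 8-bit image (flat pixel list)."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // 256] += 1  # map 0..255 to bin index 0..bins-1
    n = len(pixels)
    return [h / n for h in hist]

def quantize(descriptor, levels=16):
    """Lossy compression: store each bin value as a small integer code."""
    return [round(x * (levels - 1)) for x in descriptor]

def dequantize(codes, levels=16):
    """Recover approximate bin values from the stored codes."""
    return [c / (levels - 1) for c in codes]

pixels = [0, 10, 200, 255, 128, 100, 90, 30]
desc = histogram_descriptor(pixels)
codes = quantize(desc)  # compact metadata stored alongside the image
```

The quantized codes need only a few bits per bin, yet any query that compares histograms tolerates the bounded rounding error, which is the trade-off the paper optimizes for a given rate budget.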

  5. English Institute Content-Based Program Manual.

    Science.gov (United States)

    Canada Coll., Redwood City, CA.

    Instructional materials designed for the content-based English as a Second Language program at Canada College's English Institute (EI) are presented in this manual. First, an introduction provides background information on the college, its student body, and the program. Drawing on relevant second language theory, this section offers a definition…

  6. ANALYSIS OF FUNDUS IMAGES

    DEFF Research Database (Denmark)

    2000-01-01

A method classifying objects in an image as respective arterial or venous vessels comprising: identifying pixels of the said modified image which are located on a line object, determining which of the said image points is associated with a crossing point or a bifurcation of the respective line object

  7. Hyperspectral image analysis. A tutorial

    International Nuclear Information System (INIS)

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of Hyperspectral Image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step by step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.
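A first concrete step in the workflow described above is unfolding the hyperspectral cube into a pixels-by-wavelengths matrix that chemometric models such as PLS-DA can consume. A minimal sketch, with plain nested lists standing in for a real data cube:

```python
def unfold(cube):
    """Unfold an (x, y, wavelength) hyperspectral cube into a 2-D matrix of
    shape (x*y, wavelengths), one spectrum per row -- the layout on which
    multivariate models such as PLS-DA operate."""
    return [spectrum for row in cube for spectrum in row]

def refold(matrix, x, y):
    """Inverse step: map per-pixel results back onto the image grid."""
    return [matrix[i * y:(i + 1) * y] for i in range(x)]

# A 2x2 image with 3 wavelengths per pixel.
cube = [[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
        [[0.7, 0.8, 0.9], [1.0, 1.1, 1.2]]]
X = unfold(cube)  # 4 rows (pixels) of 3 values (wavelengths)
```

After classification, `refold` turns the per-row class predictions back into a classification map with the original spatial layout, which is the "final digital image processing" stage the tutorial refers to.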

  8. Hyperspectral image analysis. A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Amigo, José Manuel, E-mail: jmar@food.ku.dk [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Babamoradi, Hamid [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Elcoroaristizabal, Saioa [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Chemical and Environmental Engineering Department, School of Engineering, University of the Basque Country, Alameda de Urquijo s/n, E-48013 Bilbao (Spain)

    2015-10-08

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of Hyperspectral Image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step by step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.

  9. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    Directory of Open Access Journals (Sweden)

    Alfonso Baldi

    2010-03-01

    Full Text Available Dermoscopy (dermatoscopy, epiluminescence microscopy is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs, allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis. This diagnostic tool permits the recognition of morphologic structures not visible by the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning-curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval systems (CBIR.

  10. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purpose. Two main frameworks are  described: marked point process and random closed sets models. We derive the main issues for defining an appropriate model. The algorithms for sampling and optimizing the models as well as for estimating parameters are reviewed.  Numerous applications, covering remote sensing images, biological and medical imaging, are detailed.  This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  11. Image processing, analysis, measurement, and quality

    International Nuclear Information System (INIS)

    Hughes, G.W.; Mantey, P.E.; Rogowitz, B.E.

    1988-01-01

    Topics covered in these proceedings include: image aquisition, image processing and analysis, electronic vision, IR imaging, measurement and quality, spatial vision and spatial sampling, and contrast-detail curve measurement and analysis in radiological imaging

  12. Multispectral analysis of multimodal images

    International Nuclear Information System (INIS)

    Kvinnsland, Yngve; Brekke, Njaal; Taxt, Torfinn M.; Gruener, Renate

    2009-01-01

    An increasing number of multimodal images represent a valuable increase in available image information, but at the same time it complicates the extraction of diagnostic information across the images. Multispectral analysis (MSA) has the potential to simplify this problem substantially as unlimited number of images can be combined, and tissue properties across the images can be extracted automatically. Materials and methods. We have developed a software solution for MSA containing two algorithms for unsupervised classification, an EM-algorithm finding multinormal class descriptions and the k-means clustering algorithm, and two for supervised classification, a Bayesian classifier using multinormal class descriptions and a kNN-algorithm. The software has an efficient user interface for the creation and manipulation of class descriptions, and it has proper tools for displaying the results. Results. The software has been tested on different sets of images. One application is to segment cross-sectional images of brain tissue (T1- and T2-weighted MR images) into its main normal tissues and brain tumors. Another interesting set of images are the perfusion maps and diffusion maps, derived images from raw MR images. The software returns segmentation that seem to be sensible. Discussion. The MSA software appears to be a valuable tool for image analysis with multimodal images at hand. It readily gives a segmentation of image volumes that visually seems to be sensible. However, to really learn how to use MSA, it will be necessary to gain more insight into what tissues the different segments contain, and the upcoming work will therefore be focused on examining the tissues through for example histological sections
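As an illustration of the unsupervised route mentioned above, here is a minimal k-means clustering of voxel feature vectors (one coordinate per modality, e.g. T1 and T2 intensities); this is a generic textbook implementation, not the software described in the record:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: cluster voxel feature vectors into k tissue classes."""
    rng = random.Random(seed)
    centers = [list(c) for c in rng.sample(points, k)]
    for _ in range(iters):
        # Assignment step: each voxel joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Update step: move each center to the mean of its members.
        for i, members in enumerate(clusters):
            if members:
                centers[i] = [sum(vals) / len(members)
                              for vals in zip(*members)]
    return centers, clusters

# Two well-separated "tissues" in a toy 2-modality feature space.
voxels = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
centers, clusters = kmeans(voxels, 2)
```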

  13. Methods in quantitative image analysis.

    Science.gov (United States)

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is made by a camera. The most modern types include a frame-grabber, an analog-to-digital converter that turns the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits. Eight bits are summarised in one byte. Therefore, grey values can take one of 256 (2^8) values, from 0 to 255. The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value ...
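Two of the operations the abstract describes, shading correction by background subtraction and contrast stretching by expanding the grey-value histogram, can be sketched in a few lines (grey values as plain nested lists; the clipping at 0 is an implementation choice, not part of the abstract):

```python
def shading_correct(image, background):
    """Shading correction: subtract a background (white) image pixel by
    pixel to compensate for non-uniform illumination."""
    return [[max(0, p - b) for p, b in zip(irow, brow)]
            for irow, brow in zip(image, background)]

def stretch_contrast(image, out_max=255):
    """Spread the grey-value histogram over the full available range."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:  # constant image: nothing to stretch
        return image
    return [[(p - lo) * out_max // (hi - lo) for p in row] for row in image]

# A low-contrast 2x2 image spanning grey values 50..80.
print(stretch_contrast([[50, 60], [70, 80]]))  # -> [[0, 85], [170, 255]]
```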

  14. Content-based management service for medical videos.

    Science.gov (United States)

    Mendi, Engin; Bayrak, Coskun; Cecen, Songul; Ermisoglu, Emre

    2013-01-01

Development of health information technology has had a dramatic impact on the efficiency and quality of medical care. Developing interoperable health information systems for healthcare providers has the potential to improve the quality and equitability of patient-centered healthcare. In this article, we describe an automated content-based medical video analysis and management service that provides convenience and ease in accessing the relevant medical video content without sequential scanning. The system facilitates effective temporal video segmentation and content-based visual information retrieval that enable a more reliable understanding of medical video content. The system is implemented as a Web- and mobile-based service and has the potential to offer a knowledge-sharing platform for the purpose of efficient medical video content access.

  15. Flightspeed Integral Image Analysis Toolkit

    Science.gov (United States)

    Thompson, David R.

    2009-01-01

The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); written entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
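The integral-image structure the toolkit relies on can be sketched as follows; this is the standard summed-area-table recurrence, not FIIAT's actual C code:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x], built in one
    pass with ii[y][x] = img[y][x] + ii[y-1][x] + ii[y][x-1] - ii[y-1][x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ii[y][x] = (img[y][x]
                        + (ii[y - 1][x] if y else 0)
                        + (ii[y][x - 1] if x else 0)
                        - (ii[y - 1][x - 1] if y and x else 0))
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum over an inclusive rectangle in O(1) using at most four lookups."""
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5+6+8+9 = 28
```

Because every rectangular sum costs four table lookups regardless of the rectangle's size, descriptors built from many subwindow sums become cheap enough for real-time use on integer-only hardware.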

  16. Image formation and image analysis in electron microscopy

    International Nuclear Information System (INIS)

    Heel, M. van.

    1981-01-01

    This thesis covers various aspects of image formation and image analysis in electron microscopy. The imaging of relatively strong objects in partially coherent illumination, the coherence properties of thermionic emission sources and the detection of objects in quantum noise limited images are considered. IMAGIC, a fast, flexible and friendly image analysis software package is described. Intelligent averaging of molecular images is discussed. (C.F.)

  17. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book ”Image and Video Processing”, second edition by Thomas Moeslund. The aim...... of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  18. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    2011-01-01

    of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code......This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book ”Image and Video Processing”, second edition by Thomas Moeslund. The aim...

  19. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as, anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  20. Artificial intelligence and medical imaging. Expert systems and image analysis

    International Nuclear Information System (INIS)

    Wackenheim, A.; Zoellner, G.; Horviller, S.; Jacqmain, T.

    1987-01-01

    This paper gives an overview of the existing systems for automated image analysis and interpretation in medical imaging, especially in radiology. The example of ORFEVRE, the system for the analysis of CAT-scan images of the cervical triplet (C3-C5) by image analysis and a subsequent expert system, is given and discussed in detail. Possible extensions are described. [fr]

  1. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This report presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  2. Image analysis for DNA sequencing

    International Nuclear Information System (INIS)

    Palaniappan, K.; Huang, T.S.

    1991-01-01

    This paper reports that there is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA such as the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis, in order to reduce the length of time required to obtain sequence data and reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image such as noise, background signal, or presence of edges. Once the two-dimensional data is converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.
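
    The band-location step described above (finding, in an aligned one-dimensional lane profile, the peaks that correspond to bands) can be sketched as a simple local-maximum search; the profile values and the noise threshold below are made up for illustration:

```python
def find_bands(profile, threshold):
    """Indices of local maxima in a 1-D lane profile that rise
    above a noise threshold -- one index per detected band."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] > threshold
            and profile[i] >= profile[i - 1]
            and profile[i] > profile[i + 1]]

# Hypothetical lane profile: background around 1, bands at indices 3 and 8.
lane = [1, 1, 2, 9, 2, 1, 1, 3, 8, 3, 1]
print(find_bands(lane, threshold=5))  # -> [3, 8]
```

    Real autoradiograph profiles would need the adaptive background estimation the abstract mentions before such a fixed threshold could work.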

  3. Multispectral Imaging Broadens Cellular Analysis

    Science.gov (United States)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  4. Content-based retrieval in videos from laparoscopic surgery

    Science.gov (United States)

    Schoeffmann, Klaus; Beecks, Christian; Lux, Mathias; Uysal, Merih Seran; Seidl, Thomas

    2016-03-01

    In the field of medical endoscopy more and more surgeons are changing over to record and store videos of their endoscopic procedures for long-term archival. These endoscopic videos are a good source of information for explanations to patients and follow-up operations. As the endoscope is the "eye of the surgeon", the video shows the same information the surgeon has seen during the operation, and can describe the situation inside the patient much more precisely than an operation report would do. Recorded endoscopic videos can also be used for training young surgeons, and in some countries the long-term archival of video recordings from endoscopic procedures is even enforced by law. A major challenge, however, is to efficiently access these very large video archives for later purposes. One problem, for example, is to locate specific images in the videos that show important situations, which are additionally captured as static images during the procedure. This work addresses this problem and focuses on content-based video retrieval in data from laparoscopic surgery. We propose to use feature signatures, which can appropriately and concisely describe the content of laparoscopic images, and show that by using this content descriptor with an appropriate metric, we are able to efficiently perform content-based retrieval in laparoscopic videos. In a dataset with 600 captured static images from 33 hours of recordings, we are able to find the correct video segment for more than 88% of these images.
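
    As a rough illustration of the retrieval idea (not the authors' actual feature signatures or metric), ranking video segments by the distance between fixed-length descriptors might look like this; plain Euclidean distance stands in for the paper's metric, and the descriptor values are invented:

```python
import math

def distance(a, b):
    """Plain Euclidean distance, standing in for the paper's metric."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, segment_descriptors):
    """Segment indices ranked by ascending descriptor distance to the query."""
    return sorted(range(len(segment_descriptors)),
                  key=lambda i: distance(query, segment_descriptors[i]))

# Invented 3-D descriptors for four video segments and one captured image.
segments = [[0.9, 0.1, 0.0], [0.2, 0.8, 0.1], [0.5, 0.5, 0.5], [0.1, 0.1, 0.9]]
print(retrieve([0.25, 0.75, 0.1], segments))  # -> [1, 2, 0, 3], best match first
```

    The paper's feature signatures are adaptive-size descriptors rather than fixed vectors, but the ranking step has the same shape.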

  5. Signal and image multiresolution analysis

    CERN Document Server

    Ouahabi, Abdeldjalil

    2012-01-01

    Multiresolution analysis using the wavelet transform has received considerable attention in recent years by researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail with many inherent advantages, including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc.This book aims to provide a simple formalization and new clarity on multiresolution analysis, rendering accessible obscure techniques, and merging, unifying or completing

  6. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelets transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  7. UV imaging in pharmaceutical analysis

    DEFF Research Database (Denmark)

    Østergaard, Jesper

    2018-01-01

    UV imaging provides spatially and temporally resolved absorbance measurements, which are highly useful in pharmaceutical analysis. Commercial UV imaging instrumentation was originally developed as a detector for separation sciences, but the main use is in the area of in vitro dissolution and release testing studies. The review covers the basic principles of the technology and summarizes the main applications in relation to intrinsic dissolution rate determination, excipient compatibility studies and in vitro release characterization of drug substances and vehicles intended for parenteral administration. UV imaging has potential for providing new insights to drug dissolution and release processes in formulation development by real-time monitoring of swelling, precipitation, diffusion and partitioning phenomena. Limitations of current instrumentation are discussed and a perspective to new...

  8. Autoradiography and automated image analysis

    International Nuclear Information System (INIS)

    Vardy, P.H.; Willard, A.G.

    1982-01-01

    Limitations with automated image analysis and the solution of problems encountered are discussed. With transmitted light, unstained plastic sections with planar profiles should be used. Stains potentiate signal so that television registers grains as falsely larger areas of low light intensity. Unfocussed grains in paraffin sections will not be seen by image analysers due to change in darkness and size. With incident illumination, the use of crossed polars, oil objectives and an oil filled light trap continuous with the base of the slide will reduce glare. However this procedure so enormously attenuates the light reflected by silver grains, that detection may be impossible. Autoradiographs should then be photographed and the negative images of silver grains on film analysed automatically using transmitted light

  9. Remote Sensing Digital Image Analysis

    Science.gov (United States)

    Richards, John A.; Jia, Xiuping

    Remote Sensing Digital Image Analysis provides the non-specialist with an introduction to quantitative evaluation of satellite and aircraft derived remotely sensed data. Each chapter covers the pros and cons of digital remotely sensed data, without detailed mathematical treatment of computer based algorithms, but in a manner conducive to an understanding of their capabilities and limitations. Problems conclude each chapter. This fourth edition has been developed to reflect the changes that have occurred in this area over the past several years.

  10. Chaotic secure content-based hidden transmission of biometric templates

    International Nuclear Information System (INIS)

    Khan, Muhammad Khurram; Zhang Jiashu; Tian Lei

    2007-01-01

    The large-scale proliferation of biometric verification systems creates a demand for effective and reliable security and privacy of its data. Like passwords and PIN codes, biometric data is also not secret and, if it is compromised, the integrity of the whole verification system could be at high risk. To address these issues, this paper presents a novel chaotic secure content-based hidden transmission scheme for biometric data. Encryption and data hiding techniques are used to improve the security and secrecy of the transmitted templates. Secret keys are generated by the biometric image and used as the parameter value and initial condition of the chaotic map, and each transaction session has different secret keys to protect against attacks. Two chaotic maps are incorporated for the encryption to resolve the finite word length effect and to improve the system's resistance against attacks. Encryption is applied on the biometric templates before hiding into the cover/host images to make them secure, and then templates are hidden into the cover image. Experimental results show that the security, performance, and accuracy of the presented scheme are encouraging, comparable with other methods found in the current literature.
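
    The keystream idea behind such chaotic encryption can be sketched with a single logistic map; the paper derives its keys from the biometric image and combines two maps, whereas the parameter and initial condition below are arbitrary illustrative constants:

```python
def logistic_keystream(x0, r, n):
    """n keystream bytes from iterating the logistic map x -> r*x*(1-x)."""
    x, stream = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return stream

def xor_cipher(data, x0, r):
    """XOR data with the chaotic keystream; applying it twice decrypts."""
    keystream = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, keystream))

# x0 and r are illustrative constants, NOT keys derived from a biometric image.
template = b"biometric-template"
secret = xor_cipher(template, x0=0.3141, r=3.99)
assert xor_cipher(secret, x0=0.3141, r=3.99) == template  # round-trip decrypts
```

    In the actual scheme the encrypted template would additionally be hidden inside a cover image rather than transmitted directly.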

  11. Quantitative image analysis of synovial tissue

    NARCIS (Netherlands)

    van der Hall, Pascal O.; Kraan, Maarten C.; Tak, Paul Peter

    2007-01-01

    Quantitative image analysis is a form of imaging that includes microscopic histological quantification, video microscopy, image analysis, and image processing. Hallmarks are the generation of reliable, reproducible, and efficient measurements via strict calibration and step-by-step control of the

  12. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  13. Content-Based tile Retrieval System

    Czech Academy of Sciences Publication Activity Database

    Vácha, Pavel; Haindl, Michal

    -, č. 85 (2011), s. 45-45 ISSN 0926-4981 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593; GA MŠk(CZ) LG11009 Institutional research plan: CEZ:AV0Z10750506 Keywords : CBIR * Markov random fields Subject RIV: BD - Theory of Information http://ercim-news.ercim.eu/images/stories/EN85/EN85-web.pdf

  14. Morphological segmentation for sagittal plane image analysis.

    Science.gov (United States)

    Bezerra, F N; Paula, I C; Medeiros, F S; Ushizima, D M; Cintra, L S

    2010-01-01

    This paper introduces a morphological image segmentation method by applying watershed transform with markers to scale-space smoothed images and furthermore provides images for clinical monitoring and analysis of patients. The database comprises sagittal plane images taken from a digital camera of patients submitted to Global Postural Reeducation (GPR) physiotherapy treatment. Orthopaedic specialists can use these segmented images to diagnose posture problems, assess physiotherapy treatment evolution and thus reduce diagnostic errors due to subjective analysis.
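
    A minimal sketch of marker-based watershed in the spirit of the method above: labelled markers are flooded outward in order of increasing pixel intensity using a priority queue. This toy version assigns ridge pixels to whichever basin reaches them first rather than marking watershed lines, and it omits the scale-space smoothing step:

```python
import heapq

def watershed(image, markers):
    """Marker-based watershed: flood labelled markers outward
    in order of increasing pixel intensity (priority flood)."""
    h, w = len(image), len(image[0])
    labels = [row[:] for row in markers]  # 0 = unlabelled
    heap = [(image[y][x], y, x)
            for y in range(h) for x in range(w) if markers[y][x]]
    heapq.heapify(heap)
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = labels[y][x]  # inherit basin label
                heapq.heappush(heap, (image[ny][nx], ny, nx))
    return labels

# Toy gradient image: a high-intensity ridge separates two basins,
# with one marker seeded in each basin.
img = [[1, 1, 9, 1, 1],
       [1, 1, 9, 1, 1],
       [1, 1, 9, 1, 1]]
mks = [[1, 0, 0, 0, 2],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
print(watershed(img, mks))  # left columns -> basin 1, right -> basin 2
```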

  15. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Khan, L.; Israël, Menno; Petrushin, V.A.; van den Broek, Egon; van der Putten, Peter

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  16. Prenatal Care: A Content-Based ESL Curriculum.

    Science.gov (United States)

    Hassel, Elissa Anne

    A content-based curriculum in English as a Second Language (ESL) focusing on prenatal self-care is presented. The course was designed as a solution to the problem of inadequate prenatal care for limited-English-proficient Mexican immigrant women. The first three sections offer background information on and discussion of (1) content-based ESL…

  17. Image analysis in industrial radiography

    International Nuclear Information System (INIS)

    Lavayssiere, B.

    1993-01-01

    Non-destructive testing in nuclear power plants remains a major EDF objective for the coming decades. To facilitate diagnosis, the expert must be provided with elaborate decision-making aids: contrasted images, noise-free signals, pertinent parameters, ''meaningful'' images. In the field of industrial radiography, offering inspectors a portable system for digitalization and subsequent processing of radiographs (ENTRAIGUES) is an improvement in the inspection of primary circuit nozzles. Three major directions were followed: (1) improvement of images and localization of flaws (2D approach); techniques such as Markov modelling were evaluated and tested; (2) development of a system which can be transported on site, for digitalization, processing and subsequent archiving of inspection radiographs, known as ENTRAIGUES; (3) development of a program for aid in analysis of digitized radiographs (''bread-board'' version), offering an ergonomic interface and push-button processing, which is the software component in ENTRAIGUES and uses sophisticated methods: contrast enhancement, background flattening, segmentation. Another objective is to reconstruct a three-dimensional volume on the basis of a few radiographs taken at different incidences and to estimate the flaw orientation within a piece under study. This information makes sense to experts with regard to the deterioration rate of the flaw; the equipment concerned includes the formed bends in the primary coolant nozzles. This reconstruction problem is ill-posed and a solution can be obtained by introducing a priori information on the solution. The first step of our algorithm is a classical iterative reconstruction A.R.T. type method (Algebraic Reconstruction Techniques), which provides a rough volumic reconstructed tridimensional zone containing the flaw. Then, on this reconstructed zone, we apply a Bayesian restoration method introducing a Markov Random Field (MRF) modelling. Conclusive results have been obtained. (author)

  18. KAFE - A Flexible Image Analysis Tool

    Science.gov (United States)

    Burkutean, Sandra

    2017-11-01

    We present KAFE, the Keywords of Astronomical FITS-Images Explorer - a web-based FITS images post-processing analysis tool designed to be applicable in the radio to sub-mm wavelength domain. KAFE was developed to enrich selected FITS files with metadata based on a uniform image analysis approach as well as to provide advanced image diagnostic plots. It is ideally suited for data mining purposes and multi-wavelength/multi-instrument data samples that require uniform data diagnostic criteria.

  19. Content-Based Object Movie Retrieval and Relevance Feedbacks

    Directory of Open Access Journals (Sweden)

    Lee Greg C

    2007-01-01

    Object movie refers to a set of images captured from different perspectives around a 3D object. Object movie provides a good representation of a physical object because it can provide 3D interactive viewing effect, but does not require 3D model reconstruction. In this paper, we propose an efficient approach for content-based object movie retrieval. In order to retrieve the desired object movie from the database, we first map an object movie into the sampling of a manifold in the feature space. Two different layers of feature descriptors, dense and condensed, are designed to sample the manifold for representing object movies. Based on these descriptors, we define the dissimilarity measure between the query and the target in the object movie database. The query we considered can be either an entire object movie or simply a subset of views. We further design a relevance feedback approach to improving retrieved results. Finally, some experimental results are presented to show the efficacy of our approach.

  20. Content-based video indexing and searching with wavelet transformation

    Science.gov (United States)

    Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah

    2006-05-01

    Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combinations of fingerprints, iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcement agencies collaborate more than before and have greater reliance on database sharing. In such an environment, reliable biometric-based identification must not only determine who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We shall assess the performance of our system using a benchmark audio-visual face biometric database that has multiple videos for each subject but with different identity claims. We shall demonstrate that retrieval of a relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database yields a significant proportion of that individual's biometric data.

  1. Detector systems for imaging neutron activation analysis

    International Nuclear Information System (INIS)

    Dewaraja, Y.K.; Fleming, R.F.

    1994-01-01

    This paper compares the performance of two imaging detector systems for the new technique of Imaging Neutron Activation Analysis (Imaging NAA). The first system is based on secondary electron imaging, and the second employs a position sensitive charged particle detector for direct localization of beta particles. The secondary electron imaging system has demonstrated a position resolution of 20 μm. The position sensitive beta detector has the potential for higher efficiencies with resolution being a trade off. Results presented show the feasibility of the two imaging methods for different applications of Imaging NAA

  2. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    Science.gov (United States)

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.

  3. Implementation and evaluation of a medical image management system with content-based retrieval support

    Directory of Open Access Journals (Sweden)

    Edilson Carlos Caritá

    2008-10-01

    OBJECTIVE: This paper describes the implementation and evaluation of a medical image management system with content-based retrieval support (PACS-CBIR), integrating modules for image acquisition, storage and distribution, textual information retrieval by keyword, and image retrieval by similarity. MATERIALS AND METHODS: The system was implemented with Internet technologies, using free software, a Linux platform, and the C++, PHP and Java programming languages. There is a DICOM-compatible image management module and two query modules, one based on textual information and the other on the similarity of image texture attributes. RESULTS: The results indicated that the images are managed and stored correctly, and that image retrieval time, always below 15 seconds, was considered good by the users. The evaluations of retrieval by similarity demonstrated that the chosen extractor allowed the separation of images by anatomical region. CONCLUSION: These results show that the implementation of a PACS-CBIR is feasible. The system proved compatible with DICOM functionality and capable of integration with the local information system. The similar-image retrieval functionality can be improved by including other descriptors.

  4. Microscopy image segmentation tool: Robust image data analysis

    International Nuclear Information System (INIS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-01-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy

  5. Video retrieval by still-image analysis with ImageMiner

    Science.gov (United States)

    Kreyss, Jutta; Roeper, M.; Alshuth, Peter; Hermes, Thorsten; Herzog, Otthein

    1997-01-01

    The large amount of available multimedia information (e.g. videos, audio, images) requires efficient and effective annotation and retrieval methods. As videos start playing a more important role in the frame of multimedia, we want to make these available for content-based retrieval. The ImageMiner-System, which was developed at the University of Bremen in the AI group, is designed for content-based retrieval of single images by a new combination of techniques and methods from computer vision and artificial intelligence. In our approach to make videos available for retrieval in a large database of videos and images there are two necessary steps: first, the detection and extraction of shots from a video, which is done by a histogram-based method, and second, the combination of the separate frames of a shot into one single still image. This is performed by a mosaicing technique. The resulting mosaiced image gives a one-image visualization of the shot and can be analyzed by the ImageMiner-System. ImageMiner has been tested on several domains (e.g. landscape images, technical drawings), which cover a wide range of applications.

  6. Digital-image processing and image analysis of glacier ice

    Science.gov (United States)

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  7. Information granules in image histogram analysis.

    Science.gov (United States)

    Wieclawek, Wojciech

    2017-05-10

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). As the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymous clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
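
    The idea of equalizing only a selected intensity range, controlled by two parameters, can be sketched as follows; the mapping details are illustrative, not the paper's exact granular formulation:

```python
def equalize_range(pixels, lo, hi):
    """Histogram-equalize only intensities inside [lo, hi];
    pixels outside that range pass through unchanged."""
    inside = sorted(p for p in pixels if lo <= p <= hi)
    if not inside:
        return list(pixels)
    n = len(inside)
    rank = {p: i for i, p in enumerate(inside, start=1)}  # highest rank per value
    span = hi - lo
    return [lo + round(span * rank[p] / n) if lo <= p <= hi else p
            for p in pixels]

print(equalize_range([0, 10, 10, 20, 30, 255], lo=0, hi=40))
# -> [8, 24, 24, 32, 40, 255]: in-range values spread over [0, 40], 255 untouched
```

    Restricting the mapping to [lo, hi] is what distinguishes this from plain histogram equalization, which would remap every intensity in the image.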

  8. Retinal image analysis: preprocessing and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Marrugo, Andres G; Millan, Maria S, E-mail: andres.marrugo@upc.edu [Grup d'Optica Aplicada i Processament d'Imatge, Departament d'Optica i Optometria, Universitat Politecnica de Catalunya (Spain)]

    2011-01-01

    Image processing, analysis and computer vision techniques are found today in all fields of medical science. These techniques are especially relevant to modern ophthalmology, a field heavily dependent on visual data. Retinal images are widely used for diagnostic purposes by ophthalmologists. However, these images often need visual enhancement prior to applying digital analysis for pathological risk or damage detection. In this work we propose the use of an image enhancement technique for the compensation of non-uniform contrast and luminosity distribution in retinal images. We also explore optic nerve head segmentation by means of color mathematical morphology and the use of active contours.
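
    One common way to compensate non-uniform luminosity (not necessarily the authors' exact technique) is to estimate a smooth background as a local mean and subtract it; a minimal sketch on a plain nested-list image:

```python
def compensate_illumination(img, radius):
    """Subtract a smooth background (local mean over a
    (2*radius+1)^2 window) and recentre intensities at 128."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - radius), min(h, y + radius + 1))
                      for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            background = sum(window) / len(window)
            row.append(max(0, min(255, round(img[y][x] - background + 128))))
        out.append(row)
    return out

# A uniformly lit patch maps to a flat mid-grey, whatever its original level.
print(compensate_illumination([[50, 50, 50], [50, 50, 50]], radius=1))
# -> [[128, 128, 128], [128, 128, 128]]
```

    With a window radius on the order of the illumination gradient, slow luminosity drift is removed while small structures such as vessels are preserved.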

  9. Some developments in multivariate image analysis

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    Multivariate image analysis (MIA), one of the successful chemometric applications, is now used widely in different areas of science and industry. Introduced in the late 80s, it has become very popular with hyperspectral imaging, where MIA is one of the most efficient tools for exploratory analysis and classification. MIA considers all image pixels as objects and their color values (or spectrum in the case of hyperspectral images) as variables. So it gives data matrices with hundreds of thousands of samples in the case of laboratory scale images and even more for aerial photos, where the number of pixels could ... Looking for and analyzing patterns on these plots and the original image allow interactive analysis, extraction of hidden information, building a supervised classification model, and much more. In the present work several alternative methods to original principal component analysis (PCA) for building the projection...

  10. Content-based classification and retrieval of audio

    Science.gov (United States)

    Zhang, Tong; Kuo, C.-C. Jay

    1998-10-01

    An on-line audio classification and segmentation system is presented in this research, where audio recordings are classified and segmented into speech, music, several types of environmental sounds and silence based on audio content analysis. This is the first step of our continuing work towards a general content-based audio classification and retrieval system. The extracted audio features include temporal curves of the energy function, the average zero-crossing rate and the fundamental frequency of audio signals, as well as statistical and morphological features of these curves. The classification result is achieved through a threshold-based heuristic procedure. The audio database that we have built, details of feature extraction, classification and segmentation procedures, and experimental results are described. It is shown that, with the proposed new system, audio recordings can be automatically segmented and classified into basic types in real time with an accuracy of over 90 percent. Outlines of further classification of audio into finer types and a query-by-example audio retrieval system on top of the coarse classification are also introduced.
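The energy function and average zero-crossing rate mentioned above are simple to compute per frame. The frame and hop sizes below are illustrative, not those of the paper:

```python
import numpy as np

def frame_features(signal, frame_len=256, hop=128):
    """Per-frame short-time energy and zero-crossing rate (illustrative sizes)."""
    energies, zcrs = [], []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energies.append(np.sum(frame ** 2))               # energy function
        signs = np.sign(frame)
        zcrs.append(np.mean(np.abs(np.diff(signs)) > 0))  # zero-crossing rate
    return np.array(energies), np.array(zcrs)

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)                    # music-like pure tone
hiss = np.random.default_rng(0).standard_normal(sr)   # noise-like signal
_, zcr_tone = frame_features(tone)
_, zcr_hiss = frame_features(hiss)
```

As expected, the noise-like signal has a much higher average zero-crossing rate than the tone, which is exactly the kind of separation a threshold-based heuristic can exploit.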

  11. Solar Image Analysis and Visualization

    CERN Document Server

    Ireland, J

    2009-01-01

    This volume presents a selection of papers on the state of the art of image enhancement, automated feature detection, machine learning, and visualization tools in support of solar physics that focus on the challenges presented by new ground-based and space-based instrumentation. The articles and topics were inspired by the Third Solar Image Processing Workshop, held at Trinity College Dublin, Ireland, but contributions from other experts have been included as well. This book is mainly aimed at researchers and graduate students working on image processing and computer vision in astronomy and solar physics.

  12. Content-Based Instruction and Content and Language Integrated Learning: The Same or Different?

    Science.gov (United States)

    Cenoz, Jasone

    2015-01-01

    This article looks at the characteristics of Content-Based Instruction (CBI) and Content and Language Integrated Learning (CLIL) in order to examine their similarities and differences. The analysis shows that CBI/CLIL programmes share the same essential properties and are not pedagogically different from each other. In fact, the use of an L2 as…

  13. Image analysis of PV module electroluminescence

    Science.gov (United States)

    Lai, T.; Ramirez, C.; Potter, B. G.; Simmons-Potter, K.

    2017-08-01

    Electroluminescence imaging can be used as a non-invasive method to spatially assess performance degradation in photovoltaic (PV) modules. Cells, or regions of cells, that do not produce an infrared luminescence signal under electrical excitation indicate potential damage in the module. In this study, an Andor iKon-M camera and an image acquisition tool provided by Andor have been utilized to obtain electroluminescent images of a full-sized multicrystalline PV module at regular intervals throughout an accelerated lifecycle (ALC) test performed in a large-scale environmental degradation chamber. Computer-aided digital image analysis methods were then used to automate degradation assessment in the modules. Initial preprocessing of the images was designed to remove both background noise and barrel distortion in the image data. Image areas were then mapped so that changes in luminescent intensity across both individual cells and the full module could be identified. Two primary techniques for image analysis were subsequently investigated. In the first case, pixel intensity distributions were evaluated over each individual PV cell and changes to the intensities of the cells over the course of an ALC test were evaluated. In the second approach, intensity line scans of each of the cells in a PV module were performed and variations in line scan data were identified during the module ALC test. In this report, both the image acquisition and preprocessing technique and the contribution of each image analysis approach to an assessment of degradation behavior will be discussed.
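The per-cell intensity evaluation and line scans described above can be sketched roughly as follows. This is a toy grid-based version; the paper's pipeline additionally corrects background noise and barrel distortion:

```python
import numpy as np

def cell_intensity_map(module_img, rows, cols):
    """Mean luminescence per cell, assuming a regular rows x cols cell grid."""
    h, w = module_img.shape
    cells = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = module_img[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols]
            cells[r, c] = patch.mean()
    return cells

def line_scan(module_img, row):
    """Horizontal intensity profile across the module at a given pixel row."""
    return module_img[row, :]

# Synthetic 2x3-cell module with one weakly luminescing (degraded) cell
module = np.ones((60, 90))
module[0:30, 30:60] = 0.2   # cell (0, 1) is dark
cells = cell_intensity_map(module, 2, 3)
```

Comparing such per-cell maps (or line scans) taken at successive ALC intervals is what localizes the degradation.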

  14. Analysis of multi-dimensional confocal images

    International Nuclear Information System (INIS)

    Samarabandu, J.K.; Acharya, R.; Edirisinghe, C.D.; Cheng, P.C.; Lin, T.H.

    1991-01-01

    In this paper, a confocal image understanding system is developed which uses the blackboard model of problem solving to achieve computerized identification and characterization of confocal fluorescent images (serial optical sections). The system is capable of identifying a large percentage of structures (e.g. cell nucleus) in the presence of background noise and non-specific staining of cellular structures. The blackboard architecture provides a convenient framework within which a combination of image processing techniques can be applied to successively refine the input image. The system is organized to find the surfaces of highly visible structures first, using simple image processing techniques, and then to adjust and fill in the missing areas of these object surfaces using external knowledge and a number of more complex image processing techniques when necessary. As a result, the image analysis system is capable of automatically obtaining morphometrical parameters such as surface area, volume and position of structures of interest.

  15. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of the plenoptic camera from the viewpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations, and the difference between the imaging analysis methods based on geometric optics and physical optics is also shown in simulations. (paper)
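Scalar-diffraction simulation of the kind mentioned above is commonly done with the angular spectrum method. The sketch below propagates a field behind a square aperture; the parameters are arbitrary, and the paper's full plenoptic model also includes the main lens and the microlens array:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)          # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Square aperture illuminated by a unit plane wave (arbitrary parameters)
n, dx, wl = 128, 10e-6, 633e-9
aperture = np.zeros((n, n), dtype=complex)
aperture[n // 2 - 8:n // 2 + 8, n // 2 - 8:n // 2 + 8] = 1.0
out = angular_spectrum_propagate(aperture, wl, dx, z=5e-3)
```

Since the transfer function is unimodular for propagating waves, total energy is conserved, which is a quick sanity check on any such simulation.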

  16. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book ”Image and Video Processing”, second edition by Thomas Moeslund. The aim of the ...

  17. A Robust Actin Filaments Image Analysis Framework.

    Directory of Open Access Journals (Sweden)

    Mitchel Alioscha-Perez

    2016-08-01

    Full Text Available The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part; (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in

  18. Machine learning applications in cell image analysis.

    Science.gov (United States)

    Kan, Andrey

    2017-07-01

    Machine learning (ML) refers to a set of automatic pattern recognition methods that have been successfully applied across various problem domains, including biomedical image analysis. This review focuses on ML applications for image analysis in light microscopy experiments, with typical tasks of segmenting and tracking individual cells and modelling of reconstructed lineage trees. After describing a typical image analysis pipeline and highlighting challenges of automatic analysis (for example, variability in cell morphology, tracking in the presence of clutter), this review gives a brief historical outlook of ML, followed by basic concepts and definitions required for understanding the examples. This article then presents several example applications at various image processing stages, including the use of supervised learning methods for improving cell segmentation and the application of active learning for tracking. The review concludes with remarks on parameter setting and future directions.

  19. Optimization of shearography image quality analysis

    International Nuclear Information System (INIS)

    Rafhayudi Jamro

    2005-01-01

    Shearography is an optical technique based on speckle patterns to measure the deformation of an object surface, in which the fringe pattern is obtained through correlation analysis of the speckle pattern. Analysis of the fringe pattern for engineering applications is limited to qualitative measurement; further analysis leading to quantitative data therefore involves a series of image processing operations. In this paper, the fringe pattern for qualitative analysis is discussed. The principal field of application is qualitative non-destructive testing, such as detecting discontinuities and defects in the material structure, locating fatigue zones, etc., all of which require image processing. In order to perform image optimisation successfully, the noise in the fringe pattern must be minimised and the fringe pattern itself must be maximised. This can be achieved by applying a filtering method with a kernel size ranging from 2 x 2 to 7 x 7 pixels and also applying an equalizer in the image processing. (Author)
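The kernel-based filtering step described above can be illustrated with a plain mean filter. The kernel size of 5 and the synthetic fringe pattern are assumptions for demonstration:

```python
import numpy as np

def mean_filter(img, k):
    """k x k mean filter for speckle-noise smoothing (k from 2 to 7 in the text)."""
    pad = k // 2
    padded = np.pad(img, ((pad, k - 1 - pad), (pad, k - 1 - pad)), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Synthetic vertical fringe pattern corrupted by speckle-like noise
rng = np.random.default_rng(1)
fringe = np.sin(np.linspace(0, 6 * np.pi, 100))[None, :] * np.ones((100, 100))
noisy = fringe + 0.5 * rng.standard_normal((100, 100))
smoothed = mean_filter(noisy, 5)
```

The smoothed image is much closer to the clean fringe than the noisy input, at the cost of a slight attenuation of the fringe amplitude, which is the usual noise-versus-contrast trade-off when picking the kernel size.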

  20. Facial Image Analysis in Anthropology: A Review

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 49, č. 2 (2011), s. 141-153 ISSN 0323-1119 Institutional support: RVO:67985807 Keywords: face * computer-assisted methods * template matching * geometric morphometrics * robust image analysis Subject RIV: IN - Informatics, Computer Science

  1. Cobra: A Content-Based Video Retrieval System

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem

    An increasing number of large publicly available video libraries results in a demand for techniques that can manipulate the video data based on content. In this paper, we present a content-based video retrieval system called Cobra. The system supports automatic extraction and retrieval of high-level

  2. Rock and Roll English Teaching: Content-Based Cultural Workshops

    Science.gov (United States)

    Robinson, Tim

    2011-01-01

    In this article, the author shares a content-based English as a Second/Foreign Language (ESL/EFL) workshop that strengthens language acquisition, increases intrinsic motivation, and bridges cultural divides. He uses a rock and roll workshop to introduce an organizational approach with a primary emphasis on cultural awareness content and a…

  3. Fast Content-Based Packet Handling for Intrusion Detection

    National Research Council Canada - National Science Library

    Fisk, Mike

    2001-01-01

    ... use of Boyer-Moore currently used in the popular intrusion detection platform Snort. We then measure the actual performance of several search algorithms on real packet traces and rulesets. Our results provide lessons on the structuring of content-based handlers.

  4. Application of Bayesian Classification to Content-Based Data Management

    Science.gov (United States)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
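A per-pixel Bayesian classifier of the kind described can be sketched with Gaussian class-conditional densities. The two bands, class names and training statistics below are invented for illustration and are not the GES DAAC's actual scheme:

```python
import numpy as np

def fit_gaussian_classes(samples):
    """Per-class mean/variance from labeled training pixels (hypothetical bands)."""
    return {c: (x.mean(axis=0), x.var(axis=0) + 1e-9) for c, x in samples.items()}

def classify(pixels, params, priors):
    """Assign each pixel to the class maximizing the posterior (naive Bayes)."""
    names = list(params)
    scores = []
    for c in names:
        mu, var = params[c]
        loglik = -0.5 * np.sum((pixels - mu) ** 2 / var + np.log(2 * np.pi * var),
                               axis=1)
        scores.append(loglik + np.log(priors[c]))
    return [names[i] for i in np.argmax(np.column_stack(scores), axis=1)]

# Two-band training pixels for two illustrative classes
rng = np.random.default_rng(2)
train = {
    "ocean": rng.normal([0.1, 0.2], 0.05, (200, 2)),   # dark, low-reflectance pixels
    "cloud": rng.normal([0.9, 0.9], 0.05, (200, 2)),   # bright in both bands
}
params = fit_gaussian_classes(train)
labels = classify(np.array([[0.12, 0.18], [0.88, 0.92]]), params,
                  {"ocean": 0.5, "cloud": 0.5})
```

Once each pixel carries a class label like this, content-based subsetting (e.g. "clear ocean pixels only") reduces to a mask over the labels.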

  5. Privacy-Preserving Content-Based Recommender System

    NARCIS (Netherlands)

    Erkin, Z.; Beye, M.; Veugen, P.J.M.; Lagendijk, R.L.

    2012-01-01

    By offering personalized content to users, recommender systems have become a vital tool in e-commerce and online media applications. Content-based algorithms recommend items or products to users, that are most similar to those previously purchased or consumed. Unfortunately, collecting and storing

  6. A Database Approach to Content-based XML retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2003-01-01

    This paper describes a first prototype system for content-based retrieval from XML data. The system's design supports both XPath queries and complex information retrieval queries based on a language modelling approach to information retrieval. Evaluation using the INEX benchmark shows that it is

  7. Privacy-Preserving Content-Based Recommendations through Homomorphic Encryption

    NARCIS (Netherlands)

    Erkin, Z.; Beye, M.; Veugen, P.J.M.; Lagendijk, R.L.

    2012-01-01

    By offering personalized content to users, recommender systems have become a vital tool in ecommerce and online media applications. Content-based algorithms recommend items or products to users, that are most similar to those previously purchased or consumed. Unfortunately, collecting and storing

  8. Content-Based Design and Implementation of Ambient Intelligence Applications

    NARCIS (Netherlands)

    Diggelen, J. van; Grootjen, M.; Ubink, E.M.; Zomeren, M. van; Smets, N.J.J.M.

    2013-01-01

    Optimal support of professionals in complex ambient task environments requires a system that delivers the Right Message at the Right Moment in the Right Modality: (RM)³. This paper describes a content-based design methodology and an agent-based architecture to enable real-time decisions of

  9. Content-Based Retrieval of Spatio-Temporal Video Events

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem

    2001-01-01

    This paper addresses content-based video retrieval with an emphasis on spatio-temporal modeling and querying of events. Our approach is based on a layered model that guides the process of translating raw video data into an efficient internal representation that captures video semantics. We also

  10. EFL and Educational Reform: Content-Based Instruction in Argentina.

    Science.gov (United States)

    Snow, Marguerite Ann; Cortes, Viviana; Pron, Alejandra V.

    1998-01-01

    Discusses initial experiences with content-based instruction in Argentina. The new approach was precipitated in part by educational reform. Suggests that the dramatic shift from a grammar-based approach to a communicative approach, and the use of language as a tool for instruction may become overwhelming for most teachers. (Author/VWL)

  11. Student engagement with a content-based learning design

    Directory of Open Access Journals (Sweden)

    Brenda Cecilia Padilla Rodriguez

    2013-09-01

    While learning is commonly conceptualised as a social, collaborative process in organisations, online courses often provide limited opportunities for communication between people. How do students engage with content-based courses? How do they find answers to their questions? How do they achieve the learning outcomes? This paper aims to answer these questions by focusing on students’ experiences in an online content-based course delivered in a large Mexican organisation. Sales supervisors (n=47) participated as students. Four main data sources were used to evaluate engagement with and learning from the course: surveys (n=40), think-aloud sessions (n=8), activity logs (n=47) and exams (n=43). Findings suggest that: (1) students engage with a content-based course by following the guidance available and attempting to make the materials relevant to their own context; (2) students are resourceful when trying to find support: if the materials do not provide the answers to their questions, they search for alternatives such as colleagues to talk to; (3) content-based online learning designs may be engaging and effective. However, broadening the range of support options available to students may result in more meaningful, contextualised and rewarding learning experiences.

  12. Structural analysis in medical imaging

    International Nuclear Information System (INIS)

    Dellepiane, S.; Serpico, S.B.; Venzano, L.; Vernazza, G.

    1987-01-01

    The conventional techniques in Pattern Recognition (PR) have been greatly improved by the introduction of Artificial Intelligence (AI) approaches, in particular for knowledge representation, inference mechanisms and control structures. The purpose of this paper is to describe an image understanding system, based on the integrated AI-PR approach, developed in the authors' Department to interpret Nuclear Magnetic Resonance (NMR) images. The system is characterized by a heterarchical control structure and a blackboard model for the global database. The major aspects of the system are pointed out, with particular reference to segmentation, knowledge representation and error recovery (backtracking). The eye slices obtained in the case of two patients have been analyzed and the related results are discussed.

  13. Malware Analysis Using Visualized Image Matrices

    Directory of Open Access Journals (Sweden)

    KyoungSoo Han

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities between the image matrices. In particular, our proposed methods are applicable to packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples, and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.
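The opcode-to-pixel mapping and image-matrix similarity can be sketched as follows. The hash-based RGB encoding and the similarity measure here are illustrative stand-ins, not the paper's exact scheme:

```python
import hashlib
import numpy as np

def opcode_image(opcodes, side=8):
    """Map an opcode sequence onto an RGB image matrix: each opcode's hash
    supplies one pixel (an assumed encoding for illustration)."""
    img = np.zeros((side, side, 3), dtype=np.uint8)
    for i, op in enumerate(opcodes[:side * side]):
        digest = hashlib.md5(op.encode()).digest()
        img[i // side, i % side] = list(digest[:3])   # first 3 hash bytes as R,G,B
    return img

def image_similarity(a, b):
    """Simple normalized similarity between two image matrices."""
    diff = np.abs(a.astype(int) - b.astype(int)).mean()
    return 1.0 - diff / 255.0

sample1 = opcode_image(["push", "mov", "call", "ret"] * 16)
sample2 = opcode_image(["push", "mov", "call", "ret"] * 16)
sample3 = opcode_image(["xor", "jmp", "cmp", "nop"] * 16)
sim_same = image_similarity(sample1, sample2)
sim_diff = image_similarity(sample1, sample3)
```

Samples with identical opcode sequences yield identical matrices (similarity 1.0), while unrelated sequences score lower, which is the property the family classification relies on.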

  14. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter comprising character and face recognition. Various image enhancement techniques, including negative imaging, contrast stretching, compression of dynamic range, neon, diffuse, emboss, etc., have been studied. Segmentation techniques including point detection, line detection and edge detection have been studied. Also, some of the smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the perceptron model have been applied for face and character recognition. (author)
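Two of the enhancement techniques listed, negative imaging and contrast stretching, are simple to express. The percentile limits below are a common choice, not necessarily the ones used in the project's software:

```python
import numpy as np

def contrast_stretch(img, lo_pct=2, hi_pct=98):
    """Linear contrast stretching between two percentiles (assumed limits)."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    stretched = (img.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(stretched * 255, 0, 255).astype(np.uint8)

rng = np.random.default_rng(3)
dull = rng.integers(100, 156, (32, 32)).astype(np.uint8)  # low-contrast image
bright = contrast_stretch(dull)
negative = 255 - dull                                     # negative imaging
```

After stretching, the narrow 100-155 intensity range is expanded to span the full 0-255 range.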

  15. Topological image texture analysis for quality assessment

    Science.gov (United States)

    Asaad, Aras T.; Rashid, Rasber Dh.; Jassim, Sabah A.

    2017-05-01

    Image quality is a major factor influencing pattern recognition accuracy and helps detect image tampering for forensics. We are concerned with investigating topological image texture analysis techniques to assess different types of degradation. We use the Local Binary Pattern (LBP) as a texture feature descriptor. For any image we construct simplicial complexes for selected groups of uniform LBP bins and calculate persistent homology invariants (e.g. the number of connected components). We investigated the image quality discriminating characteristics of these simplicial complexes by computing these models for a large dataset of face images that are affected by the presence of shadows as a result of variation in illumination conditions. Our tests demonstrate that for specific uniform LBP patterns, the number of connected components not only distinguishes between different levels of shadow effects but also helps detect the affected regions.
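The LBP codes that feed the topological analysis can be computed as below. This basic 8-neighbour version omits the grouping into uniform bins and the simplicial-complex construction described in the abstract:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour Local Binary Pattern codes for a grayscale image."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets, one bit per neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    return codes

# On a perfectly flat patch every neighbour >= center, so every bit is set
flat = np.full((10, 10), 7, dtype=np.uint8)
codes_flat = lbp_image(flat)
```

Pixels sharing the same LBP code (or the same uniform bin) can then be treated as the vertices from which simplicial complexes are built.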

  16. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  17. Document image analysis: A primer

    Indian Academy of Sciences (India)

    OCR makes it possible for the user to edit or search the document's contents. In this paper we briefly describe various components of a document analysis system. Many of these basic building blocks are found in most document analysis systems, irrespective of the particular domain or language to which they are applied.

  18. Adversarial Stain Transfer for Histopathology Image Analysis.

    Science.gov (United States)

    Bentaieb, Aicha; Hamarneh, Ghassan

    2018-03-01

    It is generally recognized that color information is central to the automatic and visual analysis of histopathology tissue slides. In practice, pathologists rely on color, which reflects the presence of specific tissue components, to establish a diagnosis. Similarly, automatic histopathology image analysis algorithms rely on color or intensity measures to extract tissue features. With the increasing access to digitized histopathology images, color variation and its implications have become a critical issue. These variations are the result of not only a variety of factors involved in the preparation of tissue slides but also in the digitization process itself. Consequently, different strategies have been proposed to alleviate stain-related tissue inconsistencies in automatic image analysis systems. Such techniques generally rely on collecting color statistics to perform color matching across images. In this work, we propose a different approach for stain normalization that we refer to as stain transfer. We design a discriminative image analysis model equipped with a stain normalization component that transfers stains across datasets. Our model comprises a generative network that learns data set-specific staining properties and image-specific color transformations as well as a task-specific network (e.g., classifier or segmentation network). The model is trained end-to-end using a multi-objective cost function. We evaluate the proposed approach in the context of automatic histopathology image analysis on three data sets and two different analysis tasks: tissue segmentation and classification. The proposed method achieves superior results in terms of accuracy and quality of normalized images compared to various baselines.
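The statistics-based color matching that the authors contrast their learned stain transfer with can be sketched as channel-wise mean/std transfer. For simplicity this operates in RGB, whereas Reinhard-style normalization works in a decorrelated colour space:

```python
import numpy as np

def match_color_stats(source, target):
    """Channel-wise mean/std matching: shift the source image's colour
    statistics onto the target's (the classical baseline, simplified to RGB)."""
    src = source.astype(float)
    out = np.empty_like(src)
    for ch in range(src.shape[2]):
        s, t = src[..., ch], target[..., ch].astype(float)
        s_std = s.std() if s.std() > 1e-9 else 1.0
        out[..., ch] = (s - s.mean()) / s_std * t.std() + t.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

# Two synthetic 'slides' with different overall staining statistics
rng = np.random.default_rng(4)
slide_a = rng.normal(120, 10, (16, 16, 3)).clip(0, 255).astype(np.uint8)
slide_b = rng.normal(180, 20, (16, 16, 3)).clip(0, 255).astype(np.uint8)
matched = match_color_stats(slide_a, slide_b)
```

After matching, `slide_a` carries approximately the same per-channel colour statistics as `slide_b`, which is exactly the kind of global adjustment that the paper's task-driven stain transfer aims to improve upon.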

  19. Multispectral dual isotope and NMR image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vannier, M.W.; Beihn, R.M.; Butterfield, R.L.; De Land, F.H.

    1985-05-01

    Dual isotope scintigraphy and nuclear magnetic resonance imaging produce image data that are intrinsically multispectral. That is, multiple images of the same anatomic region are generated with different gray scale distributions and largely redundant morphologic content. Image processing technology, originally developed by NASA for satellite imaging, is available for multispectral analysis. These methods have been applied to provide tissue characterization. Tissue-specific information encoded in the gray scale data from dual isotope and NMR studies may be extracted using multispectral pattern recognition methods. The authors used table look-up, minimum distance, maximum likelihood and cluster analysis techniques with data sets from Ga-67 / Tc-99m, I-131 labeled antibodies / Tc-99m, Tc-99m perfusion / Xe-133 ventilation, and NMR studies. The results show that tissue-characteristic signatures exist in dual isotope and NMR imaging, and that these spectral signatures are identifiable using multispectral image analysis, providing tissue classification maps with scatter diagrams that facilitate interpretation and assist in elucidating subtle changes.
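The minimum distance technique listed above assigns each multispectral pixel to the nearest class mean. The two-channel 'signatures' below are invented for illustration:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each multispectral pixel vector to the nearest class mean."""
    names = list(class_means)
    means = np.array([class_means[c] for c in names])       # (n_classes, bands)
    # Squared Euclidean distance from every pixel to every class mean
    d2 = ((pixels[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return [names[i] for i in np.argmin(d2, axis=1)]

# Hypothetical two-channel tissue signatures (e.g. Ga-67 vs Tc-99m uptake)
signatures = {"tumour": [0.8, 0.3], "normal": [0.2, 0.6]}
pixels = np.array([[0.75, 0.35], [0.25, 0.55]])
labels = minimum_distance_classify(pixels, signatures)
```

Applying this per pixel yields the tissue classification maps mentioned in the abstract; the scatter diagram is simply the pixels plotted in this two-channel feature space.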

  20. Deep Learning in Medical Image Analysis

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2016-01-01

    Computer-assisted analysis for better interpretation of images has been a longstanding issue in the medical imaging field. On the image-understanding front, recent advances in machine learning, especially deep learning, have made a big leap in helping identify, classify, and quantify patterns in medical images. Specifically, exploiting hierarchical feature representations learned solely from data, instead of handcrafted features designed mostly on the basis of domain-specific knowledge, lies at the core of these advances. In this way, deep learning is rapidly proving to be the state-of-the-art foundation, achieving enhanced performance in various medical applications. In this article, we introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cell structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by raising research issues and suggesting future directions for further improvement. PMID:28301734

  1. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper reviews work on traffic analysis and control to date and presents an approach to regulating traffic using image processing and MATLAB. The concept compares computed images against reference images of the street in order to determine the traffic level percentage and to set the timing of the traffic signal accordingly, reducing stoppage time at traffic lights. It proposes to solve real-life scenarios in the streets by enriching traffic lights with image receivers such as HD cameras and image processors. The input is then imported into MATLAB to be used as a method for calculating the traffic on roads. The results are computed in order to adjust the traffic light timings on a particular street, in comparison with other similar proposals, but with the added value of solving a real, large instance.
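The reference-image comparison described can be sketched (here in Python rather than MATLAB) as a thresholded difference whose foreground fraction drives the signal timing. The threshold and the timing bounds are hypothetical:

```python
import numpy as np

def traffic_level(reference, current, threshold=30):
    """Fraction of pixels that differ from the empty-road reference image."""
    diff = np.abs(current.astype(int) - reference.astype(int))
    return float((diff > threshold).mean())

def green_time(level, t_min=10, t_max=60):
    """Scale the green-signal duration with the estimated traffic level
    (hypothetical timing bounds, in seconds)."""
    return t_min + level * (t_max - t_min)

# Empty-road reference vs a frame where vehicles occupy part of the road
empty = np.full((40, 40), 90, dtype=np.uint8)
busy = empty.copy()
busy[10:30, 5:35] = 200          # vehicle region differs from the reference
level = traffic_level(empty, busy)
```

Here 37.5% of the pixels differ from the reference, so the green phase would be lengthened proportionally between the two timing bounds.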

  2. Digital image analysis of NDT radiographs

    International Nuclear Information System (INIS)

    Graeme, W.A. Jr.; Eizember, A.C.; Douglass, J.

    1989-01-01

    Prior to the introduction of Charge Coupled Device (CCD) detectors, the majority of image analysis performed on NDT radiographic images was done visually in the analog domain. While some film digitization was being performed, the process was often unable to capture all the usable information on the radiograph, or was too time consuming. CCD technology now provides a method to digitize radiographic film images in a timely process without losing the useful information captured in the original radiograph. Incorporating that technology into a complete digital radiographic workstation allows analog radiographic information to be processed, providing additional information to the radiographer. Once in the digital domain, the data can be stored and fused with radioscopic and other forms of digital data. The result is more productive analysis and management of radiographic inspection data. The principal function of the NDT Scan IV digital radiography system is the digitization, enhancement and storage of radiographic images.

  3. Development of Image Analysis Software of MAXI

    Science.gov (United States)

    Eguchi, S.; Ueda, Y.; Hiroi, K.; Isobe, N.; Sugizaki, M.; Suzuki, M.; Tomida, H.; Maxi Team

    2010-12-01

    Monitor of All-sky X-ray Image (MAXI) is an X-ray all-sky monitor, attached to the Japanese experiment module Kibo on the International Space Station. The main scientific goals of the MAXI mission include the discovery of X-ray novae followed by prompt alerts to the community (Negoro et al., in this conference), and production of X-ray all-sky maps and new source catalogs with unprecedented sensitivities. To extract the best capabilities of the MAXI mission, we are working on the development of detailed image analysis tools. We utilize maximum likelihood fitting to a projected sky image, where we take account of the complicated detector responses, such as the background and point spread functions (PSFs). The modeling of PSFs, which strongly depend on the orbit and attitude of MAXI, is a key element in the image analysis. In this paper, we present the status of our software development.

  4. Content-based music recommendation using underlying music preference structure

    OpenAIRE

    Soleymani M.; Aljanaki A.; Wiering F.; Veltkamp R.C.

    2015-01-01

    The cold start problem for new users or items is a great challenge for recommender systems. New items can be positioned within the existing items using a similarity metric to estimate their ratings. However, the calculation of similarity varies by domain and available resources. In this paper, we propose a content-based music recommender system based on a set of attributes derived from psychological studies of music preference. These five attributes, namely, Mellow, Unpretentious, So...

  5. Design Criteria For Networked Image Analysis System

    Science.gov (United States)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance with low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above stated issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  6. Multilocus genetic analysis of brain images

    Directory of Open Access Journals (Sweden)

    Derrek Paul Hibar

    2011-10-01

    Full Text Available The quest to identify genes that influence disease is now being extended to find genes that affect biological markers of disease, or endophenotypes. Brain images, in particular, provide exquisitely detailed measures of anatomy, function, and connectivity in the living human brain, and have identified characteristic features of psychiatric and neurological disorders. The emerging field of imaging genomics is discovering important genetic variants associated with brain structure and function, which in turn influence disease risk and fundamental cognitive processes. Statistical approaches for testing genetic associations are not straightforward to apply to brain images because brain imaging phenotypes are generally high dimensional and spatially complex. Neuroimaging phenotypes comprise three dimensional maps across many points in the brain, fiber tracts, shape-based analysis, and connectivity matrices, or networks. These complex data types require new methods for data reduction and joint consideration of the image and the genome. Image-wide, genome-wide searches are now feasible, but they can be greatly empowered by sparse regression or hierarchical clustering methods that isolate promising features, boosting statistical power. Here we review the evolution of statistical approaches to assess genetic influences on the brain. We outline the current state of multivariate statistics in imaging genomics, and future directions, including meta-analysis. We emphasize the power of novel multivariate approaches to discover reliable genetic influences with small effect sizes.

  7. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridg

  8. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    National Research Council Canada - National Science Library

    Liang, Jianming

    2001-01-01

    Dynamic Chest Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence...

  9. Aneuploid polyclonality in image analysis.

    Science.gov (United States)

    Alderisio, M; Ribotta, G; Giarnieri, E; Midulla, C; Ferranti, S; Narilli, P; Nofroni, I; Vecchione, A

    1996-05-01

    Solid tumors such as colorectal adenocarcinomas consist of biologically diverse cell subpopulations. The nuclear DNA content of tumor cells in colorectal carcinomas may be studied with different techniques of intranuclear DNA quantification. In the current study, the DNA ploidy of samples obtained from 68 patients with colorectal carcinoma (age ranging from 46 to 86 years, mean age 66 years), treated with radical surgery between 1992 and 1995, was analyzed. DNA ploidy was assessed using a CAS 200 image analyzer and was evaluated on neoplastic tissue and undamaged healthy mucosa obtained from the edges of the surgical resection. Approximately 150-300 cells were analyzed for each sample. The aim of this study was to evaluate the prognostic significance of the polyclonal cases correlated with lymph node infiltration and disease-free survival. The pathological stage according to the TNM classification was compared to ploidy: an increase in multiple stemlines was observed in stage III cases, i.e., a progression towards aneuploidy and multiple stemlines was significantly associated with lymphatic metastasis (p<0.0003). Concerning distant metastasis, we found a correlation between stage IV and polyclonality. A significant correlation was observed between disease-free survival and aneuploid and polyclonal cases (p<0.0053). In polyclonal cases, a ninefold greater relapse risk compared to non-polyclonal cases was observed (p<0.0004). In two cases, the adenocarcinoma of the sigma was polyclonal and its hepatic metastasis contained the predominant aneuploid clone with the same cytometric characteristics (DNA index) as the original lesion.

  10. Morphometric image analysis of giant vesicles

    DEFF Research Database (Denmark)

    Husen, Peter Rasmussen; Arriaga, Laura; Monroy, Francisco

    2012-01-01

    We have developed a strategy to determine lengths and orientations of tie lines in the coexistence region of liquid-ordered and liquid-disordered phases of cholesterol-containing ternary lipid mixtures. The method combines confocal-fluorescence-microscopy image stacks of giant unilamellar vesicles (GUVs), a dedicated 3D-image analysis, and a quantitative analysis based on equilibrium thermodynamic considerations. This approach was tested in GUVs composed of 1,2-dioleoyl-sn-glycero-3-phosphocholine/1,2-palmitoyl-sn-glycero-3-phosphocholine/cholesterol. In general, our results show a reasonable...

  11. Multispectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2012-01-01

    Industrial quality inspection using image analysis on astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. The pellets were divided into two groups: one with pellets coated using synthetic astaxanthin in fish oil and the other with pellets coated ... products with optimal use of pigment and minimum amount of waste.

  12. Fourier analysis: from cloaking to imaging

    International Nuclear Information System (INIS)

    Wu, Kedi; Ping Wang, Guo; Cheng, Qiluan

    2016-01-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach that analytically unifies both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers. (review)

  13. Fourier analysis: from cloaking to imaging

    Science.gov (United States)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach that analytically unifies both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.

  14. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  15. Hyperspectral Image Analysis of Food Quality

    DEFF Research Database (Denmark)

    Arngren, Morten

    Assessing the quality of food is a vital step in any food processing line to ensure the best food quality and maximum profit for the farmer and food manufacturer. Traditional quality evaluation methods are often destructive and labour-intensive procedures relying on wet chemistry or subjective human inspection. Near-infrared spectroscopy can address these issues by offering a fast and objective analysis of the food quality. A natural extension to these single-spectrum NIR systems is to include image information such that each pixel holds a NIR spectrum. This augmented image information offers several extensions to the analysis of food quality. This dissertation is concerned with hyperspectral image analysis used to assess the quality of single grain kernels. The focus is to highlight the benefits and challenges of using hyperspectral imaging for food quality, presented in two research directions. Initially...

  16. Curvelet based offline analysis of SEM images.

    Directory of Open Access Journals (Sweden)

    Syed Hamad Shirazi

    Full Text Available Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process and requires continuous human intervention and effort. This paper presents an image processing based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step, aimed at noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm, for fractal dimension (FD) calculations, with the ultimate goal of measuring parameters such as surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known Watershed segmentation algorithm.
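The box-counting step the abstract relies on is easy to sketch. The code below is an illustrative, self-contained version (function names are our own, not from the paper): count occupied boxes at dyadic scales on a binary mask and fit the slope of log(count) against log(1/size).

```python
import numpy as np

# Box-counting fractal dimension of a binary mask (illustrative sketch).
def box_count(mask, size):
    h, w = mask.shape
    h2, w2 = h - h % size, w - w % size          # crop to a multiple of size
    blocks = mask[:h2, :w2].reshape(h2 // size, size, w2 // size, size)
    return int(blocks.any(axis=(1, 3)).sum())    # boxes containing any pixel

def fractal_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    counts = [box_count(mask, s) for s in sizes]
    # slope of log(count) vs log(1/size) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should give a dimension close to 2.
mask = np.zeros((128, 128), dtype=bool)
mask[16:112, 16:112] = True
print(round(fractal_dimension(mask), 2))         # 2.0
```

The same occupied-box bookkeeping, applied to boundary pixels only, gives the indirect perimeter estimate mentioned in the abstract.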

  17. Data Analysis Strategies in Medical Imaging.

    Science.gov (United States)

    Parmar, Chintan; Barry, Joseph D; Hosny, Ahmed; Quackenbush, John; Aerts, Hugo Jwl

    2018-03-26

    Radiographic imaging continues to be one of the most effective and clinically useful tools within oncology. Sophistication of artificial intelligence (AI) has allowed for detailed quantification of radiographic characteristics of tissues using predefined engineered algorithms or deep learning methods. Precedents in radiology as well as a wealth of research studies hint at the clinical relevance of these characteristics. However, there are critical challenges associated with the analysis of medical imaging data. While some of these challenges are specific to the imaging field, many others like reproducibility and batch effects are generic and have already been addressed in other quantitative fields such as genomics. Here, we identify these pitfalls and provide recommendations for analysis strategies of medical imaging data including data normalization, development of robust models, and rigorous statistical analyses. Adhering to these recommendations will not only improve analysis quality, but will also enhance precision medicine by allowing better integration of imaging data with other biomedical data sources. Copyright ©2018, American Association for Cancer Research.

  18. Measuring toothbrush interproximal penetration using image analysis

    Science.gov (United States)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
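The area measurement described above reduces to counting thresholded pixels inside a masked region. A hedged sketch with synthetic images follows; the threshold value and ROI geometry are invented for illustration, not taken from the paper.

```python
import numpy as np

# Fraction of stained (dark) pixels inside an interproximal ROI mask.
def stain_fraction(img, roi_mask, threshold=100):
    stained = (img < threshold) & roi_mask
    return stained.sum() / roi_mask.sum()

roi = np.zeros((100, 100), dtype=bool)
roi[20:80, 20:80] = True                           # hypothetical ROI
before = np.full((100, 100), 50, dtype=np.uint8)   # fully stained surface
after = before.copy()
after[20:80, 20:50] = 200                          # brushing cleared the left half
removed = 1 - stain_fraction(after, roi) / stain_fraction(before, roi)
print(round(removed, 2))                           # 0.5: half the stain removed
```

Comparing this fraction across brush designs is what replaces the prior visual grading.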

  19. A virtual laboratory for medical image analysis

    NARCIS (Netherlands)

    Olabarriaga, Sílvia D.; Glatard, Tristan; de Boer, Piter T.

    2010-01-01

    This paper presents the design, implementation, and usage of a virtual laboratory for medical image analysis. It is fully based on the Dutch grid, which is part of the Enabling Grids for E-sciencE (EGEE) production infrastructure and driven by the gLite middleware. The adopted service-oriented

  20. Scanning transmission electron microscopy imaging and analysis

    CERN Document Server

    Pennycook, Stephen J

    2011-01-01

    Provides the first comprehensive treatment of the physics and applications of this mainstream technique for imaging and analysis at the atomic level Presents applications of STEM in condensed matter physics, materials science, catalysis, and nanoscience Suitable for graduate students learning microscopy, researchers wishing to utilize STEM, as well as for specialists in other areas of microscopy Edited and written by leading researchers and practitioners

  1. Using Image Analysis to Build Reading Comprehension

    Science.gov (United States)

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  2. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques using fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experimental tests were carried out in a model industrial burner at different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermal acoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determine flame stability. In this paper, an image processing method is proposed to determine flame velocity. A power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to automatically determine flame stability. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
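The PSD computation mentioned above can be sketched with a plain periodogram on a synthetic mean-luminosity trace. The 1 kHz sampling rate and 120 Hz oscillation below are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Periodogram of a synthetic flame-luminosity signal: locate the dominant
# oscillation frequency, the quantity a PSD-based stability check looks at.
fs = 1000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 120.0 * t) + 0.3 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2 / (fs * t.size)
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
dominant = freqs[np.argmax(spectrum)]
print(dominant)                               # ~120 Hz
```

In practice the input trace would come from averaging pixel intensities over the flame region of each infrared frame, and a Welch estimate would reduce the variance of the spectrum.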

  3. Automated Aesthetic Analysis of Photographic Images.

    Science.gov (United States)

    Aydın, Tunç Ozan; Smolic, Aljoscha; Gross, Markus

    2015-01-01

    We present a perceptually calibrated system for automatic aesthetic evaluation of photographic images. Our work builds upon the concepts of no-reference image quality assessment, with the main difference being our focus on rating image aesthetic attributes rather than detecting image distortions. In contrast to recent attempts at highly subjective aesthetic judgment problems such as binary aesthetic classification and the prediction of an image's overall aesthetics rating, our method aims at providing a reliable objective basis of comparison between aesthetic properties of different photographs. To that end our system computes perceptually calibrated ratings for a set of fundamental and meaningful aesthetic attributes, which together form an "aesthetic signature" of an image. We show that aesthetic signatures can still be used to improve upon the current state of the art in automatic aesthetic judgment, but also enable interesting new photo editing applications such as automated aesthetic analysis, HDR tone mapping evaluation, and providing aesthetic feedback during multi-scale contrast manipulation.

  4. Computed image analysis of neutron radiographs

    International Nuclear Information System (INIS)

    Dinca, M.; Anghel, E.; Preda, M.; Pavelescu, M.

    2008-01-01

    Similar to X-radiography, there is in practice a nondestructive technique, named neutron radiology, that uses neutrons as the penetrating particles. When the information is registered on a film with the help of a conversion foil (with a high cross section for neutrons) that emits secondary radiation (β, γ) creating a latent image, the technique is named neutron radiography. A radiographic industrial film that contains the image of the internal structure of an object, obtained by neutron radiography, must subsequently be analyzed to obtain qualitative and quantitative information about the structural integrity of that object. It is possible to perform a computed analysis of a film using a facility with the following main components: an illuminator for the film, a CCD video camera and a computer (PC) with suitable software. The qualitative analysis aims to reveal possible anomalies of the structure due to manufacturing processes or induced by working processes (for example, irradiation in the case of nuclear fuel). The quantitative determination is based on measurements of some image parameters: dimensions and optical densities. The illuminator was built specially for this application but can also be used for simple visual observation. The illuminated area is 9x40 cm. The frame of the system is an Abbe comparator of Carl Zeiss Jena type, which has been adapted for this application. The video camera captures the image, which is stored and processed by the computer. A special program, SIMAG-NG, has been developed at INR Pitesti which, together with the program SMTV II of the special acquisition module SM 5010, can analyze the images of a film. The major application of the system was the quantitative analysis of a film containing the images of nuclear fuel pins beside a dimensional standard. The system was used to measure the length of the pellets of the TRIGA nuclear fuel. (authors)

  5. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops Web based distributed image analysis system processing the Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  6. Quantitative Image Simulation and Analysis of Nanoparticles

    DEFF Research Database (Denmark)

    Madsen, Jacob; Hansen, Thomas Willum

    High Resolution Transmission Electron Microscopy (HRTEM) has become a routine analysis tool for structural characterization at atomic resolution, and with the recent development of in-situ TEMs, it is now possible to study catalytic nanoparticles under reaction conditions. However, the connection between an experimental image and the underlying ... of strain measurements from TEM images, and investigate the stability of these measurements to microscope parameters. This is followed by our efforts toward simulating metal nanoparticles on a metal-oxide support using the Charge Optimized Many Body (COMB) interatomic potential. The simulated interface ...

  7. Study of TCP densification via image analysis

    International Nuclear Information System (INIS)

    Silva, R.C.; Alencastro, F.S.; Oliveira, R.N.; Soares, G.A.

    2011-01-01

    Among ceramic materials that mimic human bone, β-type tri-calcium phosphate (β-TCP) has shown appropriate chemical stability and a superior resorption rate when compared to hydroxyapatite. In order to increase its mechanical strength, the material is sintered under controlled time and temperature conditions to obtain densification without phase change. In the present work, tablets were produced via uniaxial compression and then sintered at 1150°C for 2 h. Analysis via XRD and FTIR showed that the sintered tablets were composed only of β-TCP. The SEM images were used for quantification of grain size and volume fraction of pores via digital image analysis. The tablets showed a small pore fraction (between 0.67% and 6.38%) and a homogeneous grain size distribution (∼2 μm). Therefore, the analysis method seems viable for quantifying porosity and grain size. (author)

  8. Analysis of renal nuclear medicine images

    International Nuclear Information System (INIS)

    Jose, R.M.J.

    2000-01-01

    Nuclear medicine imaging of the renal system involves producing time-sequential images showing the distribution of a radiopharmaceutical in the renal system. Producing numerical and graphical data from nuclear medicine studies requires defining regions of interest (ROIs) around various organs within the field of view, such as the left kidney, right kidney and bladder. Automating this process has several advantages: a saving of a clinician's time, and enhanced objectivity and reproducibility. This thesis describes the design, implementation and assessment of an automatic ROI generation system. The performance of the system described in this work is assessed by comparing the results to those obtained using manual techniques. Since nuclear medicine images are inherently noisy, the sequence of images is reconstructed using the first few components of a principal components analysis in order to reduce the noise in the images. An image of the summed reconstructed sequence is then formed. This summed image is segmented by using an edge co-occurrence matrix as a feature space for simultaneously classifying regions and locating boundaries. Two methods for assigning the regions of a segmented image to organ class labels are assessed. The first method is based on using Dempster-Shafer theory to combine uncertain evidence from several sources into a single body of evidence; the second method makes use of a neural network classifier. The use of each technique in classifying the regions of a segmented image is assessed in separate experiments using 40 real patient studies. A comparative assessment of the two techniques shows that the neural network produces more accurate region labels for the kidneys. The optimum neural system is determined experimentally. Results indicate that combining temporal and spatial information with a priori clinical knowledge produces reasonable ROIs. Consistency in the neural network assignment of regions is enhanced by taking account of the contextual
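The principal-components noise reduction used on the image sequence can be sketched as a truncated SVD on the flattened frames. The data below are synthetic and the component count is an arbitrary illustrative choice, not the thesis's setting.

```python
import numpy as np

# Denoise a (time x height x width) sequence by keeping the first few
# principal components of the flattened (time x pixels) matrix.
def pca_denoise(frames, n_components=3):
    t, h, w = frames.shape
    X = frames.reshape(t, h * w)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    X_hat = U[:, :n_components] * s[:n_components] @ Vt[:n_components] + mean
    return X_hat.reshape(t, h, w)

rng = np.random.default_rng(3)
clean = np.stack([np.full((16, 16), k, float) for k in range(20)])  # smooth ramp
noisy = clean + rng.standard_normal(clean.shape)
denoised = pca_denoise(noisy, n_components=2)
err_noisy = np.abs(noisy - clean).mean()
err_denoised = np.abs(denoised - clean).mean()
print(err_denoised < err_noisy)               # True: noise is reduced
```

Summing the reconstructed frames then gives the low-noise image that the segmentation stage operates on.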

  9. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  10. Quantitative Analysis in Nuclear Medicine Imaging

    CERN Document Server

    2006-01-01

    This book provides a review of image analysis techniques as they are applied in the field of diagnostic and therapeutic nuclear medicine. Driven in part by the remarkable increase in computing power and its ready and inexpensive availability, this is a relatively new yet rapidly expanding field. Likewise, although the use of radionuclides for diagnosis and therapy has origins dating back almost to the discovery of natural radioactivity itself, radionuclide therapy and, in particular, targeted radionuclide therapy has only recently emerged as a promising approach for therapy of cancer and, to a lesser extent, other diseases. An effort has, therefore, been made to place the reviews provided in this book in a broader context. The effort to do this is reflected by the inclusion of introductory chapters that address basic principles of nuclear medicine imaging, followed by an overview of issues that are closely related to quantitative nuclear imaging and its potential role in diagnostic and therapeutic applications. ...

  11. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    Directory of Open Access Journals (Sweden)

    Andre Santos Ribeiro

    2015-07-01

    Full Text Available Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox has been able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of reducing time wasted in data processing and allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph-theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting-state fMRI data, and 10 subjects with 18F-altanserin PET data. Results. It was observed both a high inter

  12. Single particle raster image analysis of diffusion.

    Science.gov (United States)

    Longfils, M; Schuster, E; Lorén, N; Särkkä, A; Rudemo, M

    2017-04-01

    As a complement to the standard RICS method of analysing Raster Image Correlation Spectroscopy images by estimation of the image correlation function, we introduce the method SPRIA, Single Particle Raster Image Analysis. Here, we start by identifying individual particles and estimate the diffusion coefficient for each particle by a maximum likelihood method. Averaging over the particles gives a diffusion coefficient estimate for the whole image. In examples with both simulated and experimental data, we show that the new method gives accurate estimates. It also directly provides standard error estimates. The method should be possible to extend to the study of heterogeneous materials and systems of particles with varying diffusion coefficients, as demonstrated in a simple simulation example. A requirement for applying the SPRIA method is that the particle concentration is low enough that we can identify individual particles. We also describe a bootstrap method for estimating the standard error of standard RICS. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
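For pure 2-D Brownian motion without localization noise, the per-particle maximum-likelihood estimator reduces to the mean squared frame-to-frame displacement divided by 4Δt. The sketch below runs this on a simulated track; it is not the authors' implementation, which also accounts for the raster scanning pattern.

```python
import numpy as np

# MLE of the diffusion coefficient from one 2-D Brownian track sampled at
# interval dt (no localization noise assumed): D = <|step|^2> / (4*dt).
def estimate_D(track, dt):
    steps = np.diff(track, axis=0)            # (N-1, 2) frame-to-frame jumps
    return (steps ** 2).sum() / (4 * dt * len(steps))

rng = np.random.default_rng(4)
D_true, dt, n = 2.0, 0.1, 5000
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n, 2))
track = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])
print(round(estimate_D(track, dt), 1))        # close to 2.0
```

Averaging this per-particle estimate over all identified particles yields the image-level value, and resampling particles gives the bootstrap standard error the abstract mentions.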

  13. A Robust Color Object Analysis Approach to Efficient Image Retrieval

    Directory of Open Access Journals (Sweden)

    Zhang Ruofei

    2004-01-01

    Full Text Available We describe a novel indexing and retrieval methodology that integrates color, texture, and shape information for content-based image retrieval in image databases. This methodology, which we call CLEAR, applies unsupervised image segmentation to partition an image into a set of objects. Fuzzy color histogram, fuzzy texture, and fuzzy shape properties of each object are then calculated to form its signature. The fuzzification procedures effectively resolve the recognition uncertainty stemming from color quantization and human perception of colors. At the same time, the fuzzy scheme incorporates segmentation-related uncertainties into the retrieval algorithm. An adaptive and effective measure of the overall similarity between images is developed by integrating the properties of all the objects in every image. To further improve retrieval efficiency, a secondary clustering technique is developed and employed, which significantly reduces query processing time without compromising retrieval precision. A prototype CLEAR system that we developed demonstrates promising retrieval performance and robustness to color variations and segmentation-related uncertainties on a test database of general-purpose color images, as compared with its peer systems in the literature.

  14. Pain related inflammation analysis using infrared images

    Science.gov (United States)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for the assessment of painful abnormal inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently appearing form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms was captured from patients with RA and OA following a thermogram acquisition standard. The thermograms are pre-processed, and areas of interest are extracted for further processing. The spread of inflammation is investigated along with a statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain disease; ii) analysis of the spread of the inflammation related to RA and OA using K-means clustering; iii) first- and second-order statistical analysis of pre-processed thermograms. The conclusion is that, in most cases, RA-oriented inflammation affects both knees, whereas inflammation related to OA is present in a single knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
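
    The K-means step used to map the spread of inflammation can be illustrated with a plain numpy sketch that clusters pixel temperatures into "cool" and "warm" classes. This is a generic stand-in, not the authors' pipeline, and the synthetic temperatures are invented for the example.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Plain k-means on scalar pixel temperatures (a hypothetical stand-in
    for the thermogram clustering, not the authors' implementation)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute centers
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, np.sort(centers)

# synthetic thermogram pixels: cool background ~30 C, warm "inflamed" patch ~36 C
rng = np.random.default_rng(1)
temps = np.concatenate([np.full(900, 30.0), np.full(100, 36.0)])
temps += rng.normal(0, 0.3, temps.size)
labels, centers = kmeans_1d(temps)
```

    On a real thermogram the warm cluster's spatial extent in each knee would then be compared between sides to detect the contralateral asymmetries mentioned above.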

  15. Semiautomatic digital imaging system for cytogenetic analysis

    International Nuclear Information System (INIS)

    Chaubey, R.C.; Chauhan, P.C.; Bannur, S.V.; Kulgod, S.V.; Chadda, V.K.; Nigam, R.K.

    1999-08-01

    The paper describes a digital image processing system, developed indigenously at BARC, for size measurement of microscopic biological objects such as the cell, nucleus and micronucleus in mouse bone marrow and in cytochalasin-B blocked human lymphocytes in vitro, and for numerical counting and karyotyping of metaphase chromosomes of human lymphocytes. Errors in karyotyping of chromosomes by the imaging system may creep in due to the lack of a well-defined centromere position or extensive bending of chromosomes, which may result from poor-quality preparations. Good metaphase preparations are mandatory for precise and accurate analysis by the system. Additional new morphological parameters for each chromosome have to be incorporated to improve the accuracy of karyotyping. Though the experienced cytogeneticist is the final judge, the system assists him/her in carrying out the analysis much faster than manual scoring. Further experimental studies are in progress to validate the different software packages developed for various cytogenetic applications. (author)

  16. Semantic content-based recommendations using semantic graphs.

    Science.gov (United States)

    Guo, Weisen; Kraines, Steven B

    2010-01-01

    Recommender systems (RSs) can be useful for suggesting items that might be of interest to specific users. Most existing content-based recommendation (CBR) systems are designed to recommend items based on text content, and the items in these systems are usually described with keywords. However, similarity evaluations based on keywords suffer from the ambiguity of natural languages. We present a semantic CBR method that uses Semantic Web technologies to recommend items that are semantically more similar to the items that the user prefers. We use semantic graphs to represent the items and calculate the similarity score for each pair of semantic graphs using an inverse graph frequency algorithm. The items with higher similarity scores to the items known to be preferred by the user are recommended.
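
    The inverse graph frequency idea can be sketched as an IDF-style weighting over the triples of each item's semantic graph: triples shared by few items carry more weight, and items are ranked by weighted triple overlap with the user's preferred items. The paper's exact algorithm is not given in the abstract, so the following is a hedged approximation with invented item graphs.

```python
import math
from collections import Counter

def igf_weights(graphs):
    """Inverse graph frequency: triples occurring in few item graphs weigh
    more (an IDF-style stand-in for the paper's algorithm)."""
    df = Counter(t for g in graphs.values() for t in set(g))
    n = len(graphs)
    return {t: math.log(n / df[t]) for t in df}

def similarity(g1, g2, w):
    """Cosine-style weighted overlap between two triple sets."""
    shared = set(g1) & set(g2)
    denom = math.sqrt(sum(w[t] for t in set(g1)) * sum(w[t] for t in set(g2)))
    return sum(w[t] for t in shared) / denom if denom else 0.0

def recommend(graphs, liked, k=1):
    """Rank unseen items by their best similarity to any liked item."""
    w = igf_weights(graphs)
    scores = {i: max(similarity(graphs[i], graphs[j], w) for j in liked)
              for i in graphs if i not in liked}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# invented items described by subject-predicate-object triples
items = {
    "paper_a": [("enzyme", "catalyzes", "reaction"), ("cell", "contains", "enzyme")],
    "paper_b": [("enzyme", "catalyzes", "reaction"), ("gene", "encodes", "enzyme")],
    "paper_c": [("planet", "orbits", "star")],
}
top = recommend(items, {"paper_a"})
```

    Here the biology paper sharing a triple with the liked item outranks the unrelated astronomy item, which a bag-of-keywords match on an ambiguous word could miss.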

  17. Content-based Music Search and Recommendation System

    Science.gov (United States)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has increased the user's cost of finding music data suiting their preference in such a large data set. We propose a content-based music search and recommendation system. The system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By visualizing the feature space of music and visualizing the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system presents information obtained from the user profile when the user is searching music data, and information obtained from the feature space of music when the user is editing the user profile.

  18. Image Processing and Analysis in Geotechnical Investigation

    Czech Academy of Sciences Publication Activity Database

    Ščučka, Jiří; Martinec, Petr; Šňupárek, Richard; Veselý, V.

    2006-01-01

    Roč. 21, 3-4 (2006), s. 1-6 ISSN 0886-7798. [AITES-ITA 2006 World Tunnel Congres and ITA General Assembly /32./. Seoul, 22.04.2006-27.04.2006] Institutional research plan: CEZ:AV0Z30860518 Keywords : underground working face * digital photography * image analysis Subject RIV: DB - Geology ; Mineralogy Impact factor: 0.278, year: 2006

  19. [Computerized image analysis applied to urology research].

    Science.gov (United States)

    Urrutia Avisrror, M

    1994-05-01

    Diagnosis with the aid of imaging techniques in urology has developed dramatically over the last few years as a result of state-of-the-art technology that has added digital angiography to the latest generation of ultrasound apparatus. Computerized axial tomography and nuclear magnetic resonance offer very high diagnostic potential that only a decade ago was not available for routine use. Each of these examination procedures has its own limits of sensitivity and specificity, which vary as a function of the pathoanatomical characteristics of the condition to be explored, although none yet reaches absolute values. With ultrasound, CAT and NMR, identification of the various diseases relies on the analysis of densities, although with a significant degree of examiner subjectivity in the diagnostic judgement. The logical evolution of these techniques is to eliminate this subjective component and translate the features that characterize each disease into quantifiable parameters, a challenge made feasible by computerized analysis. Thanks to technological advances in the field of microcomputers and the decreased cost of equipment, it is currently possible for any clinical investigator with average resources to use the most sophisticated image analysis techniques for post-processing of the images obtained, opening in practical investigation a pathway that just a few years ago was exclusive to certain organizations due to the high cost involved.

  20. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements of clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models into the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images such as radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.

  1. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse image data quantitatively. Generally, the analysis of an image has to be made visually and measurements are made manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. Image processing programs can be used for the analysis of image texture and periodic structure by application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties such as mechanical strength, stress, heat conductivity, resistance, capacitance and other electric and magnetic properties of materials. This paper presents the application of digital image processing to the characterization and analysis of microscopic images.
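
    The Fourier-based periodicity analysis mentioned above can be illustrated with a small numpy sketch: the dominant peak of the 2D power spectrum of a striped "micrograph" recovers the stripe period. This is a generic demonstration, not the system described in the paper.

```python
import numpy as np

# synthetic micrograph: vertical stripes with an 8-pixel period plus noise
n, period = 64, 8
x = np.arange(n)
img = np.sin(2 * np.pi * x / period)[None, :].repeat(n, axis=0)
img += np.random.default_rng(0).normal(0, 0.2, (n, n))

spec = np.abs(np.fft.fft2(img)) ** 2   # 2D power spectrum
spec[0, 0] = 0.0                       # discard the DC (mean intensity) term
ky, kx = np.unravel_index(spec.argmax(), spec.shape)
kx = min(kx, n - kx)                   # fold the symmetric negative frequency
estimated_period = n / kx              # dominant spatial period in pixels
```

    The peak sits at frequency index n/period along the stripe axis, so the recovered period is 8 pixels; the peak's angular position would likewise give the orientation of a periodic lattice.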

  2. Content-Based Information Retrieval from Forensic Databases

    NARCIS (Netherlands)

    Geradts, Z.J.M.H.

    2002-01-01

    In forensic science, the number of image databases is growing rapidly. For this reason, it is necessary to have a proper procedure for searching these image databases by content. The use of image databases results in more solved crimes; furthermore, statistical information can be obtained

  3. Analysis of Pregerminated Barley Using Hyperspectral Image Analysis

    DEFF Research Database (Denmark)

    Arngren, Morten; Hansen, Per Waaben; Eriksen, Birger

    2011-01-01

    imaging system in a mathematical modeling framework to identify pregerminated barley at an early stage of approximately 12 h of pregermination. Our model only assigns pregermination as the cause for a single kernel’s lack of germination and is unable to identify dormancy, kernel damage etc. The analysis...

  4. Nursing image: an evolutionary concept analysis.

    Science.gov (United States)

    Rezaei-Adaryani, Morteza; Salsali, Mahvash; Mohammadi, Eesa

    2012-12-01

    A long-term challenge to the nursing profession is the concept of image. In this study, we used the Rodgers' evolutionary concept analysis approach to analyze the concept of nursing image (NI). The aim of this concept analysis was to clarify the attributes, antecedents, consequences, and implications associated with the concept. We performed an integrative internet-based literature review to retrieve English literature published from 1980-2011. Findings showed that NI is a multidimensional, all-inclusive, paradoxical, dynamic, and complex concept. The media, invisibility, clothing style, nurses' behaviors, gender issues, and professional organizations are the most important antecedents of the concept. We found that NI is pivotal in staff recruitment and nursing shortage, resource allocation to nursing, nurses' job performance, workload, burnout and job dissatisfaction, violence against nurses, public trust, and salaries available to nurses. An in-depth understanding of the NI concept would assist nurses to eliminate negative stereotypes and build a more professional image for the nurse and the profession.

  5. Remote Sensing Digital Image Analysis An Introduction

    CERN Document Server

    Richards, John A

    2013-01-01

    Remote Sensing Digital Image Analysis provides the non-specialist with a treatment of the quantitative analysis of satellite and aircraft derived remotely sensed data. Since the first edition of the book there have been significant developments in the algorithms used for the processing and analysis of remote sensing imagery; nevertheless many of the fundamentals have substantially remained the same.  This new edition presents material that has retained value since those early days, along with new techniques that can be incorporated into an operational framework for the analysis of remote sensing data. The book is designed as a teaching text for the senior undergraduate and postgraduate student, and as a fundamental treatment for those engaged in research using digital image processing in remote sensing.  The presentation level is for the mathematical non-specialist.  Since the very great number of operational users of remote sensing come from the earth sciences communities, the text is pitched at a leve...

  6. Preprocessing with image denoising and histogram equalization for endoscopy image analysis using texture analysis.

    Science.gov (United States)

    Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki

    2015-08-01

    A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer from images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly due to the influence of noise and halation. Therefore, we propose a new preprocessing method with a non-local means filter for de-noising and contrast limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in images can be improved by the proposed method. Furthermore, the lesion zone is shown more correctly by the obtained color map.
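
    The contrast-enhancement half of the preprocessing can be illustrated with a minimal global histogram equalization in numpy. The paper's CLAHE variant additionally tiles the image and clips the histogram before building the mapping, and its non-local means de-noising step is omitted here; this is a generic sketch, not the authors' code.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit image. CLAHE, as used in
    the paper, additionally works on local tiles and clips the histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # cdf at the darkest present level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]                                 # remap every pixel through the LUT

# low-contrast test image confined to grey levels [100, 140]
rng = np.random.default_rng(0)
img = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
out = hist_equalize(img)
```

    The narrow input range is stretched to the full 0-255 range, which is what makes the mucosal texture patterns more visible to the subsequent co-occurrence and run-length analysis.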

  7. Machine Learning Interface for Medical Image Analysis.

    Science.gov (United States)

    Zhang, Yi C; Kagen, Alexander C

    2017-10-01

    TensorFlow is a second-generation open-source machine learning software library with a built-in framework for implementing neural networks in a wide variety of perceptual tasks. Although TensorFlow usage is well established with computer vision datasets, the TensorFlow interface with DICOM formats for medical imaging remains to be established. Our goal was to extend the TensorFlow API to accept raw DICOM images as input; 1513 DaTscan DICOM images were obtained from the Parkinson's Progression Markers Initiative (PPMI) database. DICOM pixel intensities were extracted and shaped into tensors, or n-dimensional arrays, to populate the training, validation, and test input datasets for machine learning. A simple neural network was constructed in TensorFlow to classify images into normal or Parkinson's disease groups. Training was executed over 1000 iterations for each cross-validation set. The gradient descent and Adagrad optimization algorithms were used to minimize cross-entropy between the predicted and ground-truth labels. Cross-validation was performed ten times to produce a mean accuracy of 0.938 ± 0.047 (95% CI 0.908-0.967). The mean sensitivity was 0.974 ± 0.043 (95% CI 0.947-1.00) and the mean specificity was 0.822 ± 0.207 (95% CI 0.694-0.950). We extended the TensorFlow API to enable DICOM compatibility in the context of DaTscan image analysis. We implemented a neural network classifier that produces diagnostic accuracies on par with excellent results from previous machine learning models. These results indicate the potential role of TensorFlow as a useful adjunct diagnostic tool in the clinical setting.
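
    The core training step, flattening pixel arrays into tensors and minimizing cross-entropy by gradient descent, can be sketched in plain numpy. The paper uses TensorFlow on real DaTscan DICOMs; the synthetic "scans" and the single-layer softmax network below are illustrative stand-ins, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-ins for flattened DICOM pixel arrays: two synthetic "scan" classes
n, d = 200, 64
X = rng.normal(0, 1, (n, d))
y = (rng.random(n) < 0.5).astype(int)
X[y == 1] += 1.0                      # class-1 scans are brighter on average

W = np.zeros((d, 2))
b = np.zeros(2)
onehot = np.eye(2)[y]
for _ in range(300):                  # plain gradient descent on cross-entropy
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)   # softmax class probabilities
    grad = p - onehot                   # d(cross-entropy)/d(logits)
    W -= 0.1 * X.T @ grad / n
    b -= 0.1 * grad.mean(axis=0)

acc = float(((X @ W + b).argmax(axis=1) == y).mean())
```

    The gradient of softmax cross-entropy with respect to the logits is simply the predicted probabilities minus the one-hot labels, which is the quantity TensorFlow's optimizers (gradient descent, Adagrad) propagate internally.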

  8. Use of Content Based Instruction and Socratic Discussion for ESL Undergraduate Biomedical Science Students to Develop Critical Thinking Skills

    Science.gov (United States)

    Burder, Ronan L.; Tangalaki, Kathy; Hryciw, Deanne H.

    2014-01-01

    Content based language instruction can assist English as a second language (ESL) students to achieve better learning and teaching outcomes, however, it is primarily used to understand content, and may not help to develop critical analysis skills. Here we describe a pilot study that used a "Socratic" small-group discussion in addition to…

  9. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents developments in software connected with digital image processing and analysis in the Centre. In image processing, one resorts either to altering grey level values so as to enhance features in the image, or to transform-domain operations for restoration or filtering. Typical transform-domain operations like Karhunen-Loeve transforms are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey level images into images contained within selectable windows, for the purpose of estimating geometrical features in the image, like area, perimeter, projections etc. In short, in image processing both the input and output are images, whereas in image analysis the input is an image and the output is a set of numbers and graphs. (author). 19 refs
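
    The windowed segmentation and geometric-feature step described above can be sketched in numpy: threshold a selectable window, then measure area, a 4-connected perimeter, and row/column projections. The function and window layout are hypothetical illustrations, not the Centre's software.

```python
import numpy as np

def window_features(img, window, threshold):
    """Segment a grey-level image inside a selectable window and return
    simple geometric features: area, 4-connected perimeter, projections."""
    (r0, r1), (c0, c1) = window
    mask = img[r0:r1, c0:c1] >= threshold
    area = int(mask.sum())
    # perimeter: count exposed edges of foreground pixels (4-neighbourhood)
    padded = np.pad(mask, 1)
    edges = sum(np.logical_and(padded[1:-1, 1:-1],
                               ~np.roll(padded, s, axis=a)[1:-1, 1:-1]).sum()
                for a in (0, 1) for s in (1, -1))
    return area, int(edges), mask.sum(axis=0), mask.sum(axis=1)

img = np.zeros((10, 10), dtype=np.uint8)
img[3:6, 3:7] = 200                    # a 3x4 bright blob
area, perim, proj_c, proj_r = window_features(img, ((0, 10), (0, 10)), 128)
```

    For the 3x4 blob the area is 12 pixels and the exposed-edge perimeter is 2*(3+4) = 14; the projections are the per-column and per-row foreground counts, the "numbers and graphs" output the report refers to.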

  10. Automatic indexing of news video for content-based retrieval

    Science.gov (United States)

    Yang, Myung-Sup; Yoo, Cheol-Jung; Chang, Ok-Bae

    1998-06-01

    Since it is impossible to automatically parse a general video, we investigated an integrated solution for content-based news video indexing and retrieval. A specifically structured video such as a news broadcast can be parsed because it exhibits both temporal and spatial regularities: news events with an anchor-person appear repeatedly, and a news icon and a caption are embedded in some frames. To extract the key frames automatically using this structural knowledge of news, the model used in this paper consists of news event segmentation, caption recognition and search browser modules. The three main modules presented in this paper are: (1) the news event segmentation module (NESM), for the recognition and division of anchor-person shots; (2) the caption recognition module (CRM), for detecting the caption frames in a news event, extracting their caption regions using a split-merge method, and recognizing the regions as text with OCR software; (3) the search browser module (SBM), for displaying the list of news events and the news captions included in a selected news event. The SBM also supports various search mechanisms.

  11. Neutron structure analysis using neutron imaging plate

    International Nuclear Information System (INIS)

    Karasawa, Yuko; Minezaki, Yoshiaki; Niimura, Nobuo

    1997-01-01

    Neutrons are complementary to X-rays and indispensable for structure analysis. However, because of the lack of neutron intensity, neutron analysis has not been as common as X-ray analysis. In order to overcome the intensity problem, a neutron imaging plate (NIP) has been successfully developed. The NIP has opened the door to neutron structural biology, in which all the hydrogen atoms and bound water molecules of a protein are determined, and has also contributed to the development of other fields such as neutron powder diffraction and neutron radiography. (author)

  12. Image reconstruction from Pulsed Fast Neutron Analysis

    International Nuclear Information System (INIS)

    Bendahan, Joseph; Feinstein, Leon; Keeley, Doug; Loveman, Rob

    1999-01-01

    Pulsed Fast Neutron Analysis (PFNA) has been demonstrated to detect drugs and explosives in trucks and large cargo containers. PFNA uses a collimated beam of nanosecond-pulsed fast neutrons that interact with the cargo contents to produce gamma rays characteristic of their elemental composition. By timing the arrival of the emitted radiation at an array of gamma-ray detectors, a three-dimensional elemental density map, or image, of the cargo is created. The process of determining the elemental densities is complex and requires a number of steps. The first step consists of extracting, from the characteristic gamma-ray spectra, the counts associated with the elements of interest. Other steps are needed to correct for physical quantities such as gamma-ray production cross sections and angular distributions. The image processing also includes phenomenological corrections that take into account the attenuation of the neutrons through the cargo, and the attenuation of the gamma rays from the point where they were generated to the gamma-ray detectors. Additional processing is required to map the elemental densities from the data acquisition coordinate system to a rectilinear system. This paper describes the image processing used to compute the elemental densities from the counts observed in the gamma-ray detectors.

  13. Sparse Superpixel Unmixing for Hyperspectral Image Analysis

    Science.gov (United States)

    Castano, Rebecca; Thompson, David R.; Gilmore, Martha

    2010-01-01

    Software was developed that automatically detects minerals that are present in each pixel of a hyperspectral image. An algorithm based on sparse spectral unmixing with Bayesian Positive Source Separation is used to produce mineral abundance maps from hyperspectral images. A superpixel segmentation strategy enables efficient unmixing in an interactive session. The algorithm computes statistically likely combinations of constituents based on a set of possible constituent minerals whose abundances are uncertain. A library of source spectra from laboratory experiments or previous remote observations is used. A superpixel segmentation strategy improves analysis time by orders of magnitude, permitting incorporation into an interactive user session (see figure). Mineralogical search strategies can be categorized as supervised or unsupervised. Supervised methods use a detection function, developed on previous data by hand or statistical techniques, to identify one or more specific target signals. Purely unsupervised results are not always physically meaningful, and may ignore subtle or localized mineralogy since they aim to minimize reconstruction error over the entire image. This algorithm offers advantages of both methods, providing meaningful physical interpretations and sensitivity to subtle or unexpected minerals.
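
    The unmixing step can be approximated as non-negative least squares of each (super)pixel spectrum against a library of endmember spectra. The sketch below uses projected gradient descent in plain numpy rather than the Bayesian Positive Source Separation of the actual system, and the three-"mineral" library is synthetic.

```python
import numpy as np

def unmix(pixel, library, iters=5000, lr=None):
    """Non-negative unmixing of one (super)pixel spectrum against a library
    of endmember spectra, by projected gradient descent on 0.5*||Ax - b||^2.
    (A simple stand-in for the Bayesian source-separation step.)"""
    A = library.T                            # bands x minerals
    x = np.zeros(A.shape[1])
    if lr is None:
        lr = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L step for guaranteed descent
    for _ in range(iters):
        x -= lr * (A.T @ (A @ x - pixel))    # gradient step
        x = np.maximum(x, 0.0)               # project onto the nonnegative orthant
    return x

# toy library: three "mineral" spectra over 10 bands, sparse true abundances
rng = np.random.default_rng(0)
library = rng.random((3, 10))
true_ab = np.array([0.7, 0.0, 0.3])
pixel = true_ab @ library
abund = unmix(pixel, library)
```

    The nonnegativity projection is what lets the sparse solution sit exactly on the boundary (the absent mineral's abundance stays at zero), which is the physically meaningful behaviour the abstract contrasts with purely unsupervised decompositions.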

  14. Imaging radionuclide analysis apparatus and method

    International Nuclear Information System (INIS)

    Fleming, R.H.

    1993-01-01

    Imaging neutron activation analysis apparatus is described comprising: a vacuum chamber, means for positioning a sample in said vacuum chamber, means for irradiating the sample with neutrons, means for detecting the time when and the energy of gamma rays emitted from the sample and for establishing from the detected gamma ray energies the presence of certain elements in the sample, means for detecting when delayed beta-electrons are emitted from the sample and for imaging the location on the sample from which such delayed beta-electrons are emitted, means for determining time coincidence between detection of gamma rays by said gamma ray detecting means and detection of electrons by said delayed beta-electron detecting means and means for establishing the location of certain elements on the sample from determined coincidence of detected gamma rays and detected delayed beta-electrons and the established gamma ray energies and the image of the location on the sample from which such delayed beta-electrons are emitted

  15. Analysis of image plane's Illumination in Image-forming System

    International Nuclear Information System (INIS)

    Duan Lihua; Zeng Yan'an; Zhang Nanyangsheng; Wang Zhiguo; Yin Shiliang

    2011-01-01

    In the detection of optical radiation, detection accuracy is affected to a large extent by the optical power distribution over the detector's surface. In addition, in an image-forming system, the quality of the image is largely determined by the uniformity of the image's illumination distribution. However, in practical optical systems, affected by factors such as field of view, stray light and off-axis effects, the distribution of the image's illumination tends to be non-uniform, so it is necessary to discuss the image plane's illumination in image-forming systems. In order to analyze the characteristics of the image-forming system over the full range, formulas to calculate the illumination of the image plane have been derived on the basis of photometry. Moreover, the relationship between the horizontal offset of the light source and the illumination of the image has been discussed in detail. After that, the influence of key factors such as aperture angle, off-axis distance and horizontal offset on the illumination of the image has been examined. Through numerical simulation, theoretical curves for these key factors have been obtained. The results of the numerical simulation show that it is recommended to enlarge the diameter of the exit pupil to increase the illumination of the image. The angle of view plays a negative role in the illumination distribution of the image; that is, the uniformity of the illumination distribution can be enhanced by compressing the angle of view. Lastly, it is shown that a telecentric optical design is an effective way to improve the uniformity of the illumination distribution.
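
    The paper's own formulas are not reproduced in the abstract. As a hedged illustration, the classical cos^4 law, E(theta) = E0 * cos^4(theta), captures the off-axis falloff being discussed and shows why compressing the angle of view improves uniformity.

```python
import numpy as np

def relative_illumination(theta_deg):
    """Classical cos^4 falloff of image-plane illuminance with field angle
    (an illustrative textbook model, not the paper's derivation)."""
    return np.cos(np.radians(theta_deg)) ** 4

angles = np.array([0.0, 10.0, 20.0, 30.0])   # field angles in degrees
E = relative_illumination(angles)            # illuminance relative to on-axis
```

    At 30 degrees the corner of the field receives only (sqrt(3)/2)^4 = 9/16, about 56%, of the on-axis illuminance; restricting the field angle (or using a telecentric design, whose chief rays stay parallel to the axis) keeps the factor near 1 across the image.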

  16. [A novel image processing and analysis system for medical images based on IDL language].

    Science.gov (United States)

    Tang, Min

    2009-08-01

    Medical image processing and analysis system, which is of great value in medical research and clinical diagnosis, has been a focal field in recent years. Interactive data language (IDL) has a vast library of built-in math, statistics, image analysis and information processing routines, therefore, it has become an ideal software for interactive analysis and visualization of two-dimensional and three-dimensional scientific datasets. The methodology is proposed to design a novel image processing and analysis system for medical images based on IDL. There are five functional modules in this system: Image Preprocessing, Image Segmentation, Image Reconstruction, Image Measurement and Image Management. Experimental results demonstrate that this system is effective and efficient, and it has the advantages of extensive applicability, friendly interaction, convenient extension and favorable transplantation.

  17. An expert image analysis system for chromosome analysis application

    International Nuclear Information System (INIS)

    Wu, Q.; Suetens, P.; Oosterlinck, A.; Van den Berghe, H.

    1987-01-01

    This paper reports a recent study applying a knowledge-based system approach as a new attempt to solve the problem of chromosome classification. Based on this study, a theoretical framework for an expert image analysis system is proposed. In this scheme, chromosome classification can be carried out under a hypothesize-and-verify paradigm, by integrating a rule-based component, in which the expertise of chromosome karyotyping is formulated, with an existing image analysis system that uses conventional pattern recognition techniques. Results from the existing system can be used to generate hypotheses, and with the rule-based verification and modification procedures, improvement of the classification performance can be expected.

  18. Etching and image analysis of the microstructure in marble

    DEFF Research Database (Denmark)

    Alm, Ditte; Brix, Susanne; Howe-Rasmussen, Helle

    2005-01-01

    of grains exposed on that surface are measured on the microscope images using image analysis by the program Adobe Photoshop 7.0 with Image Processing Toolkit 4.0. The parameters measured by the program on microscope images of thin sections of two marble types are used for calculation of the coefficient...

  19. From Pixels to Geographic Objects in Remote Sensing Image Analysis

    NARCIS (Netherlands)

    Addink, E.A.; Van Coillie, Frieke M.B.; Jong, Steven M. de

    Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received

  20. An image scanner for real time analysis of spark chamber images

    International Nuclear Information System (INIS)

    Cesaroni, F.; Penso, G.; Locci, A.M.; Spano, M.A.

    1975-01-01

    This note describes the semiautomatic scanning system at LNF for the analysis of spark chamber images. From the projection of the images on the scanner table, the trajectory in real space is reconstructed.

  1. Direct identification of fungi using image analysis

    DEFF Research Database (Denmark)

    Dørge, Thorsten Carlheim; Carstensen, Jens Michael; Frisvad, Jens Christian

    1999-01-01

    Filamentous fungi have often been characterized, classified or identified with a major emphasis on macromorphological characters, i.e. the size, texture and color of fungal colonies grown on one or more identification media. This approach has been rejected by several taxonomists because of the sub......Filamentous fungi have often been characterized, classified or identified with a major emphasis on macromorphological characters, i.e. the size, texture and color of fungal colonies grown on one or more identification media. This approach has been rejected by several taxonomists because...... of the subjectivity in the visual evaluation and quantification (if any) of such characters and the apparent large variability of the features. We present an image analysis approach for objective identification and classification of fungi. The approach is exemplified by several isolates of nine different species...... of the genus Penicillium, known to be very difficult to identify correctly. The fungi were incubated on YES and CYA for one week at 25 °C (3 point inoculation) in 9 cm Petri dishes. The cultures are placed under a camera where a digital image of the front of the colonies is acquired under optimal illumination...

  2. MORPHOLOGICAL GRANULOMETRIC ANALYSIS OF SEDIMENT IMAGES

    Directory of Open Access Journals (Sweden)

    Yoganand Balagurunathan

    2011-05-01

    Full Text Available Sediments are routinely analyzed in terms of the sizing characteristics of the grains of which they are composed. Via sieving methods, the grains are separated and a weight-based size distribution constructed. Various moment parameters are computed from the size distribution and these serve as sediment characteristics. This paper examines the feasibility of a fully electronic granularity analysis using digital image processing. The study uses a random model of three-dimensional grains in conjunction with the morphological method of granulometric size distributions. The random model is constructed to simulate sand, silt, and clay particle distributions. Owing to the impossibility of perfectly sifting small grains so that they do not touch, the model is used in both disjoint and non-disjoint modes, and watershed segmentation is applied in the non-disjoint mode. The image-based granulometric size distributions are transformed so that they take into account the necessity to view sediment fractions at different magnifications and in different frames. Gray-scale granulometric moments are then computed using both ordinary and reconstructive granulometries. The resulting moments are then compared to moments found from real grains in seven different sediments using standard weight-based size distributions.
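    The core of a morphological granulometric size distribution can be sketched briefly: apply openings with structuring elements of increasing size and record the image volume removed at each scale (the pattern spectrum). This is a hedged, minimal illustration on a hypothetical synthetic image, not the paper's random 3-D grain model.

```python
# Morphological granulometry sketch: the pattern spectrum of an image
# is the normalized volume removed by openings of increasing size.
import numpy as np
from scipy import ndimage

def pattern_spectrum(img, max_size):
    """Normalized volume removed by square openings of growing size."""
    volumes = [img.sum()]
    for s in range(1, max_size + 1):
        opened = ndimage.grey_opening(img, size=(2 * s + 1, 2 * s + 1))
        volumes.append(opened.sum())
    v = np.array(volumes, dtype=float)
    return -np.diff(v) / v[0]   # size density over scales 1..max_size

# Synthetic "sediment": two disjoint square grains of different sizes.
img = np.zeros((64, 64))
img[5:10, 5:10] = 1.0      # small grain, 5 px wide
img[20:35, 20:35] = 1.0    # large grain, 15 px wide

spec = pattern_spectrum(img, max_size=8)
# the small grain is removed at scale 3, the large one at scale 8
```

Moments of `spec` (mean, variance, ...) then play the role of the sieving-based moment parameters described above.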

  3. Vector processing enhancements for real-time image analysis

    International Nuclear Information System (INIS)

    Shoaf, S.

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
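    As a hedged analogy to the vector-processing speedup described above (the original work used AltiVec-class hardware on the G5, not Python), the same per-pixel operation can be written as an explicit loop and as a single vectorized, SIMD-style array expression that produces identical results:

```python
# Loop vs. vectorized form of one per-pixel diagnostic step
# (thresholding a captured frame). Illustrative analogy only.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random((120, 160))  # stand-in for a captured video frame

def threshold_loop(img, t):
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = 1.0 if img[i, j] > t else 0.0
    return out

def threshold_vec(img, t):
    return (img > t).astype(img.dtype)  # one vectorized pass

mask_loop = threshold_loop(frame, 0.5)
mask_vec = threshold_vec(frame, 0.5)
assert np.array_equal(mask_loop, mask_vec)
```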

  4. FOOD IMAGE ANALYSIS: SEGMENTATION, IDENTIFICATION AND WEIGHT ESTIMATION

    OpenAIRE

    He, Ye; Xu, Chang; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2013-01-01

    We are developing a dietary assessment system that records daily food intake through the use of food images taken at a meal. The food images are then analyzed to extract the nutrient content in the food. In this paper, we describe the image analysis tools to determine the regions where a particular food is located (image segmentation), identify the food type (feature classification) and estimate the weight of the food item (weight estimation). An image segmentation and classification system i...

  5. Quantitative MR Image Analysis for Brain Tumor.

    Science.gov (United States)

    Shboul, Zeina A; Reza, Sayed M S; Iftekharuddin, Khan M

    2018-01-01

    This paper presents an integrated quantitative MR image analysis framework that includes all necessary steps: MRI inhomogeneity correction, feature extraction, multiclass feature selection and multimodality abnormal brain tissue segmentation. We first obtain a mathematical algorithm to compute a novel generalized multifractional Brownian motion (GmBm) texture feature. We then demonstrate the efficacy of multiple multiresolution texture features, including regular fractal dimension (FD) texture and stochastic textures such as multifractional Brownian motion (mBm) and GmBm features, for robust tumor and other abnormal tissue segmentation in brain MRI. We evaluate these texture and associated intensity features to effectively delineate multiple abnormal tissues within and around the tumor core, as well as stroke lesions, using large-scale public and private datasets.
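    One member of the feature family mentioned above can be illustrated simply: a fractal-dimension (FD) texture feature estimated by box counting. The paper's mBm/GmBm features are considerably more elaborate; this hedged sketch only shows the basic FD computation on a hypothetical binary mask.

```python
# Box-counting fractal dimension estimate for the support of a
# binary 2-D mask: slope of log N(s) versus log(1/s).
import numpy as np

def box_count_fd(mask):
    """Estimate FD of a binary mask (square, power-of-two side)."""
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 2:
        # count boxes of side s containing at least one foreground pixel
        view = mask.reshape(n // s, s, n // s, s)
        occupied = view.any(axis=(1, 3)).sum()
        sizes.append(s)
        counts.append(max(int(occupied), 1))
        s //= 2
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)),
                        np.log(np.array(counts)), 1)
    return coeffs[0]

# Sanity check: a filled square region should have FD close to 2.
mask = np.zeros((256, 256), dtype=bool)
mask[32:224, 32:224] = True
fd = box_count_fd(mask)
```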

  6. Biostatistical analysis of quantitative immunofluorescence microscopy images.

    Science.gov (United States)

    Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C

    2016-12-01

    Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution fitting procedure and compare several statistical tests, outlining their potential advantages/disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporate the underlying distribution, sample size and number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
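    The kind of power analysis alluded to can be sketched by simulation: hierarchical data with several images per sample, analysed by averaging images within each sample (to avoid pseudo-replication) and comparing two groups with a t-test. This is a hedged illustration with invented effect size and variance components, not the paper's procedure.

```python
# Simulation-based power for a nested design: samples within groups,
# images within samples. All parameters are hypothetical.
import numpy as np
from scipy import stats

def simulate_power(n_samples, n_images, effect=1.0,
                   between_sd=1.0, within_sd=0.5,
                   n_sim=400, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        # sample-level means plus image-level replicates within samples
        a = rng.normal(0.0, between_sd, n_samples)[:, None] \
            + rng.normal(0.0, within_sd, (n_samples, n_images))
        b = rng.normal(effect, between_sd, n_samples)[:, None] \
            + rng.normal(0.0, within_sd, (n_samples, n_images))
        # average within sample to avoid pseudo-replication
        _, p = stats.ttest_ind(a.mean(axis=1), b.mean(axis=1))
        hits += int(p < alpha)
    return hits / n_sim

low = simulate_power(n_samples=4, n_images=3)
high = simulate_power(n_samples=16, n_images=3)
# power rises with the number of biological samples per group
```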

  7. Computerised image analysis of biocrystallograms originating from agricultural products

    DEFF Research Database (Denmark)

    Andersen, Jens-Otto; Henriksen, Christian B.; Laursen, J.

    1999-01-01

    Procedures are presented for computerised image analysis of biocrystallogram images, originating from biocrystallization investigations of agricultural products. The biocrystallization method is based on the crystallographic phenomenon that when adding biological substances, such as plant extracts...... on up to eight parameters indicated strong relationships, with R2 up to 0.98. It is concluded that the procedures were able to discriminate the seven groups of images, and are applicable for biocrystallization investigations of agricultural products. Perspectives for the application of image analysis...

  8. Forensic image analysis - CCTV distortion and artefacts.

    Science.gov (United States)

    Seckiner, Dilan; Mallett, Xanthé; Roux, Claude; Meuwly, Didier; Maynard, Philip

    2018-04-01

    As a result of the worldwide deployment of surveillance cameras, authorities have gained a powerful tool that captures footage of activities of people in public areas. Surveillance cameras allow continuous monitoring of the area and allow footage to be obtained for later use, if a criminal or other act of interest occurs. Following this, a forensic practitioner, or expert witness can be required to analyse the footage of the Person of Interest. The examination ultimately aims at evaluating the strength of evidence at source and activity levels. In this paper, both source and activity levels are inferred from the trace, obtained in the form of CCTV footage. The source level alludes to features observed within the anatomy and gait of an individual, whilst the activity level relates to activity undertaken by the individual within the footage. The strength of evidence depends on the value of the information recorded, where the activity level is robust, yet source level requires further development. It is therefore suggested that the camera and the associated distortions should be assessed first and foremost and, where possible, quantified, to determine the level of each type of distortion present within the footage. A review of 'forensic image analysis' is presented here. It will outline the image distortion types and detail the limitations of differing surveillance camera systems. The aim is to highlight various types of distortion present particularly from surveillance footage, as well as address gaps in current literature in relation to assessment of CCTV distortions in tandem with gait analysis. Future work will consider the anatomical assessment from surveillance footage. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT and nuclear medicine (NM studies. This fluoroscopical examination takes only about 2 seconds for a perfusion study, with only a low radiation dose to the patient, and involves no preparation, no radioactive isotopes, and no contrast media.
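    The abstract does not specify its novel perfusion model function, so as a hedged stand-in the sketch below fits a classical gamma-variate curve, a common choice for perfusion time-intensity data, to synthetic fluoroscopy-like samples. All parameter values and the noise level are invented for illustration.

```python
# Fitting a gamma-variate time-intensity curve to synthetic perfusion
# data. The gamma-variate is a stand-in, not the paper's model.
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma-variate bolus curve; (near) zero before arrival time t0."""
    dt = np.clip(t - t0, 1e-9, None)
    return A * dt ** alpha * np.exp(-dt / beta)

t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(0)
true = gamma_variate(t, A=2.0, t0=1.0, alpha=2.0, beta=1.5)
noisy = true + rng.normal(0.0, 0.02, t.size)

popt, _ = curve_fit(gamma_variate, t, noisy,
                    p0=[1.0, 0.5, 1.5, 1.0],
                    bounds=([0.0, 0.0, 0.1, 0.1],
                            [10.0, 5.0, 5.0, 5.0]))
fitted = gamma_variate(t, *popt)
rmse = float(np.sqrt(np.mean((fitted - true) ** 2)))
```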

  10. Comparing the quality of accessing medical literature using content-based visual and textual information retrieval

    Science.gov (United States)

    Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William

    2009-02-01

    Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently

  11. Quality assessment of butter cookies applying multispectral imaging

    DEFF Research Database (Denmark)

    Stenby Andresen, Mette; Dissing, Bjørn Skovlund; Løje, Hanne

    2013-01-01

    in a forced convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis...

  12. Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree

    Science.gov (United States)

    Chen, Wei-Bang

    2012-01-01

    The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…

  13. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  14. Machine learning based analysis of cardiovascular images

    NARCIS (Netherlands)

    Wolterink, JM

    2017-01-01

    Cardiovascular diseases (CVDs), including coronary artery disease (CAD) and congenital heart disease (CHD) are the global leading cause of death. Computed tomography (CT) and magnetic resonance imaging (MRI) allow non-invasive imaging of cardiovascular structures. This thesis presents machine

  15. Image quality analysis of digital mammographic equipments

    International Nuclear Information System (INIS)

    Mayo, P.; Pascual, A.; Verdu, G.; Rodenas, F.; Campayo, J.M.; Villaescusa, J.I.

    2006-01-01

    The image quality assessment of a radiographic phantom image is one of the fundamental points in a complete quality control programme. The end result of the whole process must be an image of appropriate quality to allow a suitable diagnosis. Nowadays, digital radiographic equipment is replacing traditional film-screen equipment, and it is necessary to update the parameters used to guarantee the quality of the process. Contrast-detail phantoms are applied to digital radiography to study the threshold contrast-detail sensitivity at the operating conditions of the equipment. The phantom studied in this work is C.D.M.A.M. 3.4, which facilitates the evaluation of image contrast and detail resolution. One of the most widely used indices to measure image quality in an objective way is the Image Quality Figure (I.Q.F.). This parameter is useful for calculating image quality taking into account the contrast and detail resolution of the analysed image. The contrast-detail curve is also useful as a measure of image quality, because it is a graphical representation in which the hole thickness and diameter are plotted for each contrast-detail combination detected in the radiographic image of the phantom. It is useful for comparing the functioning of different radiographic image systems on phantom images taken under the same exposure conditions. The aim of this work is to study the image quality of different contrast-detail phantom C.D.M.A.M. 3.4 images, carrying out automatic detection of the contrast-detail combinations and establishing a parameter which characterizes the mammographic image quality in an objective way. This is useful for comparing images obtained with different digital mammographic equipment to study the functioning of the equipment. (authors)

  16. Image quality analysis of digital mammographic equipments

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, P.; Pascual, A.; Verdu, G. [Valencia Univ. Politecnica, Chemical and Nuclear Engineering Dept. (Spain); Rodenas, F. [Valencia Univ. Politecnica, Applied Mathematical Dept. (Spain); Campayo, J.M. [Valencia Univ. Hospital Clinico, Servicio de Radiofisica y Proteccion Radiologica (Spain); Villaescusa, J.I. [Hospital Clinico La Fe, Servicio de Proteccion Radiologica, Valencia (Spain)

    2006-07-01


  17. Machine learning approaches in medical image analysis

    DEFF Research Database (Denmark)

    de Bruijne, Marleen

    2016-01-01

    Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols......, learning from weak labels, and interpretation and evaluation of results....

  18. Introduction to image processing and analysis

    CERN Document Server

    Russ, John C

    2007-01-01

    Contents: ADJUSTING PIXEL VALUES (Optimizing Contrast; Color Correction; Correcting Nonuniform Illumination; Geometric Transformations; Image Arithmetic). NEIGHBORHOOD OPERATIONS (Convolution; Other Neighborhood Operations; Statistical Operations). IMAGE PROCESSING IN THE FOURIER DOMAIN (The Fourier Transform; Removing Periodic Noise; Convolution and Correlation; Deconvolution; Other Transform Domains; Compression). BINARY IMAGES (Thresholding; Morphological Processing; Other Morphological Operations; Boolean Operations). MEASUREMENTS (Global Measurements; Feature Measurements; Classification). APPENDIX: SOFTWARE REFERENCES AND LITERATURE. INDEX.

  19. Principal component analysis of psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...

  20. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...... boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...
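    The rotating line-probe measurement the macro performs can be sketched in Python rather than the ImageJ macro language: locate the outer boundary along a vertical midline, rotate the image, and repeat to cover the full circumference. This hedged sketch uses a hypothetical synthetic phantom (a centered disk), not the study's CT data, and only measures the outer boundary.

```python
# Rotating line-probe sketch: distance from image centre to the outer
# boundary, measured along the midline for a set of rotation angles.
import numpy as np
from scipy import ndimage

def radius_profile(mask, angles):
    """Centre-to-outer-boundary distance per rotation angle (degrees)."""
    cy, cx = np.array(mask.shape) // 2
    radii = []
    for a in angles:
        rot = ndimage.rotate(mask.astype(float), a,
                             reshape=False, order=0)
        column = rot[:cy, cx]              # midline above the centre
        rows = np.nonzero(column > 0.5)[0]
        radii.append(cy - rows.min() if rows.size else 0)
    return np.array(radii)

yy, xx = np.mgrid[0:101, 0:101]
disk = (yy - 50) ** 2 + (xx - 50) ** 2 <= 30 ** 2  # radius-30 phantom

radii = radius_profile(disk, angles=range(0, 360, 30))
# every probe should report ~30 px for a centered disk
```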

  1. Image pattern recognition supporting interactive analysis and graphical visualization

    Science.gov (United States)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  2. Studies on computer analysis for radioisotope images

    International Nuclear Information System (INIS)

    Takizawa, Masaomi

    1977-01-01

    A hybrid-type image file and processing system was devised by the author for filing and processing radioisotope images with analog display. This system has the following functions: ten thousand images can be stored on a 60-foot video tape recorder (VTR) tape; the maximum access time for an image on the VTR tape is within 15 sec; and image display is enabled by the analog memory, with brightness of more than 15 gray levels. By using the analog memories, effective image processing can be done by a small computer. Many signal sources can be input into the hybrid system. This system can be applied in many fields, to both routine work and multi-purpose radioisotope image processing. (auth.)

  3. Content-Based Multimedia Retrieval in the Presence of Unknown User Preferences

    DEFF Research Database (Denmark)

    Beecks, Christian; Assent, Ira; Seidl, Thomas

    2011-01-01

    Content-based multimedia retrieval requires an appropriate similarity model which reflects user preferences. When these preferences are unknown or when the structure of the data collection is unclear, retrieving the most preferable objects the user has in mind is challenging, as the notion...... address the problem of content-based multimedia retrieval in the presence of unknown user preferences. Our idea consists in performing content-based retrieval by considering all possibilities in a family of similarity models simultaneously. To this end, we propose a novel content-based retrieval approach...

  4. Automatic quantitative analysis of cardiac MR perfusion images

    NARCIS (Netherlands)

    Breeuwer, Marcel; Spreeuwers, Lieuwe Jan; Quist, Marcel

    2001-01-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the

  5. Study and analysis of wavelet based image compression techniques ...

    African Journals Online (AJOL)

    This paper presents a comprehensive study, with performance analysis, of very recent wavelet-transform-based image compression techniques. Image compression is one of the necessities for image communication. The goals of image compression are to minimize the storage requirement and the communication bandwidth.

  6. Mesh Processing in Medical-Image Analysis-a Tutorial

    DEFF Research Database (Denmark)

    Levine, Joshua A.; Paulsen, Rasmus Reinhold; Zhang, Yongjie

    2012-01-01

    Medical-image analysis requires an understanding of sophisticated scanning modalities, constructing geometric models, building meshes to represent domains, and downstream biological applications. These four steps form an image-to-mesh pipeline. For research in this field to progress, the imaging,...

  7. Image quality preferences among radiographers and radiologists. A conjoint analysis

    International Nuclear Information System (INIS)

    Ween, Borgny; Kristoffersen, Doris Tove; Hamilton, Glenys A.; Olsen, Dag Rune

    2005-01-01

    Purpose: The aim of this study was to investigate the image quality preferences among radiographers and radiologists. The radiographers' preferences are mainly related to technical parameters, whereas radiologists assess image quality based on diagnostic value. Methods: A conjoint analysis was undertaken to survey image quality preferences; the study included 37 respondents: 19 radiographers and 18 radiologists. Digital urograms were post-processed into 8 images with different properties of image quality for 3 different patients. The respondents were asked to rank the images according to their personally perceived subjective image quality. Results: Nearly half of the radiographers and radiologists were consistent in their ranking of the image characterised as 'very best image quality'. The analysis showed, moreover, that chosen filtration level and image intensity were responsible for 72% and 28% of the preferences, respectively. The corresponding figures for each of the two professions were 76% and 24% for the radiographers, and 68% and 32% for the radiologists. In addition, there were larger variations in image preferences among the radiologists, as compared to the radiographers. Conclusions: Radiographers revealed a more consistent preference than the radiologists with respect to image quality. There is a potential for image quality improvement by developing sets of image property criteria

  8. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    Science.gov (United States)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
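    The "globally best merges first" idea can be sketched serially: starting from single-pixel regions, always merge the adjacent pair of regions whose mean grey values are closest, until no adjacent pair differs by less than a threshold. This is a hedged toy version of the idea on an invented image, not the MPP implementation.

```python
# Iterative best-merge region growing (serial toy version): the merge
# order depends only on the globally best pair, not on scan order.
import numpy as np

def best_merge_segment(img, threshold):
    h, w = img.shape
    label = np.arange(h * w).reshape(h, w)     # one region per pixel
    sums = {i: float(v) for i, v in enumerate(img.ravel())}
    counts = {i: 1 for i in range(h * w)}

    def mean(r):
        return sums[r] / counts[r]

    def adjacent_pairs():
        pairs = set()
        for y in range(h):
            for x in range(w):
                for dy, dx in ((0, 1), (1, 0)):
                    ny, nx = y + dy, x + dx
                    if ny < h and nx < w:
                        a, b = label[y, x], label[ny, nx]
                        if a != b:
                            pairs.add((min(a, b), max(a, b)))
        return pairs

    while True:
        pairs = adjacent_pairs()
        if not pairs:
            break
        # globally best merge: the adjacent pair with closest means
        a, b = min(pairs, key=lambda p: abs(mean(p[0]) - mean(p[1])))
        if abs(mean(a) - mean(b)) > threshold:
            break
        sums[a] += sums[b]; counts[a] += counts[b]
        label[label == b] = a
    return label

img = np.array([[0.0, 0.1, 0.9],
                [0.1, 0.0, 1.0],
                [0.2, 0.9, 1.0]])
seg = best_merge_segment(img, threshold=0.3)
n_regions = len(np.unique(seg))
# the image splits into a dark region and a bright region
```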

  9. Diffraction efficiency and noise analysis of hidden image holograms

    DEFF Research Database (Denmark)

    Tamulevičius, Sigitas; Andrulevičius, Mindaugas; Puodžiukynas, Linas

    2017-01-01

    A simplified approach for the analysis of hidden image holograms is discussed in this paper. Diffraction efficiency and signal to noise ratio of reconstructed images were investigated using a direct measurement technique and digitized image analysis employing "ImageJ" software. All holograms were...... energy densities demonstrated improved diffraction efficiency and reduced signal to noise ratio of the reconstructed image. The best diffraction efficiency at sufficient signal to noise ratio was obtained using exposure energy density in the range from 150 to 200 J/m² during the hologram writing process.
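    The two measured quantities can be sketched directly: diffraction efficiency as the ratio of diffracted to incident power, and a simple region-based signal-to-noise ratio of the kind that can be computed on a digitized image in ImageJ. This is a hedged illustration on invented numbers and a synthetic image, not the paper's measurement setup.

```python
# Diffraction efficiency and region-based SNR on a synthetic image.
import numpy as np

def diffraction_efficiency(p_diffracted, p_incident):
    return p_diffracted / p_incident

def image_snr(img, signal_mask, background_mask):
    """Mean of the signal region over the std of the background."""
    return img[signal_mask].mean() / img[background_mask].std()

rng = np.random.default_rng(0)
img = rng.normal(10.0, 2.0, (100, 100))   # background noise
img[40:60, 40:60] += 50.0                 # reconstructed image area

sig = np.zeros((100, 100), dtype=bool)
sig[40:60, 40:60] = True

eff = diffraction_efficiency(p_diffracted=0.35e-3, p_incident=5.0e-3)
snr = image_snr(img, sig, ~sig)
```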

  10. Analysis of Two-Dimensional Electrophoresis Gel Images

    DEFF Research Database (Denmark)

    Pedersen, Lars

    2002-01-01

    This thesis describes and proposes solutions to some of the currently most important problems in pattern recognition and image analysis of two-dimensional gel electrophoresis (2DGE) images. 2DGE is the leading technique to separate individual proteins in biological samples with many biological...... and pharmaceutical applications, e.g., drug development. The technique results in an image, where the proteins appear as dark spots on a bright background. However, the analysis of these images is very time consuming and requires a large amount of manual work so there is a great need for fast, objective, and robust...... methods based on image analysis techniques in order to significantly accelerate this key technology. The methods described and developed fall into three categories: image segmentation, point pattern matching, and a unified approach simultaneously segmentation the image and matching the spots. The main...

  11. IMAGE ANALYSIS FOR MODELLING SHEAR BEHAVIOUR

    Directory of Open Access Journals (Sweden)

    Philippe Lopez

    2011-05-01

    Full Text Available Through laboratory research performed over the past ten years, many of the critical links between fracture characteristics and hydromechanical and mechanical behaviour have been made for individual fractures. One of the remaining challenges at the laboratory scale is to directly link fracture morphology to shear behaviour under changes in stress and shear direction. A series of laboratory experiments were performed on cement mortar replicas of a granite sample with a natural fracture perpendicular to the axis of the core. Results show that there is a strong relationship between the fracture's geometry and its mechanical behaviour under shear stress and the resulting damage. Image analysis, geostatistical, stereological and directional data techniques are applied in combination to the experimental data. The results highlight the role of the geometric characteristics of the fracture surfaces (surface roughness, and the size, shape, locations and orientations of asperities to be damaged) in shear behaviour. A notable improvement in shear understanding is that shear behaviour is controlled by the apparent dip, in the shear direction, of the elementary facets forming the fracture.

  12. Convergence analysis in near-field imaging

    International Nuclear Information System (INIS)

    Bao, Gang; Li, Peijun

    2014-01-01

    This paper is devoted to the mathematical analysis of the direct and inverse modeling of the diffraction by a perfectly conducting grating surface in the near-field regime. It is motivated by our effort to analyze recent significant numerical results, in order to solve a class of inverse rough surface scattering problems in near-field imaging. In a model problem, the diffractive grating surface is assumed to be a small and smooth deformation of a plane surface. On the basis of the variational method, the direct problem is shown to have a unique weak solution. An analytical solution is introduced as a convergent power series in the deformation parameter by using the transformed field and Fourier series expansions. A local uniqueness result is proved for the inverse problem where only a single incident field is needed. On the basis of the analytic solution of the direct problem, an explicit reconstruction formula is presented for recovering the grating surface function with resolution beyond the Rayleigh criterion. Error estimates for the reconstructed grating surface are established with fully revealed dependence on such quantities as the surface deformation parameter, measurement distance, noise level of the scattering data, and regularity of the exact grating surface function. (paper)

  13. VLSI Architectures For Syntactic Image Analysis

    Science.gov (United States)

    Chiang, Y. P.; Fu, K. S.

    1984-01-01

    Earley's algorithm has been commonly used for the parsing of general context-free languages and for error-correcting parsing in syntactic pattern recognition. The time complexity for parsing is O(n³). In this paper we present a parallel Earley's recognition algorithm in terms of an "x*" operation. By restricting the input context-free grammar to be λ-free, we are able to implement this parallel algorithm on a triangular-shape VLSI array. This system has an efficient way of moving data to the right place at the right time. Simulation results show that this system can recognize a string of length n in 2n+1 system time. We also present an error-correcting recognition algorithm. The parallel error-correcting recognition algorithm has also been implemented on a triangular VLSI array. This array recognizes an erroneous string of length n in time 2n+1 and gives the correct error count. Applications of the proposed VLSI architectures to image analysis are illustrated by examples.
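
As context, the sequential form of the recognizer that the triangular array parallelizes can be sketched in a few lines. This is a minimal illustrative Earley recognizer, not the paper's VLSI formulation; the grammar and test strings below are hypothetical:

```python
def earley_recognize(grammar, start, tokens):
    """Minimal sequential Earley recognizer; states are (lhs, rhs, dot, origin).
    `grammar` maps each nonterminal to a list of right-hand sides (tuples)."""
    n = len(tokens)
    chart = [set() for _ in range(n + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(n + 1):
        changed = True
        while changed:                       # closure: predict + complete
            changed = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in grammar:          # predict
                    for prod in grammar[rhs[dot]]:
                        state = (rhs[dot], prod, 0, i)
                        if state not in chart[i]:
                            chart[i].add(state); changed = True
                elif dot == len(rhs):                               # complete
                    for l2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            state = (l2, r2, d2 + 1, o2)
                            if state not in chart[i]:
                                chart[i].add(state); changed = True
        if i < n:                                                   # scan
            for lhs, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] == tokens[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
    return any(lhs == start and origin == 0 and dot == len(rhs)
               for lhs, rhs, dot, origin in chart[n])

# Hypothetical lambda-free grammar for the language a^n b^n (n >= 1):
grammar = {'S': [('a', 'S', 'b'), ('a', 'b')]}
accepted = earley_recognize(grammar, 'S', list('aabb'))
```

Each item set here is built strictly left to right; the paper's contribution is rearranging this dependency structure so that the item sets can be filled concurrently on the triangular array.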

  14. CMOS Image Sensor with On-Chip Image Compression: A Review and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Milin Zhang

    2010-01-01

    Full Text Available Demand for high-resolution, low-power sensing devices with integrated image processing capabilities, especially compression capability, is increasing. CMOS technology enables the integration of image sensing and image processing, making it possible to improve overall system performance. This paper reviews the current state of the art in CMOS image sensors featuring on-chip image compression. Firstly, typical sensing systems consisting of separate image-capturing and image-compression units are reviewed, followed by systems that integrate focal-plane compression. The paper also provides a thorough review of a new design paradigm, referred to as compressive acquisition, in which image compression is performed during the image-capture phase prior to storage. High-performance sensor systems reported in recent years are also introduced. A performance analysis and comparison of the reported designs using the different design paradigms is presented at the end.

  15. MORPHOLOGY BY IMAGE ANALYSIS K. Belaroui and M. N Pons ...

    African Journals Online (AJOL)

    31 Dec 2012 ... Keywords: characterization; particle size; morphology; image analysis; porous media. 1. INTRODUCTION. The power of image analysis as ... into a digital image by means of an analogue-to-digital (A/D) converter. The points of the image are arranged on a square grid, ...

  16. PIZZARO: Forensic analysis and restoration of image and video data

    Czech Academy of Sciences Publication Activity Database

    Kamenický, Jan; Bartoš, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozámský, Adam; Saic, Stanislav; Šroubek, Filip; Šorel, Michal; Zita, Aleš; Zitová, Barbara; Šíma, Z.; Švarc, P.; Hořínek, J.

    2016-01-01

    Roč. 264, č. 1 (2016), s. 153-166 ISSN 0379-0738 R&D Projects: GA MV VG20102013064; GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Image forensic analysis * Image restoration * Image tampering detection * Image source identification Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/kamenicky-0459504.pdf

  17. Measure by image analysis of industrial radiographs

    International Nuclear Information System (INIS)

    Brillault, B.

    1988-01-01

    A digital radiographic picture processing system for non-destructive testing is intended to provide the expert with a computer tool to precisely quantify radiographic images. The author describes the main problems, from image formation to image characterization. She also insists on the necessity of defining a precise process in order to automate the system. Some examples illustrate the efficiency of digital processing of radiographic images [fr

  18. Quantitative image measurements for outcome analysis of lung nodule treatment

    Science.gov (United States)

    Zhu, Xiaoming; Lee, Ki-Nam; Wong, Stephen T. C.; Huang, H. K.

    1996-04-01

    In this study, we designed and implemented a temporal image database for outcome analysis of lung nodules based on spiral CT images. The software package is composed of three parts: a database management system which stores patient image data and nodule information; a user-friendly graphical user interface which allows a user to interact with the image database; and image processing tools designed to segment out lung nodules in a CT image with a simple mouse click anywhere inside a nodule. The image database uses the relational Sybase database system. Patient images and nodule information are stored in separate tables. A software interface has been designed to allow a user to retrieve any patient study from the picture archiving and communication system into the image database.

  19. New approaches in intelligent image analysis techniques, methodologies and applications

    CERN Document Server

    Nakamatsu, Kazumi

    2016-01-01

    This book presents an introduction and 11 independent chapters devoted to various new approaches to intelligent image processing and analysis. The book presents new methods, algorithms, and applied systems for intelligent image processing on the following basic topics: Methods for Hierarchical Image Decomposition; Intelligent Digital Signal Processing and Feature Extraction; Data Clustering and Visualization via Echo State Networks; Clustering of Natural Images in Automatic Image Annotation Systems; Control System for Remote Sensing Image Processing; Tissue Segmentation of MR Brain Images Sequence; Kidney Cysts Segmentation in CT Images; Audio Visual Attention Models in Mobile Robots Navigation; Local Adaptive Image Processing; Learning Techniques for Intelligent Access Control; Resolution Improvement in Acoustic Maps. Each chapter is self-contained with its own references. Some of the chapters are devoted to theoretical aspects while the others present practical aspects and the...

  20. Analysis of engineering drawings and raster map images

    CERN Document Server

    Henderson, Thomas C

    2013-01-01

    Presents up-to-date methods and algorithms for the automated analysis of engineering drawings and digital cartographic maps Discusses automatic engineering drawing and map analysis techniques Covers detailed accounts of the use of unsupervised segmentation algorithms to map images

  1. Density-based similarity measures for content based search

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don R [Los Alamos National Laboratory; Porter, Reid B [Los Alamos National Laboratory; Ruggiero, Christy E [Los Alamos National Laboratory

    2009-01-01

    We consider the query by multiple example problem, where the goal is to identify database samples whose content is similar to a collection of query samples. To assess the similarity we use a relative content density, which quantifies the relative concentration of the query distribution to the database distribution. If the database distribution is a mixture of the query distribution and a background distribution, then it can be shown that database samples whose relative content density is greater than a particular threshold ρ are more likely to have been generated by the query distribution than by the background distribution. We describe an algorithm for predicting samples with relative content density greater than ρ that is computationally efficient and possesses strong performance guarantees. We also show empirical results for applications in computer network monitoring and image segmentation.
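
The thresholding rule can be illustrated with a toy one-dimensional sketch (not the authors' algorithm): estimate both densities with a kernel density estimate and flag database samples whose ratio exceeds ρ. The Gaussian kernel, bandwidth, and sample values are illustrative assumptions:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a 1-D Gaussian kernel density estimate as a callable."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

def query_by_example(database, query, rho=1.0, bandwidth=0.5):
    """Flag database samples whose relative content density q(x)/p(x) > rho."""
    q = gaussian_kde(query, bandwidth)
    p = gaussian_kde(database, bandwidth)
    return [x for x in database if q(x) / p(x) > rho]

# Database mixes background content (near 0) with query-like content (near 5):
database = [0.1, -0.2, 0.3, 4.8, 5.1, 5.3]
query = [4.9, 5.0, 5.2]
hits = query_by_example(database, query)  # samples near the query cluster
```

Samples near the query cluster have a density ratio well above 1, while background samples have a ratio near 0, so the threshold cleanly separates the two components of the mixture.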

  2. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    Science.gov (United States)

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.

  3. Medical infrared imaging and orthostatic analysis to determine ...

    African Journals Online (AJOL)

    Analysis was performed in a static position. The asymmetry index for each stance variable and optimal cutoff point for the peak vertical force and thermal image temperatures were calculated. Image pattern analysis revealed 88% success in differentiating the lame group, and 100% in identifying the same thermal pattern in ...

  4. Standardization of Image Quality Analysis – ISO 19264

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad; Wüller, Dietmar

    2016-01-01

    There is a variety of image quality analysis tools available for the archiving world, based on different test charts and analysis algorithms. ISO formed a working group in 2012 to harmonize these approaches and create a standard way of analyzing the image quality for archiving...

  5. Facial Image Analysis Based on Local Binary Patterns: A Survey

    NARCIS (Netherlands)

    Huang, D.; Shan, C.; Ardebilian, M.; Chen, L.

    2011-01-01

    Facial image analysis, including face detection, face recognition, facial expression analysis, facial demographic classification, and so on, is an important and interesting research topic in the computer vision and image processing area, which has many important applications such as human-computer

  6. Geographic Object-Based Image Analysis: Towards a new paradigm

    NARCIS (Netherlands)

    Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.A.; Queiroz Feitosa, R.; van der Meer, F.D.; van der Werff, H.M.A.; van Coillie, F.; Tiede, A.

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature

  7. A short introduction to image analysis - Matlab exercises

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg

    2000-01-01

    This document contains a short introduction to image analysis. In addition, small exercises have been prepared in order to support the theoretical understanding.

  8. Evaluating fracture healing using digital x-ray image analysis

    African Journals Online (AJOL)

    2011-03-02

    Mar 2, 2011 ... of Edinburgh, developing techniques for assessing fracture healing using digital X-ray image analysis. She currently works in ... Digital X-ray combined with image analysis could provide a simple and cost-effective solution to this problem. .... output in which post-processing can be controlled. If digital X-ray ...

  9. Multi-spectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2011-01-01

    Industrial quality inspection using image analysis on astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. In this study multi-spectral image analysis of pellets was performed using LDA, QDA, SNV and PCA on pixel level and mean value of pixels...

  10. Coupling the image analysis and the artificial neural networks to ...

    African Journals Online (AJOL)

    ... from a non-destructive method (Image Analysis) which was used in order to characterize the homogeneity of powder mixture in a V-Blender as well as a Cubic Blender which are most used in the pharmaceutical industry. Keywords: ANN; Image analysis; Homogeneity; Back-propagation algorithm; multi-layer perceptron ...

  11. Analysis of licensed South African diagnostic imaging equipment ...

    African Journals Online (AJOL)

    Analysis of licensed South African diagnostic imaging equipment. ... Pan African Medical Journal ... Introduction: Objective: To conduct an analysis of all registered South Africa (SA) diagnostic radiology equipment, assess the number of equipment units per capita by imaging modality, and compare SA figures with published ...

  12. On Texture and Geometry in Image Analysis

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John

    2009-01-01

    -out of the captured scene will also change. At large viewing distances, the sky occupies a large region in the image and buildings, trees and lawns appear as uniformly colored regions. The following questions are addressed: How much of the visual appearance in terms of geometry and texture of an image can...

  13. Low-cost image analysis system

    Energy Technology Data Exchange (ETDEWEB)

    Lassahn, G.D.

    1995-01-01

    The author has developed an Automatic Target Recognition system based on parallel processing using transputers. This approach gives a powerful, fast image processing system at relatively low cost. The system scans multi-sensor (e.g., several infrared bands) image data to find any identifiable target, such as a physical object or a type of vegetation.

  14. Teaching Concepts of Natural Sciences to Foreigners through Content-Based Instruction: The Adjunct Model

    Science.gov (United States)

    Satilmis, Yilmaz; Yakup, Doganay; Selim, Guvercin; Aybarsha, Islam

    2015-01-01

    This study investigates three models of content-based instruction in teaching concepts and terms of natural sciences in order to increase the efficiency of teaching these kinds of concepts in realization and to prove that the content-based instruction is a teaching strategy that helps students understand concepts of natural sciences. Content-based…

  15. Design and realisation of an efficient content based music playlist generation system

    NARCIS (Netherlands)

    Balkema, Jan Wietse

    2009-01-01

    This thesis is on the subject of content based music playlist generation systems. The primary aim is to develop algorithms for content based music playlist generation that are faster than the current state of technology while keeping the quality of the playlists at a level that is at least

  16. Digital image processing and analysis for activated sludge wastewater treatment.

    Science.gov (United States)

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    Activated sludge systems are generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI), and chemical oxygen demand (COD). These tests are conducted in the laboratory and take many hours to give the final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of the activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation, and analysis in the specific context of activated sludge. In the latter part, additional procedures such as z-stacking and image stitching, not previously used in the context of activated sludge, are introduced for wastewater image preprocessing. Different preprocessing and segmentation techniques are proposed, along with a survey of the imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with regard to monitoring and prediction of activated sludge are discussed. It is thus observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
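
As a toy illustration of the segmentation step (not the chapter's actual pipeline), a grayscale micrograph can be binarized with an intensity threshold and flocs counted as connected dark regions. The threshold and image values below are made up:

```python
def segment_flocs(image, threshold):
    """Binary-threshold a grayscale image (list of rows of intensities) and
    count connected dark regions (flocs) via 4-connected flood fill."""
    h, w = len(image), len(image[0])
    mask = [[image[y][x] < threshold for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    flocs = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                flocs += 1                      # new dark region found
                stack = [(y, x)]
                while stack:                    # flood fill its pixels
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w \
                            and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return flocs

# A hypothetical 4x5 micrograph with two dark flocs on a bright background:
micrograph = [[200, 200,  50, 200, 200],
              [200,  50,  50, 200, 200],
              [200, 200, 200, 200,  40],
              [200, 200, 200,  40,  40]]
count = segment_flocs(micrograph, 100)
```

Real pipelines add filtering and morphological cleanup before labeling, but the threshold-then-label structure is the same.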

  17. A novel image processing procedure for thermographic image analysis.

    Science.gov (United States)

    Matteoli, Sara; Coppini, Davide; Corvi, Andrea

    2018-03-14

    The imaging procedure shown in this paper has been developed for processing thermographic images, measuring the ocular surface temperature (OST), and visualizing ocular thermal maps in a fast, reliable, and reproducible way. The strength of this new method is that the measured OSTs do not depend on the ocular geometry; hence, it is possible to compare ocular profiles belonging to the same subject (right and left eye) as well as to different populations. In this paper, the developed procedure is applied to two subjects' eyes: one healthy case and one affected by a malignant ocular lesion. However, the method has already been tested on a larger group of subjects for clinical purposes. To demonstrate the potential of this method, both intra- and inter-examiner repeatability were investigated in terms of coefficients of repeatability (COR). All OST indices showed repeatability with small intra-examiner (%COR 0.06-0.80) and inter-examiner variability (%COR 0.03-0.94). Measured OSTs and thermal maps clearly showed the clinical condition of the eyes investigated. The subject with no ocular pathology had no significant difference (P value = 0.25) between the OSTs of the right and left eye. On the contrary, the eye affected by a malignant lesion was significantly warmer (P value < 0.0001) than the contralateral eye, in the region where the lesion was located. This new procedure demonstrated its reliability; it is characterized by simplicity, immediacy, modularity, and genericity. The latter point is extremely valuable, as thermography has been used in different clinical applications over the last decades. Graphical abstract: Ocular thermography and normalization process.
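
For reference, a Bland–Altman-style coefficient of repeatability (1.96 times the standard deviation of paired differences between two measurement sessions) can be computed as follows; the exact COR definition used by the authors may differ, and the temperature values are hypothetical:

```python
import math

def coefficient_of_repeatability(m1, m2):
    """Bland-Altman coefficient of repeatability: 1.96 x the sample standard
    deviation of the paired differences between two measurement sessions."""
    diffs = [a - b for a, b in zip(m1, m2)]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd

# Hypothetical OST readings (degrees C) from two examiner sessions:
session1 = [36.0, 36.5, 35.8]
session2 = [36.2, 36.4, 35.9]
cor = coefficient_of_repeatability(session1, session2)
```

A %COR, as reported in the paper, would further normalize this value by the mean measurement.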

  18. PIZZARO: Forensic analysis and restoration of image and video data.

    Science.gov (United States)

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin, and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfills formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were developed in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Image analysis of vocal fold histology

    Science.gov (United States)

    Reinisch, Lou; Garrett, C. Gaelyn

    2001-05-01

    To visualize the concentration gradients of collagen, elastin, and ground substance in histologic sections of vocal folds, an image enhancement scheme was devised. Slides stained with Movat's solution were viewed on a light microscope and the image was digitally photographed. Using commercially available software, all pixels within a given color range were selected from the mucosa presented in the image. With the Movat's pentachrome stain, yellow to yellow-brown pixels represented mature collagen, blue to blue-green pixels represented young collagen (collagen that is not fully cross-linked), and black to dark violet pixels represented elastin. From each of the color-range selections, a black-and-white image was created: pixels outside the color range were black, and the selected pixels within the color range were white. The image was averaged and smoothed to produce 256 levels of gray with less spatial resolution. This new gray-scale image showed the concentration gradient. These images were further enhanced with contour lines surrounding equivalent levels of gray. This technique is helpful for comparing the micro-anatomy of the vocal folds. For instance, we find a large concentration of collagen deep in the mucosa and adjacent to the vocalis muscle.

  20. Transfer representation learning for medical image analysis.

    Science.gov (United States)

    Chuen-Kai Shie; Chung-Hisang Chuang; Chun-Nan Chou; Meng-Hsi Wu; Chang, Edward Y

    2015-08-01

    There are two major challenges to overcome when developing a classifier to perform automatic disease diagnosis. First, the amount of labeled medical data is typically very limited, and a classifier cannot be effectively trained to attain high disease-detection accuracy. Second, medical domain knowledge is required to identify representative features in the data for detecting a target disease, and most computer scientists and statisticians do not have such domain knowledge. In this work, we show that employing transfer learning can remedy both problems. We use Otitis Media (OM) to conduct our case study. Instead of using domain knowledge to extract features from labeled OM images, we construct features based on a dataset entirely irrelevant to OM. More specifically, we first learn a codebook in an unsupervised way from 15 million images collected from ImageNet. The codebook gives us what the encoders consider to be the fundamental elements of those 15 million images. We then encode OM images using the codebook and obtain a weighting vector for each OM image. Using the resulting weighting vectors as the feature vectors of the OM images, we employ a traditional supervised learning algorithm to train an OM classifier. The achieved detection accuracy is 88.5% (89.63% sensitivity and 86.9% specificity), markedly higher than all previous attempts, which relied on domain experts to help extract features.
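
The encoding step can be sketched as a bag-of-codewords assignment: each local feature is mapped to its nearest codeword, and the normalized assignment counts form the image's weighting vector. This is a simplified illustration (the paper's unsupervised encoder is more elaborate), with toy features and a toy codebook:

```python
def encode(features, codebook):
    """Bag-of-codewords weighting vector: the fraction of local features
    assigned to each codeword by nearest-neighbour (squared Euclidean) distance."""
    counts = [0] * len(codebook)
    for f in features:
        best = min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(f, codebook[i])))
        counts[best] += 1
    total = len(features)
    return [c / total for c in counts]

# Hypothetical 2-D local features from one image and a 2-word codebook:
codebook = [[0.0, 0.0], [1.0, 1.0]]
features = [[0.1, 0.0], [0.9, 1.0], [1.0, 0.8]]
weights = encode(features, codebook)  # one weighting vector per image
```

The resulting fixed-length vectors can then be fed to any traditional supervised learner, regardless of the domain the codebook was learned from.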

  1. A linear mixture analysis-based compression for hyperspectral image analysis

    Energy Technology Data Exchange (ETDEWEB)

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that deal directly with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis; on some occasions, it even improves analysis performance. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data experiments demonstrate that the technique can effectively detect and classify targets while achieving very high compression ratios.
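
For intuition about abundance fractions: with only two endmembers, the sum-to-one-constrained least squares abundance has a closed form. This is a simplified stand-in for full FCLS unmixing (which handles many endmembers and full non-negativity constraints), and the spectra below are hypothetical:

```python
def abundance_two_endmembers(pixel, e1, e2):
    """Fraction a of endmember e1 in pixel ~ a*e1 + (1-a)*e2, with the
    sum-to-one constraint built in and a clipped to [0, 1] for non-negativity."""
    d = [x - y for x, y in zip(e1, e2)]
    num = sum((p - y) * dx for p, y, dx in zip(pixel, e2, d))
    den = sum(dx * dx for dx in d)
    return min(1.0, max(0.0, num / den))

# Hypothetical 3-band spectral signatures:
e_target = [0.9, 0.1, 0.1]
e_background = [0.1, 0.1, 0.9]
pixel = [0.5, 0.1, 0.5]  # an even mixture of the two
a = abundance_two_endmembers(pixel, e_target, e_background)
```

The abundance image (one value of `a` per pixel) is what the compression scheme encodes instead of the raw gray levels, which is why target information survives the compression.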

  2. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (K-QSVD; generalized K-means clustering for QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks, due to lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain.
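
The basic algebraic ingredient, treating an RGB pixel as a pure quaternion and combining quaternions with the Hamilton product, can be sketched as follows. This is an illustrative fragment of the underlying algebra, not the K-QSVD algorithm itself:

```python
def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

# An RGB pixel encoded as a pure quaternion (0, R, G, B), so the three
# channels are carried as one algebraic object rather than three scalars:
pixel = (0.0, 0.8, 0.4, 0.2)
identity = (1.0, 0.0, 0.0, 0.0)
i_unit, j_unit = (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)
```

Because the product mixes all four components, a quaternion dictionary atom acts on all three color channels jointly, which is what lets the model preserve inter-channel structure that scalar sparse coding discards.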

  3. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

    Progress in the development of digital pictorial information systems stimulates a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement only. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. In this thesis it is investigated how to make use of this a priori knowledge to guide image analysis. For that purpose, models are developed which are suited to capture common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed in order to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved without significant loss of the diagnostic importance of several image properties. The possibility of automatic interpretation of more complex images is investigated in the following chapters. The central topic is segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas. Relational constraints are used to guide the image processing and to classify detected image segments. In chs. 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field. (author). 66 refs.; 27 figs.; 6 tabs

  4. Multifractal analysis of three-dimensional histogram from color images

    International Nuclear Information System (INIS)

    Chauveau, Julien; Rousseau, David; Richard, Paul; Chapeau-Blondeau, Francois

    2010-01-01

    Natural images, especially color or multicomponent images, are complex information-carrying signals. To contribute to the characterization of this complexity, we investigate the possibility of multiscale organization in the colorimetric structure of natural images. This is realized by means of a multifractal analysis applied to the three-dimensional histogram of natural color images. The observed behaviors are compared with those of reference models with known multifractal properties. We use for this purpose synthetic random images with trivial monofractal behavior, and multidimensional multiplicative cascades known for their actual multifractal behavior. The behaviors observed on natural images exhibit similarities with those of the multifractal multiplicative cascades and display the signature of elaborate multiscale organization stemming from the histograms of natural color images. This type of characterization of colorimetric properties can be helpful for various tasks of digital image processing, for instance modeling, classification, and indexing.
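
A minimal sketch of the first step, building a three-dimensional color histogram and evaluating a multifractal partition function Z(q) = Σᵢ pᵢ^q over its occupied bins, might look like this. The bin count and pixel values are illustrative; the full analysis in the paper tracks how Z(q) scales as the bin size is varied:

```python
def color_histogram_3d(pixels, bins):
    """Quantize RGB pixels (components in [0, 1)) into a bins^3 3-D histogram,
    stored sparsely as a dict mapping (r_bin, g_bin, b_bin) -> count."""
    hist = {}
    for r, g, b in pixels:
        key = (int(r * bins), int(g * bins), int(b * bins))
        hist[key] = hist.get(key, 0) + 1
    return hist

def partition_function(hist, q):
    """Multifractal partition function Z(q) = sum_i p_i^q over occupied bins;
    its scaling with bin size distinguishes mono- from multifractal behavior."""
    total = sum(hist.values())
    return sum((n / total) ** q for n in hist.values())

# Four hypothetical RGB pixels: three dark, one mid-gray.
pixels = [(0.1, 0.1, 0.1)] * 3 + [(0.6, 0.6, 0.6)]
hist = color_histogram_3d(pixels, bins=2)
z2 = partition_function(hist, q=2)  # (3/4)^2 + (1/4)^2 = 0.625
```

Repeating the computation over a range of bin sizes and fitting the slope of log Z(q) versus log(bin size) yields the mass exponents from which the multifractal spectrum is derived.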

  5. Analysis of Factors Affecting Positron Emission Mammography (PEM) Image Formation

    International Nuclear Information System (INIS)

    Smith, Mark F.; Majewski, Stan; Weisenberger, Andrew G.; Kieper, Douglas A.; Raylman, Raymond R.; Turkington, Timothy G.

    2001-01-01

    Image reconstruction for positron emission mammography (PEM) with the breast positioned between two parallel, planar detectors is usually performed by backprojection to image planes. Three important factors affecting PEM image reconstruction by backprojection are investigated: (1) image uniformity (flood) corrections, (2) image sampling (pixel size) and (3) count allocation methods. An analytic expression for uniformity correction is developed that incorporates factors for spatial-dependent detector sensitivity and geometric effects from acceptance angle limits on coincidence events. There is good agreement between experimental floods from a PEM system with a pixellated detector and numerical simulations. The analytic uniformity corrections are successfully applied to image reconstruction of compressed breast phantoms and reduce the necessity for flood scans at different image planes. Experimental and simulated compressed breast phantom studies show that lesion contrast is improved when the image pixel size is half of, rather than equal to, the detector pixel size, though this occurs at the expense of some additional image noise. In PEM reconstruction counts usually are allocated to the pixel in the image plane intersected by the line of response (LOR) between the centers of the detection pixels. An alternate count allocation method is investigated that distributes counts to image pixels in proportion to the area of the tube of response (TOR) connecting the detection pixels that they overlay in the image plane. This TOR method eliminates some image artifacts that occur with the LOR method and increases tumor signal-to-noise ratios at the expense of a slight decrease in tumor contrast. Analysis of image uniformity, image sampling and count allocation methods in PEM image reconstruction points to ways of improving image formation. Further work is required to optimize image reconstruction parameters for particular detection or quantitation tasks
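
    The LOR count-allocation scheme described above can be sketched as follows. This is a simplified illustration under an assumed planar-detector geometry, not the authors' reconstruction code: each coincidence event is backprojected to an image plane at a chosen fractional depth, and the count goes to the single pixel the line of response intersects (the TOR variant would instead spread it over the pixels its tube overlaps).

```python
import numpy as np

def backproject_lors(events, plane_frac, grid_shape, pixel_size):
    """Backproject PEM coincidence events onto one image plane.

    events: iterable of (x1, y1, x2, y2) detector coordinates of each
    line of response (LOR); plane_frac in [0, 1] is the image plane's
    fractional depth between the two parallel detectors.
    """
    image = np.zeros(grid_shape)
    for x1, y1, x2, y2 in events:
        # Intersection of the LOR with the image plane.
        x = x1 + plane_frac * (x2 - x1)
        y = y1 + plane_frac * (y2 - y1)
        i, j = int(y // pixel_size), int(x // pixel_size)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            image[i, j] += 1
    return image

# All LORs pass through the same mid-plane point (5, 5), so at
# plane_frac = 0.5 every count lands in one pixel.
events = [(5 - d, 5 - d, 5 + d, 5 + d) for d in np.linspace(-4, 4, 9)]
img = backproject_lors(events, plane_frac=0.5, grid_shape=(10, 10), pixel_size=1.0)
```

    Halving `pixel_size` relative to the detector pitch corresponds to the finer image sampling the study found improves lesion contrast.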

  7. Hierarchical system for content-based audio classification and retrieval

    Science.gov (United States)

    Zhang, Tong; Kuo, C.-C. Jay

    1998-10-01

    A hierarchical system for audio classification and retrieval based on audio content analysis is presented in this paper. The system consists of three stages. The audio recordings are first classified and segmented into speech, music, several types of environmental sounds, and silence, based on morphological and statistical analysis of temporal curves of the energy function, the average zero-crossing rate, and the fundamental frequency of audio signals. This first stage is called coarse-level audio classification and segmentation. Then, environmental sounds are classified into finer classes such as applause, rain, birds' sound, etc., which is called fine-level audio classification. This second stage is based on time-frequency analysis of audio signals and the use of the hidden Markov model (HMM) for classification. In the third stage, query-by-example audio retrieval is implemented, where similar sounds can be found according to an input sample audio clip. The modeling of audio features with the hidden Markov model, the procedures of audio classification and retrieval, and the experimental results are described. It is shown that, with the proposed system, audio recordings can be automatically segmented and classified into basic types in real time with an accuracy higher than 90%. Examples of audio fine classification and audio retrieval with the proposed HMM-based method are also provided.
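
    Two of the temporal curves used in the coarse-level stage, short-time energy and zero-crossing rate (ZCR), can be computed as below. This is a generic sketch, not the authors' system; the frame length and hop size are arbitrary choices for the example. A low-frequency tone produces a low ZCR while broadband noise produces a high one, which is part of what separates music-like from noise-like segments.

```python
import numpy as np

def frame_features(signal, frame_len=256, hop=128):
    """Short-time energy and zero-crossing rate per analysis frame."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        # Fraction of adjacent-sample pairs whose sign changes.
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return feats

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 100 * t)        # 100 Hz tone: few crossings
rng = np.random.default_rng(2)
noise = rng.normal(0, 1, sr)              # white noise: many crossings
tone_zcr = np.mean([z for _, z in frame_features(tone)])
noise_zcr = np.mean([z for _, z in frame_features(noise)])
```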

  8. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.

    Science.gov (United States)

    Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos

    2017-11-01

    The study of phenomes, or phenomics, has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has advanced substantially in recent years. As with other high-throughput technologies, it faces a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing, with its major issues and the algorithms that are being used or emerging as useful for extracting data from images in an automatic fashion. © The Author 2017. Published by Oxford University Press.

  9. Medical image analysis of 3D CT images based on extensions of Haralick texture features

    Czech Academy of Sciences Publication Activity Database

    Tesař, Ludvík; Shimizu, A.; Smutek, D.; Kobatake, H.; Nawano, S.

    2008-01-01

    Roč. 32, č. 6 (2008), s. 513-520 ISSN 0895-6111 R&D Projects: GA AV ČR 1ET101050403; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : image segmentation * Gaussian mixture model * 3D image analysis Subject RIV: IN - Informatics, Computer Science Impact factor: 1.192, year: 2008 http://library.utia.cas.cz/separaty/2008/AS/tesar-medical image analysis of 3d ct image s based on extensions of haralick texture features.pdf

  10. Diagnostic imaging analysis of the impacted mesiodens

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Jeong Jun; Choi, Bo Ram; Jeong, Hwan Seok; Huh, Kyung Hoe; Yi, Won Jin; Heo, Min Suk; Lee, Sam Sun; Choi, Soon Chul [School of Dentistry, Seoul National University, Seoul (Korea, Republic of)

    2010-06-15

    The research was performed to predict the three-dimensional relationship between the impacted mesiodens and the maxillary central incisors, and their proximity to the anatomic structures, by comparing panoramic images with CT images. Among the patients visiting Seoul National University Dental Hospital from April 2003 to July 2007, those with mesiodens were selected (154 mesiodens in 120 patients). The number, shape, orientation and positional relationship of mesiodens with the maxillary central incisors were investigated in the panoramic images. The proximity to the anatomical structures and complications were investigated in the CT images as well. The sex ratio (M : F) was 2.28 : 1 and the mean number of mesiodens per patient was 1.28. Conical shape was found in 84.4% and inverted orientation in 51.9%. There were more cases of encroachment on anatomical structures, especially the nasal floor and nasopalatine duct, when the mesiodens was not superimposed with the central incisor. There were, however, many cases of nasopalatine duct encroachment when the mesiodens was superimposed with the apical 1/3 of the central incisor (52.6%). Delayed eruption (55.6%), crown rotation (66.7%) and crown resorption (100%) were observed when the mesiodens was superimposed with the crown of the central incisor. It is possible to predict the three-dimensional relationship between the impacted mesiodens and the maxillary central incisors in the panoramic images, but details should be confirmed by CT images when necessary.

  11. Efficiency analysis of color image filtering

    Science.gov (United States)

    Fevralev, Dmitriy V.; Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Abramov, Sergey K.; Egiazarian, Karen O.; Astola, Jaakko T.

    2011-12-01

    This article addresses the conditions under which filtering can visibly improve image quality. The key points are the following. First, we analyze filtering efficiency for 25 test images from the color image database TID2008. This database allows assessing filter efficiency for images corrupted by different noise types at several levels of noise variance. Second, the limit of filtering efficiency is determined for independent and identically distributed (i.i.d.) additive noise and compared to the output mean square error of state-of-the-art filters. Third, component-wise and vector denoising is studied, where the latter approach is demonstrated to be more efficient. Fourth, using modern visual quality metrics, we determine for which levels of i.i.d. and spatially correlated noise the noise in original images, or the residual noise and distortions due to filtering in output images, are practically invisible. We also demonstrate that it is possible to roughly estimate whether or not the visual quality can clearly be improved by filtering.
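
    A minimal way to quantify filtering efficiency in the spirit of the output-MSE comparison above is to measure PSNR before and after denoising. The 3x3 mean filter and the synthetic ramp image below are illustrative assumptions, not the filters or TID2008 images studied in the article; the point is only that a denoiser should raise PSNR relative to the noisy input.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference (a crude stand-in for the visual-quality metrics the
    article shows correlate imperfectly with perceived quality)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))    # smooth ramp image
noisy = clean + rng.normal(0, 15, clean.shape)       # i.i.d. additive noise
# A 3x3 mean filter as a deliberately simple denoiser: average the
# nine shifted copies of the edge-padded image.
pad = np.pad(noisy, 1, mode="edge")
denoised = sum(pad[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9.0
```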

  12. Three-dimensional temporal reconstruction and analysis of plume images

    Science.gov (United States)

    Dhawan, Atam P.; Disimile, Peter J.; Peck, Charles, III

    1992-01-01

    An experiment with two subsonic jets generating a cross-flow was conducted as part of a study of the structural features of temporal reconstruction of plume images. The flow field structure was made visible using a direct injection flow visualization technique. It is shown that image analysis and temporal three-dimensional visualization can provide new information on the vortical structural dynamics of multiple jets in a cross-flow. It is expected that future developments in image analysis, quantification and interpretation, and flow visualization of rocket engine plume images may provide a tool for correlating the engine diagnostic features by interpreting the evolution of the structures in the plume.

  13. Methods for processing and analysis functional and anatomical brain images: computerized tomography, emission tomography and nuclear resonance imaging

    International Nuclear Information System (INIS)

    Mazoyer, B.M.

    1988-01-01

    The various methods for brain image processing and analysis are presented and compared. The following topics are developed: the physical basis of brain image comparison (nature and formation of signals; intrinsic performance of the methods; image characteristics); mathematical methods for image processing and analysis (filtering, functional parameter extraction, morphological analysis, robotics and artificial intelligence); methods for anatomical localization (neuro-anatomy atlas, proportional stereotaxic atlas, numerized atlas); methodology of cerebral image superposition (normalization, retiming); image networks. [fr]

  14. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  15. Histology image analysis for carcinoma detection and grading.

    Science.gov (United States)

    He, Lei; Long, L Rodney; Antani, Sameer; Thoma, George R

    2012-09-01

    This paper presents an overview of the image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research, which attempt to significantly reduce the labor and subjectivity of traditional manual intervention with histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas, of cervix, prostate, breast, and lung, are selected to illustrate the functions and capabilities of existing CAD systems. Published by Elsevier Ireland Ltd.

  16. Pattern recognition software and techniques for biological image analysis.

    Directory of Open Access Journals (Sweden)

    Lior Shamir

    2010-11-01

    The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  17. Identifying radiotherapy target volumes in brain cancer by image analysis.

    Science.gov (United States)

    Cheng, Kun; Montgomery, Dean; Feng, Yang; Steel, Robin; Liao, Hanqing; McLaren, Duncan B; Erridge, Sara C; McLaughlin, Stephen; Nailon, William H

    2015-10-01

    To establish the optimal radiotherapy fields for treating brain cancer patients, the tumour volume is often outlined on magnetic resonance (MR) images, where the tumour is clearly visible, and mapped onto computerised tomography images used for radiotherapy planning. This process requires considerable clinical experience and is time consuming, which will continue to increase as more complex image sequences are used. Here, the potential of image analysis techniques for automatically identifying the radiation target volume on MR images, and thereby assisting clinicians with this difficult task, was investigated. A gradient-based level set approach was applied to the MR images of five patients with grades II, III and IV malignant cerebral glioma. The relationship between the target volumes produced by image analysis and those produced by a radiation oncologist was also investigated. The contours produced by image analysis were compared with the contours produced by an oncologist and used for treatment. In 93% of cases, the Dice similarity coefficient was found to be between 60 and 80%. This feasibility study demonstrates that image analysis has the potential for automatic outlining in the management of brain cancer patients; however, more testing and validation on a much larger patient cohort is required.
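
    The Dice similarity coefficient used above to compare automatic and manual contours has a short, standard definition: 2|A∩B| / (|A| + |B|). A sketch with synthetic binary masks (the overlap value in the example is illustrative and unrelated to the study's data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks, in [0, 1]."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0   # two empty masks agree perfectly
    return float(2.0 * np.logical_and(a, b).sum() / denom)

# Two 10x10-pixel squares offset by 2 pixels: overlap is 8x8 = 64,
# so Dice = 2*64 / (100 + 100) = 0.64.
auto = np.zeros((20, 20), dtype=bool)
auto[4:14, 4:14] = True
manual = np.zeros((20, 20), dtype=bool)
manual[6:16, 6:16] = True
score = dice(auto, manual)
```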

  18. Uncooled LWIR imaging: applications and market analysis

    Science.gov (United States)

    Takasawa, Satomi

    2015-05-01

    The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both reduction in pixel pitch and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing the opportunity to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market where uncooled LWIR imaging sensors with sensitivity as close to that of cooled ones as possible are required, while the other is a low-end market driven by miniaturization and price reduction. In the latter case especially, approaches towards the consumer market have recently appeared, such as applications of uncooled LWIR imaging sensors to night vision for automobiles and smart phones. The appearance of such commodities surely changes existing business models. Further technological innovation is necessary for creating a consumer market, and there will be room for other companies, supplying components and materials such as lens and getter materials, to enter the consumer market.

  19. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I, Introduction to Digital Image Processing and Analysis: Digital Image Processing and Analysis (Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading); Computer Imaging Systems (Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading). Section II, Digital Image Analysis and Computer Vision: Introduction to Digital Image Analysis (Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read...)

  20. Analysis of live cell images: Methods, tools and opportunities.

    Science.gov (United States)

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities that recent advances in machine learning, especially deep learning, and computer vision provide are discussed. This review includes an overview of the different available software packages and toolkits. Copyright © 2017. Published by Elsevier Inc.

  1. Applications of Digital Image Analysis in Experimental Mechanics

    DEFF Research Database (Denmark)

    Lyngbye, J. : Ph.D.

    The present thesis, "Application of Digital Image Analysis in Experimental Mechanics", has been prepared as a part of Janus Lyngbye's Ph.D. study during the period December 1988 to June 1992 at the Department of Building Technology and Structural Engineering, University of Aalborg, Denmark. … In this thesis attention will be focused on optimal use and analysis of the information of digital images. This is realized during investigation and application of parametric methods in digital image analysis. The parametric methods will be implemented in applications representative for the area of experimental…

  2. Analysis of PETT images in psychiatric disorders

    Energy Technology Data Exchange (ETDEWEB)

    Brodie, J.D.; Gomez-Mont, F.; Volkow, N.D.; Corona, J.F.; Wolf, A.P.; Wolkin, A.; Russell, J.A.G.; Christman, D.; Jaeger, J.

    1983-01-01

    A quantitative method is presented for studying the pattern of metabolic activity in a set of Positron Emission Transaxial Tomography (PETT) images. Using complex Fourier coefficients as a feature vector for each image, cluster, principal components, and discriminant function analyses are used to empirically describe metabolic differences between control subjects and patients with DSM III diagnosis for schizophrenia or endogenous depression. We also present data on the effects of neuroleptic treatment on the local cerebral metabolic rate of glucose utilization (LCMRGI) in a group of chronic schizophrenics using the region of interest approach. 15 references, 4 figures, 3 tables.

  4. Basic research planning in mathematical pattern recognition and image analysis

    Science.gov (United States)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.

  5. Within-subject template estimation for unbiased longitudinal image analysis

    OpenAIRE

    Reuter, Martin; Schmansky, Nicholas J.; Rosas, H. Diana; Fischl, Bruce

    2012-01-01

    Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to po...

  6. Architectural design and analysis of a programmable image processor

    International Nuclear Information System (INIS)

    Siyal, M.Y.; Chowdhry, B.S.; Rajput, A.Q.K.

    2003-01-01

    In this paper we present an architectural design and analysis of a programmable image processor, nicknamed Snake. The processor was designed with a high degree of parallelism to speed up a range of image processing operations. Data parallelism found in array processors has been incorporated into the architecture of the proposed processor. The implementation of commonly used image processing algorithms and their performance evaluation are also discussed. The performance of Snake is also compared with that of other processor architectures. (author)

  7. Report on RecSys 2016 Workshop on New Trends in Content-Based Recommender Systems

    DEFF Research Database (Denmark)

    Bogers, Toine; Koolen, Marijn; Musto, Cataldo

    2017-01-01

    This article reports on the CBRecSys 2016 workshop, the third edition of the workshop on New Trends in Content-based Recommender Systems, co-located with RecSys 2016 in Boston, MA. Content-based recommendation has been applied successfully in many different domains, but it has not seen the same level of attention as collaborative filtering techniques have. Nevertheless, there are many recommendation domains and applications where content and metadata play a key role, either in addition to or instead of ratings and implicit usage data. The CBRecSys workshop series provides a dedicated venue for work on all aspects of content-based recommender systems.

  8. Anima: Modular Workflow System for Comprehensive Image Data Analysis

    Science.gov (United States)

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps starting from data import and pre-processing to segmentation and statistical analysis; and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  10. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the area of the skin of a human foot and of a face. The full source code of the developed application is also provided as an attachment. (Figure: the main window of the program during dynamic analysis of the foot thermal image.) © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Spectral identity mapping for enhanced chemical image analysis

    Science.gov (United States)

    Turner, John F., II

    2005-03-01

    Advances in spectral imaging instrumentation during the last two decades have led to higher image fidelity, tighter spatial resolution, narrower spectral resolution, and improved signal-to-noise ratios. An important sub-classification of spectral imaging is chemical imaging, in which the sought-after information from the sample is its chemical composition. Consequently, chemical imaging can be thought of as a two-step process: spectral image acquisition and the subsequent processing of the spectral image data to generate chemically relevant image contrast. While chemical imaging systems that provide turnkey data acquisition are increasingly widespread, better strategies to analyze the vast datasets they produce are needed. The generation of chemically relevant image contrast from spectral image data requires multivariate processing algorithms that can categorize spectra according to shape. Conventional chemometric techniques like inverse least squares, classical least squares, multiple linear regression, principal component regression, and multivariate curve resolution are effective for predicting the chemical composition of samples having known constituents, but are less effective when a priori information about the sample is unavailable. To address these problems, we have developed a fully automated non-parametric technique called spectral identity mapping (SIMS) that reduces the dependence of spectral image analysis on training datasets. The qualitative SIMS method provides enhanced spectral shape specificity and improved chemical image contrast. We present SIMS results for infrared spectral image data acquired from polymer-coated paper substrates used in the manufacture of pressure sensitive adhesive tapes. In addition, we compare the SIMS results to results from spectral angle mapping (SAM) and cosine correlation analysis (CCA), two closely related techniques.
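
    Spectral angle mapping (SAM), one of the reference techniques the abstract compares against, reduces to a per-pixel angle between each spectrum and a reference spectrum; small angles mean similar spectral shape regardless of intensity scaling. This is a generic SAM sketch with a synthetic data cube, not the author's SIMS implementation.

```python
import numpy as np

def spectral_angle_map(cube, reference):
    """Spectral angle (radians) between every pixel spectrum in a
    (rows, cols, bands) cube and a reference spectrum."""
    ref = np.asarray(reference, dtype=float)
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    cosines = (flat @ ref) / (np.linalg.norm(flat, axis=1) * np.linalg.norm(ref))
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return np.arccos(np.clip(cosines, -1.0, 1.0)).reshape(cube.shape[:2])

ref = np.array([1.0, 2.0, 3.0])
cube = np.zeros((2, 2, 3))
cube[0, 0] = [2.0, 4.0, 6.0]     # same shape, scaled: angle ~ 0
cube[0, 1] = [3.0, 2.0, 1.0]     # reversed shape: large angle
cube[1, 0] = ref                 # identical: angle ~ 0
cube[1, 1] = [1.0, 1.0, 1.0]     # flat spectrum: intermediate angle
angles = spectral_angle_map(cube, ref)
```

    Cosine correlation analysis (CCA) is the same computation stopped at the cosine rather than the angle.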

  12. Difficulties in image analysis of peritoneographies

    International Nuclear Information System (INIS)

    Erbe, W.; Geissler, A.

    1987-01-01

    Peritoneography was performed in 122 patients clinically suspected of hernia without definite palpation findings. 108 cases could be assessed without any doubt (in 40 patients proof of hernia and in 68 patients definite exclusion of hernia). In 14 cases (= 11.5%) the images were difficult to classify. These cases and the variations of normal X-ray anatomy are described. (orig.) [de

  13. Real-time evaluation of aggregation using confocal imaging and image analysis tools.

    Science.gov (United States)

    Hamrang, Zahra; Zindy, Egor; Clarke, David; Pluen, Alain

    2014-02-07

    Real-time confocal imaging was utilised to monitor the in situ loss of BSA monomers and aggregate formation using Spatial Intensity Distribution Analysis (SpIDA) and Raster Image Correlation Spectroscopy (RICS). At the proof of concept level this work has demonstrated the applicability of RICS and SpIDA for monitoring protein oligomerisation and larger aggregate formation.

  14. Automated Image Analysis Corrosion Working Group Update: February 1, 2018

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-01

    These are slides for the automated image analysis corrosion working group update. The overall goals were: automate the detection and quantification of features in images (faster, more accurate), how to do this (obtain data, analyze data), focus on Laser Scanning Confocal Microscope (LCM) data (laser intensity, laser height/depth, optical RGB, optical plus laser RGB).

  15. An Online Image Analysis Tool for Science Education

    Science.gov (United States)

    Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.

    2008-01-01

    This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…

  16. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data
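
    The central idea, applying PCA to the cosines and sines of image gradient orientations instead of raw pixel intensities, can be sketched in a few lines of numpy. The toy images below (assumed for illustration) differ only by a global brightness offset, so their orientation features are identical and the centred feature matrix carries no variance at all, something that would not hold for raw intensities:

```python
import numpy as np

def gradient_orientation_features(img):
    """Map an image to the cosines and sines of its gradient orientations.

    PCA on these unit-magnitude features, rather than on raw intensities,
    is what gives the method its robustness to non-Gaussian noise.
    """
    gy, gx = np.gradient(img.astype(float))
    phi = np.arctan2(gy, gx)
    return np.concatenate([np.cos(phi).ravel(), np.sin(phi).ravel()])

# Five toy 8x8 ramp images differing only by a global brightness offset.
imgs = [np.outer(np.arange(8), np.ones(8)) + k for k in range(5)]
X = np.stack([gradient_orientation_features(im) for im in imgs])
Xc = X - X.mean(axis=0)
S = np.linalg.svd(Xc, compute_uv=False)  # ordinary PCA via SVD
```

    On real face data the features would of course differ between images; the point of the toy example is only that purely photometric variation vanishes in the orientation domain.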

  17. Hyperspectral image analysis using artificial color

    Science.gov (United States)

    Fu, Jian; Caulfield, H. John; Wu, Dongsheng; Tadesse, Wubishet

    2010-03-01

    By definition, HSC (HyperSpectral Camera) images are much richer in spectral data than, say, a COTS (Commercial-Off-The-Shelf) color camera. But data are not information. If we do the task right, useful information can be derived from the data in HSC images. Nature faced essentially the identical problem. The incident light is so complex spectrally that measuring it with high resolution would provide far more data than animals can handle in real time. Nature's solution was to do irreversible POCS (Projections Onto Convex Sets) to achieve huge reductions in data with minimal reduction in information. Thus we can arrange for our man-made systems to do what nature did - project the HSC image onto two or more broad, overlapping curves. The task we have undertaken in the last few years is to develop this idea that we call Artificial Color. What we report here is the use of the measured HSC image data projected onto two or three convex, overlapping, broad curves in analogy with the sensitivity curves of human cone cells. Testing two quite different HSC images in that manner produced the desired result: good discrimination or segmentation that can be achieved very simply and hence is likely to be feasible in real time with specialized computers. Using POCS on the HSC data to reduce the processing complexity produced excellent discrimination in those two cases. For technical reasons discussed here, the figures of merit for the kind of pattern recognition we use are incommensurate with the figures of merit of conventional pattern recognition. We used some force fitting to make a comparison nevertheless, because it shows what is also obvious qualitatively. In our tasks our method works better.
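
    The projection described above, collapsing many narrow bands onto a few broad overlapping response curves in analogy with cone sensitivities, is just a matrix product. A hedged numpy sketch; the Gaussian curve shapes, centres, and the 16-band toy cube are illustrative assumptions, not the authors' actual curves:

```python
import numpy as np

def artificial_color(cube, centers, width):
    """Project each pixel spectrum onto broad, overlapping Gaussian curves,
    collapsing B narrow bands into len(centers) 'artificial cone' channels."""
    bands = np.arange(cube.shape[-1])
    # (B, C) matrix of sensitivity curves, one column per artificial 'cone'
    curves = np.exp(-0.5 * ((bands[:, None] - np.asarray(centers)[None, :]) / width) ** 2)
    return cube.astype(float) @ curves  # (H, W, len(centers))

# 16-band toy cube: one pixel is bright in the short bands, one in the long bands.
cube = np.zeros((1, 2, 16))
cube[0, 0, :4] = 1.0
cube[0, 1, 12:] = 1.0
chans = artificial_color(cube, centers=[4, 11], width=3.0)
```

    Each pixel responds most strongly to the channel whose curve overlaps its spectrum, which is the data reduction the projection is meant to achieve.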

  18. On the applicability of numerical image mapping for PIV image analysis near curved interfaces

    International Nuclear Information System (INIS)

    Masullo, Alessandro; Theunissen, Raf

    2017-01-01

    This paper scrutinises the general suitability of image mapping for particle image velocimetry (PIV) applications. Image mapping can improve PIV measurement accuracy by eliminating overlap between the PIV interrogation windows and an interface, as illustrated by some examples in the literature. Image mapping transforms the PIV images using a curvilinear interface-fitted mesh prior to performing the PIV cross correlation. However, degrading effects due to particle image deformation and the Jacobian transformation inherent in the mapping along curvilinear grid lines have never been deeply investigated. Here, the implementation of image mapping from mesh generation to image resampling is presented in detail, and related error sources are analysed. Systematic comparison with standard PIV approaches shows that image mapping is effective only in a very limited set of flow conditions and geometries, and depends strongly on a priori knowledge of the boundary shape and streamlines. In particular, with strongly curved geometries or streamlines that are not parallel to the interface, the image-mapping approach is easily outperformed by more traditional image analysis methodologies invoking suitable spatial relocation of the obtained displacement vector. (paper)

  19. Image analysis of dye stained patterns in soils

    Science.gov (United States)

    Bogner, Christina; Trancón y Widemann, Baltasar; Lange, Holger

    2013-04-01

    Quality of surface water and groundwater is directly affected by flow processes in the unsaturated zone. In general, it is difficult to measure or model water flow. Indeed, parametrization of hydrological models is problematic and often no unique solution exists. To visualise flow patterns in soils directly, dye tracer studies can be performed. These experiments provide images of stained soil profiles and their evaluation demands knowledge in hydrology as well as in image analysis and statistics. First, these photographs are converted to binary images classifying the pixels into dye-stained and non-stained ones. Then, some feature extraction is necessary to discern relevant hydrological information. In our study we propose to use several index functions to extract different (ideally complementary) features. We associate each image row with a feature vector (i.e. a certain number of image function values) and use these features to cluster the image rows to identify similar image areas. Because images of stained profiles might have different reasonable clusterings, we calculate multiple consensus clusterings. An expert can explore these different solutions and base his/her interpretation of predominant flow mechanisms on quantitative (objective) criteria. The complete workflow from reading in binary images to final clusterings has been implemented in the free R system, a language and environment for statistical computing. The calculation of image indices is part of our own package Indigo; manipulation of binary images, clustering and visualization of results are done using either built-in facilities in R, additional R packages or the LaTeX system.
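
    The per-row feature extraction step can be sketched compactly. The authors' workflow runs in R (their Indigo package); the sketch below re-creates only the indexing idea in Python, with two assumed index functions, dye coverage and the number of contiguous stained runs, applied to a toy stained-profile image:

```python
import numpy as np

def row_features(binary):
    """Two per-row index functions for a binary (stained = 1) profile image:
    dye coverage, and the number of contiguous stained runs (fragmentation)."""
    coverage = binary.mean(axis=1)
    # a run starts wherever a 0 -> 1 transition occurs (leading 1s count too)
    runs = (np.diff(binary, axis=1, prepend=0) == 1).sum(axis=1)
    return np.column_stack([coverage, runs])

# Toy profile: a solid dye front in the top rows, fragmented fingers below.
img = np.zeros((4, 8), dtype=int)
img[0:2, :] = 1          # fully stained rows: coverage 1.0, a single run
img[2:4, ::3] = 1        # sparse rows: coverage 3/8, three separate runs
feats = row_features(img)
```

    Clustering the rows of `feats` (e.g. with k-means, followed by consensus clustering across several runs) would then group similar image areas as described above.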

  20. Electron Microscopy and Image Analysis for Selected Materials

    Science.gov (United States)

    Williams, George

    1999-01-01

    This particular project was completed in collaboration with the metallurgical diagnostics facility. The objective of this research had four major components. First, we required training in the operation of the environmental scanning electron microscope (ESEM) for imaging of selected materials including biological specimens. The types of materials range from cyanobacteria and diatoms to cloth, metals, sand, composites and other materials. Second, to obtain training in surface elemental analysis technology using energy dispersive x-ray (EDX) analysis, and in the preparation of x-ray maps of these same materials. Third, to provide training for the staff of the metallurgical diagnostics and failure analysis team in the area of image processing and image analysis technology using NIH Image software. Finally, we were to assist in sample preparation, observation, imaging, and elemental analysis for Mr. Richard Hoover, one of NASA MSFC's solar physicists and Marshall's principal scientist for the agency-wide virtual Astrobiology Institute. These materials have been collected from various places around the world including the Fox Tunnel in Alaska, Siberia, Antarctica, ice core samples from near Lake Vostok, thermal vents in the ocean floor, hot springs and many others. We were successful in our efforts to obtain high quality, high resolution images of various materials including selected biological ones. Surface analyses (EDX) and x-ray maps were easily prepared with this technology. We also discovered and used some applications for NIH Image software in the metallurgical diagnostics facility.

  1. Determination of fish gender using fractal analysis of ultrasound images

    DEFF Research Database (Denmark)

    McEvoy, Fintan J.; Tomkiewicz, Jonna; Støttrup, Josianne

    2009-01-01

    The gender of cod Gadus morhua can be determined by considering the complexity in their gonadal ultrasonographic appearance. The fractal dimension (DB) can be used to describe this feature in images. B-mode gonadal ultrasound images in 32 cod, where gender was known, were collected. Fractal...... by subjective analysis alone. The mean (and standard deviation) of the fractal dimension DB for male fish was 1.554 (0.073) while for female fish it was 1.468 (0.061); the difference was statistically significant (P=0.001). The area under the ROC curve was 0.84 indicating the value of fractal analysis in gender...... result. Fractal analysis is useful for gender determination in cod. This or a similar form of analysis may have wide application in veterinary imaging as a tool for quantification of complexity in images...
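
    The box-counting estimate of the fractal dimension DB is straightforward to sketch: count how many boxes of side s contain foreground at several scales, then fit a line in log-log space. A minimal numpy version; the synthetic square and line are assumed sanity checks, whereas the study applies this kind of measure to gonadal ultrasound image regions:

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension D_B of a binary image by counting
    occupied boxes N(s) at several box sizes s and fitting
    log N(s) = -D_B * log s + c."""
    counts = []
    for s in sizes:
        h, w = binary.shape
        trimmed = binary[:h - h % s, :w - w % s]   # drop partial boxes
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled square is 2-D, a single straight line is ~1-D.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
```

    For these shapes the fit recovers dimensions of 2 and 1 respectively, bracketing the values around 1.5 reported for the cod images.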

  2. Porosity determination on pyrocarbon using automatic quantitative image analysis

    International Nuclear Information System (INIS)

    Koizlik, K.; Uhlenbruck, U.; Delle, W.; Nickel, H.

    Methods of porosity determination are reviewed and applied to the measurement of the porosity of pyrocarbon. Specifically, the mathematical basis of stereology and the procedures involved in quantitative image analysis are detailed

  3. Digital image analysis of X-ray television with an image digitizer

    International Nuclear Information System (INIS)

    Mochizuki, Yasuo; Akaike, Hisahiko; Ogawa, Hitoshi; Kyuma, Yukishige

    1995-01-01

    When video signals of X-ray fluoroscopy were transformed from analog to digital with an image digitizer, their digital characteristic curves, pre-sampling MTFs and digital Wiener spectra could be measured. This method was advantageous in that data sampling could be verified, since the input pixel values could be checked on a CRT. The system of image analysis by this method is inexpensive and effective in evaluating the image quality of digital systems. Also, it is expected that this method can be used as a tool for learning the measurement techniques and physical characteristics of digital image quality effectively. (author)

  4. Single-labelled music genre classification using content-based features

    CSIR Research Space (South Africa)

    Ajoodha, R

    2015-11-01

    Full Text Available In this paper we use content-based features to perform automatic classification of music pieces into genres. We categorise these features into four groups: features extracted from the Fourier transform’s magnitude spectrum, features designed...

  5. Image Segmentation Analysis for NASA Earth Science Applications

    Science.gov (United States)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.

  6. Micro imaging analysis for osteoporosis assessment

    Science.gov (United States)

    Lima, I.; Farias, M. L. F.; Percegoni, N.; Rosenthal, D.; de Assis, J. T.; Anjos, M. J.; Lopes, R. T.

    2010-03-01

    Characterization of trabecular structures is one of the most important applications of imaging techniques in the biomedical area. The aim of this study was to investigate structure modifications in trabecular and cortical bones using non-destructive techniques such as X-ray microtomography, X-ray microfluorescence by synchrotron radiation and scanning electron microscopy. The results obtained reveal the potential of these computational imaging techniques for characterizing internal bone structures.

  7. Micro imaging analysis for osteoporosis assessment

    Energy Technology Data Exchange (ETDEWEB)

    Lima, I., E-mail: inaya@lin.ufrj.b [Nuclear Instrumentation Laboratory, COPPE, UFRJ (Brazil); Polytechnic Institute of Rio de Janeiro State/UERJ/Brazil (Brazil); Farias, M.L.F. [University Hospital, UFRJ, RJ (Brazil); Percegoni, N. [Biophysics Institute, CCS, UFRJ, RJ (Brazil); Rosenthal, D. [Physics Institute, UERJ, RJ (Brazil); Assis, J.T. de [Polytechnic Institute of Rio de Janeiro State/UERJ/Brazil (Brazil); Anjos, M.J. [Physics Institute, UERJ, RJ (Brazil); Lopes, R.T. [Nuclear Instrumentation Laboratory, COPPE, UFRJ (Brazil)

    2010-03-15

    Characterization of trabecular structures is one of the most important applications of imaging techniques in the biomedical area. The aim of this study was to investigate structure modifications in trabecular and cortical bones using non-destructive techniques such as X-ray microtomography, X-ray microfluorescence by synchrotron radiation and scanning electron microscopy. The results obtained reveal the potential of these computational imaging techniques for characterizing internal bone structures.

  8. Computerized microscopic image analysis of follicular lymphoma

    Science.gov (United States)

    Sertel, Olcay; Kong, Jun; Lozanski, Gerard; Catalyurek, Umit; Saltz, Joel H.; Gurcan, Metin N.

    2008-03-01

    Follicular Lymphoma (FL) is a cancer arising from the lymphatic system. Originating from follicle center B cells, FL is mainly comprised of centrocytes (usually middle-to-small sized cells) and centroblasts (relatively large malignant cells). According to the World Health Organization's recommendations, there are three histological grades of FL characterized by the number of centroblasts per high-power field (hpf) of area 0.159 mm2. In current practice, these cells are manually counted from ten representative fields of follicles after visual examination of hematoxylin and eosin (H&E) stained slides by pathologists. Several studies clearly demonstrate the poor reproducibility of this grading system with very low inter-reader agreement. In this study, we are developing a computerized system to assist pathologists with this process. A hybrid approach that combines information from several slides with different stains has been developed. Thus, follicles are first detected from digitized microscopy images with immunohistochemistry (IHC) stains, (i.e., CD10 and CD20). The average sensitivity and specificity of the follicle detection tested on 30 images at 2×, 4× and 8× magnifications are 85.5+/-9.8% and 92.5+/-4.0%, respectively. Since the centroblasts detection is carried out in the H&E-stained slides, the follicles in the IHC-stained images are mapped to H&E-stained counterparts. To evaluate the centroblast differentiation capabilities of the system, 11 hpf images have been marked by an experienced pathologist who identified 41 centroblast cells and 53 non-centroblast cells. A non-supervised clustering process differentiates the centroblast cells from noncentroblast cells, resulting in 92.68% sensitivity and 90.57% specificity.

  9. Advanced Imaging Techniques for Multiphase Flows Analysis

    Science.gov (United States)

    Amoresano, A.; Langella, G.; Di Santo, M.; Iodice, P.

    2017-08-01

    Advanced numerical techniques, such as fuzzy logic and neural networks, have been applied in this work to digital images acquired from two applications, a centrifugal pump and a stationary spray, in order to define, in a stochastic way, the evolution of the gas-liquid interface. Starting from the numeric matrix representing the image it is possible to characterize geometrical parameters and the time evolution of the jet. The algorithm uses fuzzy logic to binarize the pixels according to their chromaticity, exploiting the difference in light scattering between the gas and the liquid phase. Starting from a primary fixed threshold, the applied technique can separate 'gas' pixels from 'liquid' pixels, so that the most probable boundary lines of the spray can be defined. By acquiring images continuously at a fixed frame rate, a finer threshold can be selected and, in the limit, the most probable geometrical parameters of the jet can be detected.
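
    The abstract does not spell out the fuzzy binarization itself; as a stand-in, the classical iterative intermeans (Ridler-Calvard) threshold illustrates the same gas/liquid separation step on an assumed bimodal intensity distribution:

```python
import numpy as np

def intermeans_threshold(gray, eps=0.5):
    """Iterative intermeans (Ridler-Calvard) threshold: start from the global
    mean, then repeatedly move the threshold halfway between the means of the
    two classes it induces, until it stabilises."""
    t = gray.mean()
    while True:
        lo, hi = gray[gray <= t], gray[gray > t]
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

# Bimodal toy data: a dark 'gas' background (~40) and a bright 'liquid' jet (~200).
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(40, 5, 500), rng.normal(200, 5, 500)])
t = intermeans_threshold(img)
binary = img > t
```

    Refining the threshold as more frames are acquired, as the abstract describes, amounts to re-running this kind of estimate on the accumulated intensity statistics.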

  10. Second Workshop on New Trends in Content-based Recommender Systems (CBRecSys 2015)

    DEFF Research Database (Denmark)

    Bogers, Toine; Koolen, Marijn

    2015-01-01

    While content-based recommendation has been applied successfully in many different domains, it has not seen the same level of attention as collaborative filtering techniques have. However, there are many recommendation domains and applications where content and metadata play a key role, either...... these data sources should be combined to provide the best recommendation performance. The CBRecSys 2015 workshop aims to address this by providing a dedicated venue for papers dedicated to all aspects of content-based recommendation....

  11. CBRecSys 2015. New Trends on Content-Based Recommender Systems

    DEFF Research Database (Denmark)

    While content-based recommendation has been applied successfully in many different domains, it has not seen the same level of attention as collaborative filtering techniques have. However, there are many recommendation domains and applications where content and metadata play a key role, either...... these data sources should be combined to provide the best recommendation performance. The CBRecSys 2015 workshop aims to address this by providing a dedicated venue for papers dedicated to all aspects of content-based recommendation....

  12. Workshop on New Trends in Content-based Recommender Systems (CBRecSys 2014)

    DEFF Research Database (Denmark)

    Bogers, Toine; Koolen, Marijn; Cantádor, Ivan

    2014-01-01

    While content-based recommendation has been applied successfully in many different domains, it has not seen the same level of attention as collaborative filtering techniques have. However, there are many recommendation domains and applications where content and metadata play a key role, either...... these data sources should be combined to provide the best recommendation performance. The CBRecSys 2014 workshop aims to address this by providing a dedicated venue for papers dedicated to all aspects of content-based recommendation....

  13. Third Workshop on New Trends in Content-based Recommender Systems (CBRecSys 2016)

    DEFF Research Database (Denmark)

    Bogers, Toine; Koolen, Marijn; Musto, Cataldo

    2016-01-01

    While content-based recommendation has been applied successfully in many different domains, it has not seen the same level of attention as collaborative filtering techniques have. However, there are many recommendation domains and applications where content and metadata play a key role, either...... these data sources should be combined to provide the best recommendation performance. The CBRecSys 2016 workshop provides a dedicated venue for papers dedicated to all aspects of content-based recommendation....

  14. CBRecSys 2016. New Trends on Content-Based Recommender Systems

    DEFF Research Database (Denmark)

    While content-based recommendation has been applied successfully in many different domains, it has not seen the same level of attention as collaborative filtering techniques have. However, there are many recommendation domains and applications where content and metadata play a key role, either...... these data sources should be combined to provide the best recommendation performance. The CBRecSys 2016 workshop provides a dedicated venue for papers dedicated to all aspects of content-based recommendation....

  15. Multivariate statistical analysis for x-ray photoelectron spectroscopy spectral imaging: Effect of image acquisition time

    International Nuclear Information System (INIS)

    Peebles, D.E.; Ohlhausen, J.A.; Kotula, P.G.; Hutton, S.; Blomfield, C.

    2004-01-01

    The acquisition of spectral images for x-ray photoelectron spectroscopy (XPS) is a relatively new approach, although it has been used with other analytical spectroscopy tools for some time. This technique provides full spectral information at every pixel of an image, in order to provide a complete chemical mapping of the imaged surface area. Multivariate statistical analysis techniques applied to the spectral image data allow the determination of chemical component species, and their distribution and concentrations, with minimal data acquisition and processing times. Some of these statistical techniques have proven to be very robust and efficient methods for deriving physically realistic chemical components without input by the user other than the spectral matrix itself. The benefits of multivariate analysis of the spectral image data include significantly improved signal to noise, improved image contrast and intensity uniformity, and improved spatial resolution - which are achieved due to the effective statistical aggregation of the large number of often noisy data points in the image. This work demonstrates the improvements in chemical component determination and contrast, signal-to-noise level, and spatial resolution that can be obtained by the application of multivariate statistical analysis to XPS spectral images

  16. Developments in Dynamic Analysis for quantitative PIXE true elemental imaging

    International Nuclear Information System (INIS)

    Ryan, C.G.

    2001-01-01

    Dynamic Analysis (DA) is a method for projecting quantitative major and trace element images from PIXE event data-streams (off-line or on-line) obtained using the Nuclear Microprobe. The method separates full elemental spectral signatures to produce images that strongly reject artifacts due to overlapping elements, detector effects (such as escape peaks and tailing) and background. The images are also quantitative, stored in ppm-charge units, enabling images to be directly interrogated for the concentrations of all elements in areas of the images. Recent advances in the method include the correction for changing X-ray yields due to varying sample compositions across the image area and the construction of statistical variance images. The resulting accuracy of major element concentrations extracted directly from these images is better than 3% relative as determined from comparisons with electron microprobe point analysis. These results are complemented by error estimates derived from the variance images together with detection limits. This paper provides an update of research on these issues, introduces new software designed to make DA more accessible, and illustrates the application of the method to selected geological problems.

  17. Infrared thermal facial image sequence registration analysis and verification

    Science.gov (United States)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), the infrared thermal facial image sequence is preprocessed for registration before further analysis, so that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by the unconscious head shaking of the subjects. A fixed image for registration is produced through the localization of the centroid of the eye region as well as image translation and rotation processes. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is demonstrated by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.

  18. Chemical imaging and solid state analysis at compact surfaces using UV imaging

    DEFF Research Database (Denmark)

    Wu, Jian X.; Rehder, Sönke; van den Berg, Frans

    2014-01-01

    , and microcrystalline cellulose together with magnesium stearate as excipients were used as model materials in the compacts. The UV imaging based drug and excipient distribution was in good agreement with hyperspectral NIR imaging. The UV wavelength region can be utilized in distinguishing between glibenclamide......Fast non-destructive multi-wavelength UV imaging together with multivariate image analysis was utilized to visualize distribution of chemical components and their solid state form at compact surfaces. Amorphous and crystalline solid forms of the antidiabetic compound glibenclamide...... and excipients in a non-invasive way, as well as mapping the glibenclamide solid state form. An exploratory data analysis supported the critical evaluation of the mapping results and the selection of model parameters for the chemical mapping. The present study demonstrated that the multi-wavelength UV imaging...

  19. ImageJ-MATLAB: a bidirectional framework for scientific image analysis interoperability.

    Science.gov (United States)

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2017-02-15

    ImageJ-MATLAB is a lightweight Java library facilitating bi-directional interoperability between MATLAB and ImageJ. By defining a standard for translation between matrix and image data structures, researchers are empowered to select the best tool for their image-analysis tasks. Freely available extension to ImageJ2 ( http://imagej.net/Downloads ). Installation and use instructions available at http://imagej.net/MATLAB_Scripting. Tested with ImageJ 2.0.0-rc-54 , Java 1.8.0_66 and MATLAB R2015b. eliceiri@wisc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  20. A spectral identity mapper for chemical image analysis.

    Science.gov (United States)

    Turner, John F; Zhang, Jing; O'Connor, Anne

    2004-11-01

    Generating chemically relevant image contrast from spectral image data requires multivariate processing algorithms that can categorize spectra according to shape. Conventional chemometric techniques like inverse least squares, classical least squares, multiple linear regression, principal component regression, and multivariate curve resolution are effective for predicting the chemical composition of samples having known constituents, but they are less effective when a priori information about the sample is unavailable. We have developed a multivariate technique called spectral identity mapping (SIM) that reduces the dependence of spectral image analysis on training datasets. The qualitative SIM method provides enhanced spectral shape specificity and improved chemical image contrast. We present SIM results of spectral image data acquired from polymer-coated paper substrates used in the manufacture of pressure sensitive adhesive tapes. In addition, we compare the SIM results to results from spectral angle mapping (SAM) and cosine correlation analysis (CCA), two closely related techniques.

  1. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of the two images. Secondly, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors introduced in the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the image registration algorithm proposed in this paper can improve the accuracy of the image matching while ensuring the real-time performance of the algorithm.
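
    The final RANSAC stage is easy to illustrate in isolation. The sketch below estimates a pure 2-D translation (an assumed simplification; the abstract does not state the paper's transform model) from putative matches contaminated by gross mismatches:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation between matched point sets by RANSAC:
    hypothesise from a single match, keep the hypothesis with most inliers,
    then refit the translation on the whole inlier set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                              # minimal sample: one match
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return (dst[best] - src[best]).mean(axis=0), best

# 20 correct matches displaced by (5, -3), plus 5 gross mismatches.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (25, 2))
dst = src + np.array([5.0, -3.0])
dst[20:] += rng.uniform(30, 60, (5, 2))                  # outliers
t, inliers = ransac_translation(src, dst)
```

    The clustering-based pre-filtering in the paper plays the same role as the inlier test here, just earlier in the pipeline, which is what buys the claimed speed-up.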

  2. A software platform for the analysis of dermatology images

    Science.gov (United States)

    Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon

    2017-11-01

    The purpose of this paper is to present a software platform developed in the Python programming environment that can be used for the processing and analysis of dermatology images. The platform provides the capability for reading a file that contains a dermatology image. The platform supports image formats such as Windows bitmaps, JPEG, JPEG2000, portable network graphics, and TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image and thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool for a dermatologist. Furthermore, it could be used to classify images from other anatomical parts, such as the breast or lung, after proper re-training of the classification algorithms.
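
The automated ROI selection described above (smoothing followed by thresholding) commonly relies on a histogram-based threshold such as Otsu's method. The sketch below implements Otsu's threshold in numpy on a synthetic bimodal image; it illustrates the general technique and is not the platform's actual code.

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(gray, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                      # class-0 (background) probability
    mu = np.cumsum(p * centers)            # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    # Return the upper edge of the best bin so background stays below it.
    return edges[int(np.argmax(sigma_b)) + 1]

# Bimodal toy "image": dark background around 0.2, bright lesion around 0.8.
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.02, (32, 32))
img[8:16, 8:16] = rng.normal(0.8, 0.02, (8, 8))
t = otsu_threshold(img)
roi = img > t          # automatically selected region of interest
```

In practice a smoothing filter would be applied before thresholding, exactly as the abstract describes, to suppress noise-induced speckle in the mask.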

  3. Applications of image analysis in precision guided weapons

    Science.gov (United States)

    Grinaker, S.

    1988-11-01

    An autonomous fire-and-forget weapon will have to automatically navigate from the launch site to the target area, detect and recognize the target, home in on it, and select an aim-point in the terminal phase. All of these tasks can be performed by using an imaging seeker-head. Image-based target acquisition and tracking are well-established ideas, implemented in weapons in operation or under development. A brief survey of the image processing techniques involved is provided, including a few examples of state-of-the-art algorithms. The basic ideas of image-based navigation and aim-point selection are introduced and accompanied by examples, and the possibility of replacing gyro stabilization with image processing is discussed. Finally, the problems concerning the implementation of image analysis for real-time processing are addressed, and some principles for system design are provided.

  4. Computer vision approaches to medical image analysis. Revised papers

    International Nuclear Information System (INIS)

    Beichel, R.R.; Sonka, M.

    2006-01-01

    This book constitutes the thoroughly refereed post proceedings of the international workshop Computer Vision Approaches to Medical Image Analysis, CVAMIA 2006, held in Graz, Austria in May 2006 as a satellite event of the 9th European Conference on Computer Vision, ECCV 2006. The 10 revised full papers and 11 revised poster papers presented together with 1 invited talk were carefully reviewed and selected from 38 submissions. The papers are organized in topical sections on clinical applications, image registration, image segmentation and analysis, and the poster session. (orig.)

  5. Curvilinear component analysis for nonlinear dimensionality reduction of hyperspectral images

    Science.gov (United States)

    Lennon, Marc; Mercier, Gregoire; Mouchot, Marie-Catherine; Hubert-Moy, Laurence

    2002-01-01

    This paper presents a nonlinear projection method for multidimensional data, applied to the dimensionality reduction of hyperspectral images. The method, called Curvilinear Component Analysis (CCA), consists of reproducing as faithfully as possible the topology of the joint distribution of the data in a projection subspace whose dimension is lower than that of the initial space, thus preserving a maximum amount of information. Curvilinear Distance Analysis (CDA) is an improvement of CCA that allows data with strong nonlinearities to be projected. Its usefulness for reducing the dimensionality of hyperspectral images is shown. Results are presented on real hyperspectral images and compared with standard linear projection methods.

  6. Analysis of imaging quality under the systematic parameters for thermal imaging system

    Science.gov (United States)

    Liu, Bin; Jin, Weiqi

    2009-07-01

    The integration of a thermal imaging system with a radar system can increase the target identification range and strengthen the accuracy and reliability of detection; such integrated systems are a state-of-the-art, mainstream approach to searching for invasive targets and guarding homeland security. In operation, however, the thermal imaging system can produce degraded images, with potentially serious consequences for search and detection. In this paper, we study why and how these degraded images arise, using lightwave principles to establish a mathematical imaging model of the ray-transmission process. We then give special attention to the systematic parameters of the model, analysing in detail each parameter that can affect the imaging process and the role each plays. This comprehensive analysis yields detailed information about how the observed diffraction phenomena are shaped by these parameters. The analytical results are confirmed by comparing experimental images with MATLAB-simulated images; simulations based on the revised parameters agree well with images acquired in practice.

  7. The microcomputer and image analysis in diagnostic pathology.

    Science.gov (United States)

    Jarvis, L R

    1992-06-01

    This paper presents a snapshot view of the influence and direction of microcomputer technology for image analysis techniques in diagnostic pathology. Microcomputers have had considerable impact in bringing image analysis to wider application. Semi-automated tracing techniques are a simple means of providing objective data and assist in a wide range of diagnostic problems. From the common theme of reducing subjectivity in diagnostic assessment, an extensive body of research has accrued. Some studies have addressed the need for quality control for reliable, routine application. Video digitizer cards bring digital image analysis within the reach of laboratory budgets, providing powerful tools for investigation of a wide range of cellular and tissue features. The use of staining procedures compatible with quantitative evaluation has become equally important. As well as assisting scene segmentation, cytochemical and immunochemical staining techniques relate the data to biological processes. With the present state of the art, practical use of microcomputer based image analysis is impaired by limitations of information extraction and specimen throughput. Recent advances in colour video imaging provide an extra dimension in the analysis of multi-spectral stains. Improvements will also be felt with predictable increase in speed of microprocessors, and with single chip devices which deliver video rate processing. If the full potential of this hardware is realized, high-speed, routine analysis becomes feasible. In addition, a microcomputer imaging system can play host to companion functions, such as image archiving and transmission. With this outlook, the use of microcomputers for image analysis in diagnostic pathology is certain to increase.(ABSTRACT TRUNCATED AT 250 WORDS)

  8. An approach for quantitative image quality analysis for CT

    Science.gov (United States)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and to objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates a modified set of components with sparse loadings, compared to standard principal component analysis (PCA); used in conjunction with the Hotelling T2 statistical analysis method, it allows us to compare, qualify, and detect faults in the tested systems.
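
The Hotelling T2 fault-detection step can be sketched as follows: project samples onto a few principal components and score each sample by its variance-normalized squared distance in that subspace; unusually large scores flag anomalous systems. This minimal numpy version uses ordinary (not sparse) PCA and synthetic data, so it illustrates only the general idea, not the authors' modified SPCA method.

```python
import numpy as np

def pca_scores(X, k):
    """Project rows of X onto the first k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data gives PCA without forming the covariance.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, (s[:k] ** 2) / (len(X) - 1)

def hotelling_t2(X, k=2):
    """Per-sample Hotelling T^2 computed in the k-dimensional PCA subspace."""
    scores, var = pca_scores(X, k)
    return np.sum(scores ** 2 / var, axis=1)

# Normal measurements plus one gross outlier; its T^2 score stands out.
rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, (100, 5))
X[0] += 25.0                      # inject a faulty measurement
t2 = hotelling_t2(X, k=2)
```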

  9. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  10. Complex Event Processing for Content-Based Text, Image, and Video Retrieval

    Science.gov (United States)

    2016-06-01

    Hidden Markov Models, Mixture Models, Bags of Words, Artificial Neural Networks (ANNs), higher-order logic, graph theory, and classifiers. Most are well... Dalton J, Mancini C, Selvin AM. Co-OPR: design and evaluation of collaborative sensemaking and planning tools for personnel recovery. Edinburgh (UK

  11. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.
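
The automated dimensional analysis described above typically starts by thresholding the AFM height map and labelling connected particles, after which per-particle statistics (area, height, aspect ratio) follow directly. The sketch below is a generic illustration on invented data, using breadth-first 4-connected component labelling; it is not the authors' routine.

```python
import numpy as np
from collections import deque

def label_particles(mask):
    """4-connected component labelling via breadth-first flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for r, c in zip(*np.nonzero(mask)):
        if labels[r, c]:
            continue
        current += 1
        labels[r, c] = current
        queue = deque([(r, c)])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

# Toy height map with two "particles" rising above a flat background.
height = np.zeros((12, 12))
height[2:5, 2:5] = 3.0
height[7:11, 6:9] = 5.0
labels, n = label_particles(height > 1.0)
areas = [int((labels == i).sum()) for i in range(1, n + 1)]
```

With labels in hand, per-particle dimensions can be accumulated over the whole image, enabling the statistical analyses the abstract mentions.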

  12. CNN Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
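
Zero Component Analysis here refers to ZCA whitening, which decorrelates the data while staying as close as possible to the original signal, preserving edge-like structure. A minimal numpy sketch of ZCA whitening on synthetic correlated data follows; it illustrates the transform itself, not the paper's training pipeline.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA ("zero component analysis") whitening of row-vector samples.

    Decorrelates the features while keeping the result as close as
    possible to the original data, unlike plain PCA whitening.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T   # ZCA matrix
    return Xc @ W

rng = np.random.default_rng(3)
# Correlated 3-feature data, standing in for flattened image patches.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 3.0]])    # well-conditioned mixing matrix
X = rng.normal(size=(500, 3)) @ A
Xw = zca_whiten(X)
cov_w = np.cov(Xw, rowvar=False)   # should be close to the identity
```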

  13. NEPR Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  14. A parallel solution for high resolution histological image analysis.

    Science.gov (United States)

    Bueno, G; González, R; Déniz, O; García-Rojo, M; González-García, J; Fernández-Carrobles, M M; Vállez, N; Salido, J

    2012-10-01

    This paper describes a general methodology for developing parallel image processing algorithms based on message passing for high resolution images (on the order of several Gigabytes). These algorithms have been applied to histological images and must be executed on massively parallel processing architectures. Advances in new technologies for complete slide digitization in pathology have been combined with developments in biomedical informatics. However, the efficient use of these digital slide systems is still a challenge. The image processing that these slides are subject to is still limited both in terms of data processed and processing methods. The work presented here focuses on the need to design and develop parallel image processing tools capable of obtaining and analyzing the entire gamut of information included in digital slides. Tools have been developed to assist pathologists in image analysis and diagnosis, and they cover low- and high-level image processing methods applied to histological images. Code portability, reusability and scalability have been tested by using the following parallel computing architectures: distributed memory with massively parallel processors and two networks, INFINIBAND and Myrinet, composed of 17 and 1024 nodes respectively. The proposed parallel framework is a flexible, high-performance solution; it shows that efficient processing of digital microscopic images is possible and may offer important benefits to pathology laboratories. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
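
The tile-based parallel pattern underlying such systems (split a huge slide into tiles, process tiles independently, gather the results) can be sketched compactly. The paper uses message passing on cluster hardware; this illustration substitutes a thread pool from the Python standard library for the scatter/process/gather steps, and the per-tile "work" is just a mean as a placeholder for real segmentation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def tile_stats(tile):
    """Per-tile work: here just the mean intensity; a real pipeline would
    run segmentation or feature extraction on each tile independently."""
    return float(tile.mean())

def split_tiles(img, th, tw):
    """Split a large image into non-overlapping tiles (scatter step)."""
    return [img[r:r + th, c:c + tw]
            for r in range(0, img.shape[0], th)
            for c in range(0, img.shape[1], tw)]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)   # stand-in "slide"
tiles = split_tiles(img, 16, 16)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(tile_stats, tiles))         # gather step
```

Because the tiles are equal-sized, aggregating the per-tile results reproduces the global statistic, which is the property that makes the decomposition scalable.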

  15. Acne image analysis: lesion localization and classification

    Science.gov (United States)

    Abas, Fazly Salleh; Kaffenberger, Benjamin; Bikowski, Joseph; Gurcan, Metin N.

    2016-03-01

    Acne is a common skin condition present predominantly in the adolescent population, but may continue into adulthood. Scarring occurs commonly as a sequel to severe inflammatory acne. The presence of acne and resultant scars are more than cosmetic, with a significant potential to alter quality of life and even job prospects. The psychosocial effects of acne and scars can be disturbing and may be a risk factor for serious psychological concerns. Treatment efficacy is generally determined based on an invalidated gestalt by the physician and patient. However, the validated assessment of acne can be challenging and time consuming. Acne can be classified into several morphologies including closed comedones (whiteheads), open comedones (blackheads), papules, pustules, cysts (nodules) and scars. For a validated assessment, the different morphologies need to be counted independently, a method that is far too time consuming considering the limited time available for a consultation. However, it is practical to record and analyze images since dermatologists can validate the severity of acne within seconds after uploading an image. This paper covers the processes of region-of-interest determination using entropy-based filtering and thresholding, as well as acne lesion feature extraction. Feature extraction methods using discrete wavelet frames and the gray-level co-occurrence matrix are presented and their effectiveness in separating the six major acne lesion classes is discussed. Several classifiers were used to test the extracted features. Correct classification accuracy as high as 85.5% was achieved using the binary classification tree with fourteen principal components used as descriptors. Further studies are underway to improve the algorithm's performance and validate it on a larger database.
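
The gray-level co-occurrence matrix features mentioned above can be computed directly: count how often pairs of gray levels co-occur at a fixed pixel offset, then derive scalar texture statistics from the normalized matrix. Below is a small numpy sketch with two Haralick-style features (contrast and energy) on toy textures; it is illustrative, not the paper's implementation.

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one non-negative pixel offset."""
    dr, dc = offset
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[img[r, c], img[r + dr, c + dc]] += 1
    P = P + P.T                     # make the matrix symmetric
    return P / P.sum()

def glcm_features(P):
    """Contrast and energy, two classic Haralick-style texture features."""
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    energy = np.sum(P ** 2)
    return contrast, energy

flat = np.zeros((8, 8), dtype=int)            # perfectly uniform texture
stripes = np.tile([0, 3], (8, 4))             # columns alternate 0, 3, 0, 3
c_flat, e_flat = glcm_features(glcm(flat, levels=4))
c_str, e_str = glcm_features(glcm(stripes, levels=4))
```

The flat texture concentrates all co-occurrence mass on the diagonal (zero contrast, maximal energy), while the striped texture puts it far off-diagonal, which is exactly what makes these features discriminative between lesion classes.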

  16. Quantitative analysis and classification of AFM images of human hair.

    Science.gov (United States)

    Gurden, S P; Monteiro, V F; Longo, E; Ferreira, M M C

    2004-07-01

    The surface topography of human hair, as defined by the outer layer of cellular sheets, termed cuticles, largely determines the cosmetic properties of the hair. The condition of the cuticles is of great cosmetic importance, but also has the potential to aid diagnosis in the medical and forensic sciences. Atomic force microscopy (AFM) has been demonstrated to offer unique advantages for analysis of the hair surface, mainly due to the high image resolution and the ease of sample preparation. This article presents an algorithm for the automatic analysis of AFM images of human hair. The cuticular structure is characterized using a series of descriptors, such as step height, tilt angle and cuticle density, allowing quantitative analysis and comparison of different images. The usefulness of this approach is demonstrated by a classification study. Thirty-eight AFM images were measured, consisting of hair samples from (a) untreated and bleached hair samples, and (b) the root and distal ends of the hair fibre. The multivariate classification technique partial least squares discriminant analysis is used to test the ability of the algorithm to characterize the images according to the properties of the hair samples. Most of the images (86%) were found to be classified correctly.

  17. Utilizing Minkowski functionals for image analysis: a marching square algorithm

    International Nuclear Information System (INIS)

    Mantz, Hubert; Jacobs, Karin; Mecke, Klaus

    2008-01-01

    Comparing noisy experimental image data with statistical models requires a quantitative analysis of grey-scale images beyond mean values and two-point correlations. A real-space image analysis technique is introduced for digitized grey-scale images, based on Minkowski functionals of thresholded patterns. A novel feature of this marching square algorithm is the use of weighted side lengths for pixels, so that boundary lengths are captured accurately. As examples to illustrate the technique we study surface topologies emerging during the dewetting process of thin films and analyse spinodal decomposition as well as turbulent patterns in chemical reaction–diffusion systems. The grey-scale value corresponds to the height of the film or to the concentration of chemicals, respectively. Comparison with analytic calculations in stochastic geometry models reveals a remarkable agreement of the examples with a Gaussian random field. Thus, a statistical test for non-Gaussian features in experimental data becomes possible with this image analysis technique—even for small image sizes. Implementations of the software used for the analysis are offered for download
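
The three 2D Minkowski functionals of a thresholded pattern (area, boundary length, Euler characteristic) can be computed from pixel, edge and 2x2-neighbourhood counts. The sketch below uses Gray's quad-count formula for the 4-connected Euler number on simple test patterns; it is a plain pixel-based illustration, not the paper's weighted marching-square algorithm, which refines exactly the perimeter estimate shown here.

```python
import numpy as np

def minkowski_functionals(mask):
    """Area, perimeter and Euler characteristic of a binary pattern.

    Area counts pixels, perimeter counts exposed pixel edges, and the
    Euler number uses Gray's 2x2 quad-count formula (4-connectivity).
    """
    m = np.pad(mask.astype(int), 1)
    area = m.sum()
    # Perimeter: foreground/background transitions along rows and columns.
    perim = np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()
    # Classify every 2x2 neighbourhood by its foreground count.
    q = m[:-1, :-1] + m[:-1, 1:] + m[1:, :-1] + m[1:, 1:]
    q1 = np.sum(q == 1)
    q3 = np.sum(q == 3)
    # Diagonal configurations: two foreground pixels on a 2x2 diagonal.
    qd = np.sum((q == 2) & (m[:-1, :-1] == m[1:, 1:])
                & (m[:-1, :-1] != m[:-1, 1:]))
    euler = (q1 - q3 + 2 * qd) / 4          # 4-connected Euler number
    return area, perim, euler

square = np.ones((4, 4), dtype=int)          # one component, no holes
ring = square.copy()
ring[1:3, 1:3] = 0                           # 4x4 square with a 2x2 hole
area_sq, perim_sq, euler_sq = minkowski_functionals(square)
area_ring, perim_ring, euler_ring = minkowski_functionals(ring)
```

Applying this to an image thresholded at a sweep of grey values yields the functional-versus-threshold curves that are compared against stochastic geometry models.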

  18. Image analysis for remote examination of fuel pins

    International Nuclear Information System (INIS)

    Cook, J.H.; Nayak, U.P.

    1982-01-01

    An image analysis system operating in the Wing 9 Hot Cell Facility at Los Alamos National Laboratory provides quantitative microstructural analyses of irradiated fuels and materials. With this system, fewer photomicrographs are required during postirradiation microstructural examination and data are available for analysis much faster. The system has been used successfully to examine Westinghouse Advanced Reactors Division experimental fuel pins

  19. Automated image analysis of the pathological lung in CT

    NARCIS (Netherlands)

    Sluimer, Ingrid Christine

    2005-01-01

    The general objective of the thesis is automation of the analysis of the pathological lung from CT images. Specifically, we aim for automated detection and classification of abnormalities in the lung parenchyma. We first provide a review of computer analysis techniques applied to CT of the

  20. Deep Learning for Intelligent Substation Device Infrared Fault Image Analysis

    Directory of Open Access Journals (Sweden)

    Lin Ying

    2016-01-01

    Full Text Available As an important kind of data for device status evaluation, the increasing volume of infrared image data in electrical systems poses a new challenge to the traditional manual processing mode. To overcome this problem, this paper proposes a feasible way to automatically process massive numbers of infrared fault images. We take advantage of the imaging characteristics of infrared fault images and detect fault regions, together with the device part they belong to, using our proposed algorithm, which first segments images into superpixels and then adopts state-of-the-art convolutional and recursive neural networks for intelligent object recognition. In the experiment, we compare several unsupervised pre-training methods, considering the importance of the pre-training procedure, and discuss the proper parameters for the proposed network. The experimental results show the good performance of our algorithm and its efficiency for infrared analysis.

  1. Image analysis of ocular fundus for retinopathy characterization

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, including both nonemergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (the DRIVE database) and in hundreds of non-macula-centered views of the eye fundus with nonuniform illumination from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2), using deformable contours. Preliminary results show accurate segmentation of vessels and a high true-positive rate for microaneurysms.
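
The mathematical morphology used for enhancement can be illustrated with a white top-hat transform: subtracting the morphological opening from the image keeps thin bright structures while suppressing the smooth background (for dark vessels on a bright fundus, the dual black top-hat applies). This numpy sketch with a flat 3x3 structuring element on a toy image is a generic illustration, not the project's code.

```python
import numpy as np

def _shifted_views(img):
    """The nine 3x3-neighbourhood views of an edge-padded image."""
    p = np.pad(img, 1, mode="edge")
    return [p[r:r + img.shape[0], c:c + img.shape[1]]
            for r in range(3) for c in range(3)]

def erode(img):
    """Grayscale erosion with a flat 3x3 structuring element."""
    return np.min(_shifted_views(img), axis=0)

def dilate(img):
    """Grayscale dilation with a flat 3x3 structuring element."""
    return np.max(_shifted_views(img), axis=0)

def white_tophat(img):
    """Image minus its morphological opening: keeps thin bright detail."""
    return img - dilate(erode(img))

img = np.full((9, 9), 0.2)
img[4, :] = 0.9        # a one-pixel-wide bright "vessel" on flat background
th = white_tophat(img)
```

The opening removes the one-pixel-wide line entirely, so the top-hat response is high exactly on the thin structure and zero on the background.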

  2. Looking beyond the ELT Approach in China's Higher Education from the Perspective of Bilingual Education: Immersion, Content-Based Instruction or Something Else?

    Science.gov (United States)

    Wang, Ping

    2017-01-01

    This article starts with definitions of bilingualism with a focus on the analysis of bilingual competence. Then the aims and types of bilingual education in developing bilingual competence are introduced with focus on analyses of immersion and content-based instruction. Subsequently, the contextual settings of the study are briefly presented.…

  3. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, and the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  4. Imaging analysis of direct alanine uptake by rice seedlings

    International Nuclear Information System (INIS)

    Nihei, Naoto; Masuda, Sayaka; Rai, Hiroki; Nakanishi, Tomoko M.

    2008-01-01

    We present the uptake of alanine, an amino acid, by rice seedlings, in order to study the basic mechanism of organic fertilizer effectiveness in organic farming. Rice grown in a culture solution containing alanine as a nitrogen source absorbed alanine approximately two times faster than rice grown with NH4+, based on analysis of 14C-alanine images by the Imaging Plate method. This suggests that an active transport ability was induced in the roots of the rice seedling by the presence of alanine in the rhizosphere. Alanine uptake images of the rice roots were acquired every 5 minutes successively by the real-time autoradiography system we developed. Analysis of the successive images showed that alanine uptake was not uniform throughout the root but especially active at the root tip. (author)

  5. Multivariate image analysis for quality inspection in fish feed production

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg

    . The colour appearance of fish products is important for customers. Salmonid fish get their red colour from a natural pigment called astaxanthin. To ensure a similar red colour of fish in aquaculture astaxanthin is used as an additive coated on the feed pellets. Astaxanthin can either be of natural origin......, or synthesised chemically. Common for both types is that they are relatively expensive in comparison to the other feed ingredients. This thesis investigates multi-variate data collection for visual inspection and optimisation of industrial production in the fish feed industry. Quality parameters focused on here...... of the work demonstrate a high potential of image analysis and spectral imaging for assessing the product quality of fish feed pellets, astaxanthin and fish meat. We show how image analysis can be used to inspect the pellet size, and how spectral imaging can be used to inspect the surface quality...

  6. Implicitly Weighted Methods in Robust Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 44, č. 3 (2012), s. 449-462 ISSN 0924-9907 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robustness * high breakdown point * outlier detection * robust correlation analysis * template matching * face recognition Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.767, year: 2012

  7. Peripheral blood smear image analysis: A comprehensive review

    Directory of Open Access Journals (Sweden)

    Emad A Mohammed

    2014-01-01

    Full Text Available Peripheral blood smear image examination is a part of the routine work of every laboratory. The manual examination of these images is tedious, time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps consisting of image segmentation, feature extraction and selection and pattern classification. The image segmentation step addresses the problem of extraction of the object or region of interest from the complicated peripheral blood smear image. Support vector machine (SVM and artificial neural networks (ANNs are two common approaches to image segmentation. Feature extraction and selection aim to derive descriptive characteristics of the extracted object, which are similar within the same object class and different between different objects. This will facilitate the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes. The training data have a predefined label for every class and the learning algorithm can utilize this data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements. Unsupervised learning algorithms predict the group to which a new test object belongs, based on the training data, without assigning an explicit class to that object. ANN, SVM, decision tree and K-nearest neighbor are possible approaches to classification algorithms. Increased discrimination may be obtained by combining several classifiers together.

  8. Peripheral blood smear image analysis: A comprehensive review.

    Science.gov (United States)

    Mohammed, Emad A; Mohamed, Mostafa M A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

    Peripheral blood smear image examination is a part of the routine work of every laboratory. The manual examination of these images is tedious, time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps consisting of image segmentation, feature extraction and selection and pattern classification. The image segmentation step addresses the problem of extraction of the object or region of interest from the complicated peripheral blood smear image. Support vector machine (SVM) and artificial neural networks (ANNs) are two common approaches to image segmentation. Feature extraction and selection aim to derive descriptive characteristics of the extracted object, which are similar within the same object class and different between different objects. This will facilitate the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes. The training data have a predefined label for every class and the learning algorithm can utilize this data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements. Unsupervised learning algorithms predict the group to which a new test object belongs, based on the training data, without assigning an explicit class to that object. ANN, SVM, decision tree and K-nearest neighbor are possible approaches to classification algorithms. Increased discrimination may be obtained by combining several classifiers together.
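
The supervised classification step described above can be illustrated with the simplest of the listed classifiers, K-nearest neighbor: predict the majority label among the k training samples closest to the query in feature space. A minimal numpy sketch on invented two-feature data (think of them as a cell's area and mean intensity):

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """K-nearest-neighbor vote: a minimal supervised classifier."""
    d = np.linalg.norm(train_X - query, axis=1)      # Euclidean distances
    nearest = train_y[np.argsort(d)[:k]]             # labels of k closest
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                 # majority vote

# Toy labeled training data with two well-separated classes.
train_X = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
                    [5.0, 5.0], [5.1, 4.8], [4.9, 5.2]])
train_y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(train_X, train_y, np.array([1.1, 1.0]))
```

An unsupervised method would instead group the same points by similarity alone, without the `train_y` labels, and report cluster membership rather than a class.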

  9. A software package for biomedical image processing and analysis

    International Nuclear Information System (INIS)

    Goncalves, J.G.M.; Mealha, O.

    1988-01-01

The decreasing cost of computing power and the introduction of low cost imaging boards justify the increasing number of applications of digital image processing techniques in the area of biomedicine. There is, however, a large software gap to be filled between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user-friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: it is hierarchical, open, object oriented, and has object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been in use for more than a year and a half by users with different applications. It proved to be an excellent tool for helping new users adapt to the system, and for standardizing and exchanging software, while preserving the flexibility to accommodate users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail.

  10. Muscle contraction analysis with MRI image

    International Nuclear Information System (INIS)

    Horio, Hideyuki; Kuroda, Yoshihiro; Imura, Masataka; Oshiro, Osamu

    2010-01-01

MRI measurement is widely used owing to advantages such as the absence of radiation exposure and its high resolution. Among the various measurement targets, muscle is studied in both research and clinical practice, but it has been difficult to judge the static state of muscle contraction. In this study, we focused on the proton density change caused by blood vessel pressure during muscle contraction, and aimed to judge muscle contraction from the variance of the signal intensity. First, the background was removed from the measured images. Second, each signal was divided into a low-signal side and a high-signal side, and the variance values (σ_H, σ_L) and their ratio (μ) were calculated. Finally, the relaxed and strained states were judged from the ratio (μ). In the relaxed state the ratio (μ_r) was 0.9823 ± 0.06133, while in the strained state the ratio (μ_s) was 0.7547 ± 0.10824. A significant difference was thus obtained between the relaxed and strained states, so the strained state of the muscle could be judged by this method. (author)
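The variance-ratio judgment above can be sketched as follows. The abstract does not define the split point or the exact form of μ, so this sketch assumes a median split of the intensities and μ = σ_L/σ_H, with the decision cutoff placed midway between the reported relaxed (≈0.98) and strained (≈0.75) means. All three choices are assumptions for illustration.

```python
import statistics

def contraction_ratio(intensities):
    """Split intensities about their median (an assumed threshold) and
    return the low-side / high-side standard deviation ratio mu."""
    t = statistics.median(intensities)
    low = [v for v in intensities if v <= t]
    high = [v for v in intensities if v > t]
    return statistics.pstdev(low) / statistics.pstdev(high)

def judge_state(mu, cutoff=0.87):
    """Cutoff assumed midway between the reported relaxed and strained means."""
    return "relaxed" if mu >= cutoff else "strained"
```

A roughly symmetric intensity distribution gives μ near 1 (relaxed), while a high-signal side with much larger spread pushes μ well below the cutoff (strained).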

  11. Evaluation of stereoscopic 3D displays for image analysis tasks

    Science.gov (United States)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance the image analyst's ability to detect or to identify certain objects of interest, resulting in higher performance. The change of image acquisition from analog to digital techniques entailed a corresponding change in stereoscopic visualisation techniques. Recently, different kinds of digital stereoscopic display techniques at affordable prices have appeared on the market. At Fraunhofer IITB, usability tests were carried out to find out (1) with which of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve high acceptance. First, image analysts were interviewed to define typical image analysis tasks which were expected to be solved with higher performance using stereoscopic display techniques. Next, observer experiments were carried out in which image analysts had to solve the defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the display techniques used), two of the examined stereoscopic display technologies were found to be very good and appropriate.

  12. Telemetry Timing Analysis for Image Reconstruction of Kompsat Spacecraft

    Directory of Open Access Journals (Sweden)

    Jin-Ho Lee

    2000-06-01

Full Text Available The KOMPSAT (KOrea Multi-Purpose SATellite) has two optical imaging instruments called EOC (Electro-Optical Camera) and OSMI (Ocean Scanning Multispectral Imager). The image data of these instruments are transmitted to the ground station and restored correctly after post-processing with the telemetry data transferred from the KOMPSAT spacecraft. The major timing information of the KOMPSAT is the OBT (On-Board Time), which is formatted by the on-board computer of the spacecraft, based on a 1 Hz sync pulse from the on-board GPS receiver. The OBT is transmitted to the ground station with the housekeeping telemetry data of the spacecraft, while it is distributed to the instruments via the 1553B data bus for synchronization during imaging and formatting. The timing information contained in the spacecraft telemetry data is directly related to the image data of the instruments, and must be well understood to obtain a more accurate image. This paper addresses the timing analysis of the KOMPSAT spacecraft and instruments, including the gyro data timing analysis, for the correct restoration of the EOC and OSMI image data at the ground station.

  13. Database design and implementation for quantitative image analysis research.

    Science.gov (United States)

    Brown, Matthew S; Shah, Sumit K; Pais, Richard C; Lee, Yeng-Zhong; McNitt-Gray, Michael F; Goldin, Jonathan G; Cardenas, Alfonso F; Aberle, Denise R

    2005-03-01

    Quantitative image analysis (QIA) goes beyond subjective visual assessment to provide computer measurements of the image content, typically following image segmentation to identify anatomical regions of interest (ROIs). Commercially available picture archiving and communication systems focus on storage of image data. They are not well suited to efficient storage and mining of new types of quantitative data. In this paper, we present a system that integrates image segmentation, quantitation, and characterization with database and data mining facilities. The paper includes generic process and data models for QIA in medicine and describes their practical use. The data model is based upon the Digital Imaging and Communications in Medicine (DICOM) data hierarchy, which is augmented with tables to store segmentation results (ROIs) and quantitative data from multiple experiments. Data mining for statistical analysis of the quantitative data is described along with example queries. The database is implemented in PostgreSQL on a UNIX server. Database requirements and capabilities are illustrated through two quantitative imaging experiments related to lung cancer screening and assessment of emphysema lung disease. The system can manage the large amounts of quantitative data necessary for research, development, and deployment of computer-aided diagnosis tools.
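A toy version of such an augmented DICOM-style schema can be sketched with Python's built-in sqlite3. The table and column names below are invented stand-ins, not the paper's actual schema (which was implemented in PostgreSQL): a patient/study/series/image hierarchy is augmented with tables for segmentation results (ROIs) and quantitative measurements, and a join query mines a result back out.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# DICOM-like hierarchy (patient -> study -> series -> image), augmented with
# tables for segmentation results (ROIs) and quantitative measurements.
cur.executescript("""
CREATE TABLE patient (patient_id TEXT PRIMARY KEY);
CREATE TABLE study   (study_uid TEXT PRIMARY KEY,
                      patient_id TEXT REFERENCES patient);
CREATE TABLE series  (series_uid TEXT PRIMARY KEY,
                      study_uid TEXT REFERENCES study);
CREATE TABLE image   (sop_uid TEXT PRIMARY KEY,
                      series_uid TEXT REFERENCES series);
CREATE TABLE roi     (roi_id INTEGER PRIMARY KEY,
                      sop_uid TEXT REFERENCES image,
                      label TEXT);
CREATE TABLE measurement (roi_id INTEGER REFERENCES roi,
                          name TEXT, value REAL, unit TEXT);
""")
# Store one ROI with a quantitative result...
cur.execute("INSERT INTO patient VALUES ('P001')")
cur.execute("INSERT INTO study VALUES ('ST1', 'P001')")
cur.execute("INSERT INTO series VALUES ('SE1', 'ST1')")
cur.execute("INSERT INTO image VALUES ('IM1', 'SE1')")
cur.execute("INSERT INTO roi VALUES (1, 'IM1', 'emphysema')")
cur.execute("INSERT INTO measurement VALUES (1, 'volume', 12.5, 'ml')")
# ...then mine it back out across the whole hierarchy.
row = cur.execute("""
    SELECT p.patient_id, r.label, m.value, m.unit
    FROM measurement m
    JOIN roi r     ON m.roi_id = r.roi_id
    JOIN image i   ON r.sop_uid = i.sop_uid
    JOIN series se ON i.series_uid = se.series_uid
    JOIN study st  ON se.study_uid = st.study_uid
    JOIN patient p ON st.patient_id = p.patient_id
    WHERE m.name = 'volume'
""").fetchone()
print(row)  # -> ('P001', 'emphysema', 12.5, 'ml')
```

Keeping measurements in their own table, keyed by ROI, is what makes it easy to accumulate quantitative data from multiple experiments against the same images.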

  14. Water imaging in living plant by nondestructive neutron beam analysis

    International Nuclear Information System (INIS)

    Nakanishi, M. Tomoko

    1998-01-01

Analysis of biological activity in intact cells or tissues is essential to understand many life processes, yet techniques for such in vivo measurements have not been well developed. We present here a nondestructive method to image water in living plants using a neutron beam. This technique provides the highest resolution for water in tissue yet obtainable. With high specificity to water, this neutron beam technique images water movement in seeds or in roots embedded in soil, as well as in wood and meristems during development. The resolution of the image attainable now is about 15 μm. We also describe how this new technique will allow new investigations in the field of plant research. (author)

  15. Satellite Image Pansharpening Using a Hybrid Approach for Object-Based Image Analysis

    Directory of Open Access Journals (Sweden)

    Nguyen Thanh Hoan

    2012-10-01

    Full Text Available Intensity-Hue-Saturation (IHS, Brovey Transform (BT, and Smoothing-Filter-Based-Intensity Modulation (SFIM algorithms were used to pansharpen GeoEye-1 imagery. The pansharpened images were then segmented in Berkeley Image Seg using a wide range of segmentation parameters, and the spatial and spectral accuracy of image segments was measured. We found that pansharpening algorithms that preserve more of the spatial information of the higher resolution panchromatic image band (i.e., IHS and BT led to more spatially-accurate segmentations, while pansharpening algorithms that minimize the distortion of spectral information of the lower resolution multispectral image bands (i.e., SFIM led to more spectrally-accurate image segments. Based on these findings, we developed a new IHS-SFIM combination approach, specifically for object-based image analysis (OBIA, which combined the better spatial information of IHS and the more accurate spectral information of SFIM to produce image segments with very high spatial and spectral accuracy.
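The SFIM algorithm mentioned above modulates each low-resolution multispectral band by the ratio of the panchromatic band to its low-pass-filtered version. A minimal NumPy sketch is given below, assuming the bands are already co-registered and resampled to the same grid; the filter size and epsilon are arbitrary choices.

```python
import numpy as np

def box_mean(img, size=3):
    """Local mean via a padded sliding window (pure NumPy box filter)."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def sfim(ms_band, pan, size=3, eps=1e-6):
    """Smoothing-Filter-based Intensity Modulation:
    fused = MS * PAN / lowpass(PAN)."""
    return ms_band * pan / (box_mean(pan, size) + eps)
```

A quick sanity check: with a constant panchromatic band the low-pass term cancels and the multispectral band passes through essentially unchanged, which reflects SFIM's design goal of minimizing spectral distortion.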

  16. Imaging for dismantlement verification: Information management and analysis algorithms

    International Nuclear Information System (INIS)

    Robinson, S.M.; Jarman, K.D.; Pitts, W.K.; Seifert, A.; Misner, A.C.; Woodring, M.L.; Myjak, M.J.

    2012-01-01

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute, which must be non-sensitive to be acceptable in an Information Barrier regime. However, this process must be performed with care. Features like the perimeter, area, and intensity of an object, for example, might reveal sensitive information. Any data-reduction technique must provide sufficient information to discriminate a real object from a spoofed or incorrect one, while avoiding disclosure (or storage) of any sensitive object qualities. Ultimately, algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We discuss the utility of imaging for arms control applications and present three image-based verification algorithms in this context. The algorithms reduce full image information to non-sensitive feature information, in a process that is intended to enable verification while eliminating the possibility of image reconstruction. The underlying images can be highly detailed, since they are dynamically generated behind an information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography. We study these algorithms in terms of technical performance in image analysis and application to an information barrier scheme.

  17. Automated spine and vertebrae detection in CT images using object-based image analysis.

    Science.gov (United States)

    Schwier, M; Chitiboi, T; Hülnhagen, T; Hahn, H K

    2013-09-01

Although computer assistance has become common in medical practice, some of the most challenging tasks that remain unsolved are in the area of automatic detection and recognition, where human visual perception is in general far superior to computer vision algorithms. Object-based image analysis is a relatively new approach that aims to lift image analysis from pixel-based processing to semantic region-based processing of images. It allows effective integration of reasoning processes and contextual concepts into the recognition method. In this paper, we present an approach that applies object-based image analysis to the task of detecting the spine in computed tomography images. Spine detection would be of great benefit in several contexts, from the automatic labeling of vertebrae to the assessment of spinal pathologies. We show with our approach how region-based features, contextual information and domain knowledge, especially concerning the typical shape and structure of the spine and its components, can be used effectively in the analysis process. The results of our approach are promising, with a detection rate for vertebral bodies of 96% and a precision of 99%. We also obtain a good two-dimensional segmentation of the spine along the more central slices and a coarse three-dimensional segmentation. Copyright © 2013 John Wiley & Sons, Ltd.

  18. NEW METHOD USING IMAGE ANALYSIS TO MEASURE GINGIVAL COLOR

    Directory of Open Access Journals (Sweden)

    Takayoshi Tsubai

    2015-07-01

Full Text Available For many years, observation of gingival color has been a popular area of dental research. However, existing methods are difficult to apply under anything other than their original baseline conditions and colors. We therefore introduce an alternative method using image analysis to measure gingival color. For this research we performed a dental examination on 30 female students. The system is set up by aligning the camera area with the facial area. The subject's chin is placed in a fixed chin cup mounted 30 cm from the camera lens. Each image is acquired such that comparison may be made with the original bite holder as well as a standard color scale. After transfer to a computer, a curves dialog box was used for color adjustment; the curves dialog box allows adjustment of the entire tonal range of an image. The analysis showed that the attached gingiva was a more vivid red and yellow than the free gingiva. In conclusion, the system described herein (digital capture and comparison of color images, three-channel analysis and separation of free and attached gingival surface images, and matching with colorimetric scales) may be useful for demonstrating the diversity of gingival color as well as for the analysis of gingival health.

  19. Congruence analysis of point clouds from unstable stereo image sequences

    Directory of Open Access Journals (Sweden)

    C. Jepping

    2014-06-01

Full Text Available This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings, where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for reliable handling of occlusions and other disturbances that may occur. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
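The RANSAC-based congruence analysis described above can be sketched as follows: repeatedly fit a 3D similarity transformation to randomly selected point groups from one epoch, map them to the other epoch, and keep the transformation that explains the most points; those points form the stable (congruent) area. The transform is estimated here with Umeyama's closed-form solution, one standard choice; the group size, trial count, and tolerance are illustrative assumptions.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 3D similarity transform (scale s, rotation R,
    translation t) mapping src -> dst, via Umeyama's method."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_s = (sc ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

def congruent_points(src, dst, trials=200, tol=1e-3, seed=0):
    """RANSAC-style congruence test: fit random point groups and flag
    points whose residual under the best transform stays below tol."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(trials):
        idx = rng.choice(len(src), size=4, replace=False)
        s, R, t = fit_similarity(src[idx], dst[idx])
        res = np.linalg.norm((s * (R @ src.T)).T + t - dst, axis=1)
        inliers = res < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Points belonging to a deformed (non-stable) surface region never fit the dominant similarity transform and are left outside the inlier set.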

  20. MR image analysis: Longitudinal cardiac motion influences left ventricular measurements

    International Nuclear Information System (INIS)

    Berkovic, Patrick; Hemmink, Maarten; Parizel, Paul M.; Vrints, Christiaan J.; Paelinck, Bernard P.

    2010-01-01

    Background: Software for the analysis of left ventricular (LV) volumes and mass using border detection in short-axis images only, is hampered by through-plane cardiac motion. Therefore we aimed to evaluate software that involves longitudinal cardiac motion. Methods: Twenty-three consecutive patients underwent 1.5-Tesla cine magnetic resonance (MR) imaging of the entire heart in the long-axis and short-axis orientation with breath-hold steady-state free precession imaging. Offline analysis was performed using software that uses short-axis images (Medis MASS) and software that includes two-chamber and four-chamber images to involve longitudinal LV expansion and shortening (CAAS-MRV). Intraobserver and interobserver reproducibility was assessed by using Bland-Altman analysis. Results: Compared with MASS software, CAAS-MRV resulted in significantly smaller end-diastolic (156 ± 48 ml versus 167 ± 52 ml, p = 0.001) and end-systolic LV volumes (79 ± 48 ml versus 94 ± 52 ml, p < 0.001). In addition, CAAS-MRV resulted in higher LV ejection fraction (52 ± 14% versus 46 ± 13%, p < 0.001) and calculated LV mass (154 ± 52 g versus 142 ± 52 g, p = 0.004). Intraobserver and interobserver limits of agreement were similar for both methods. Conclusion: MR analysis of LV volumes and mass involving long-axis LV motion is a highly reproducible method, resulting in smaller LV volumes, higher ejection fraction and calculated LV mass.

  1. Photoacoustic Image Analysis for Cancer Detection and Building a Novel Ultrasound Imaging System

    Science.gov (United States)

    Sinha, Saugata

Photoacoustic (PA) imaging is a rapidly emerging non-invasive soft tissue imaging modality which has the potential to detect tissue abnormality at an early stage. Photoacoustic images map the spatially varying optical absorption property of tissue. In multiwavelength photoacoustic imaging, the soft tissue is imaged with different wavelengths, tuned to the absorption peaks of the specific light-absorbing tissue constituents or chromophores, to obtain images with different contrasts of the same tissue sample. From those images, the spatially varying concentration of the chromophores can be recovered. As multiwavelength PA images can provide important physiological information related to the function and molecular composition of the tissue, they can be used for diagnosis of cancer lesions and differentiation of malignant from benign tumors. In this research, a number of parameters have been extracted from multiwavelength 3D PA images of freshly excised human prostate and thyroid specimens, imaged at five different wavelengths. Using marked histology slides as ground truths, regions of interest (ROIs) corresponding to cancer, benign and normal regions have been identified in the PA images. The extracted parameters belong to different categories, namely chromophore concentration, frequency parameters and PA image pixels, and they represent different physiological and optical properties of the tissue specimens. Statistical analysis has been performed to test whether the extracted parameters are significantly different between cancer, benign and normal regions. A multidimensional (29-dimensional) feature set, built with the extracted parameters from the 3D PA images, has been divided randomly into training and testing sets. The training set has been used to train support vector machine (SVM) and neural network (NN) classifiers, while the performance of the classifiers in differentiating different tissue pathologies has been determined with the testing dataset. Using the NN
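The train/test evaluation described above can be sketched with synthetic data. Everything in this sketch is invented: the 29-dimensional feature vectors are random stand-ins for the extracted PA parameters, and a simple nearest-centroid rule stands in for the study's SVM and NN classifiers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 29-dimensional feature vectors for three tissue classes
# (cancer / benign / normal); class means are shifted so classes separate.
n_per_class, dim = 40, 29
means = [0.0, 3.0, 6.0]
X = np.concatenate([rng.normal(m, 1.0, size=(n_per_class, dim)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)

# Random split into training and testing sets.
idx = rng.permutation(len(X))
train, test = idx[:90], idx[90:]

# Nearest-centroid classifier: assign each test sample to the closest
# class centroid estimated from the training set.
centroids = np.stack([X[train][y[train] == c].mean(0) for c in range(3)])
pred = np.argmin(((X[test][:, None] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y[test]).mean()
print(f"test accuracy: {accuracy:.2f}")
```

On well-separated synthetic classes like these the accuracy is near perfect; real PA feature distributions overlap far more, which is why the study compares stronger classifiers.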

  2. LIRA: Low-Count Image Reconstruction and Analysis

    Science.gov (United States)

    Stein, Nathan; van Dyk, David; Connors, Alanna; Siemiginowska, Aneta; Kashyap, Vinay

    2009-09-01

LIRA is a new software package for the R statistical computing language. The package is designed for multi-scale non-parametric image analysis for use in high-energy astrophysics. The code implements an MCMC sampler that simultaneously fits the image and the necessary tuning/smoothing parameters in the model (an advance from `EMC2' of Esch et al. 2004). The model-based approach allows for quantification of the standard error of the fitted image and can be used to assess the statistical significance of features in the image or to evaluate the goodness-of-fit of a proposed model. The method does not rely on Gaussian approximations, instead modeling image counts as Poisson data, making it suitable for images with extremely low counts. LIRA can include a null (or background) model and fit the departure between the observed data and the null model via a wavelet-like multi-scale component. The technique is therefore suited for problems in which some aspect of an observation is well understood (e.g., a point source), but questions remain about observed departures. To quantitatively test for the presence of diffuse structure unaccounted for by a point source null model, first, the observed image is fit with the null model. Second, multiple simulated images, generated as Poisson realizations of the point source model, are fit using the same null model. MCMC samples from the posterior distributions of the parameters of the fitted models can be compared and can be used to calibrate the misfit between the observed data and the null model. Additionally, output from LIRA includes the MCMC draws of the multi-scale component images, so that the departure of the (simulated or observed) data from the point source null model can be examined visually. To demonstrate LIRA, an example of reconstructing Chandra images of high redshift quasars with jets is presented.

  3. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

Due to advances in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms current sparse clustering algorithms in image cluster analysis. PMID:26196383

  4. Software for 3D diagnostic image reconstruction and analysis

    International Nuclear Information System (INIS)

    Taton, G.; Rokita, E.; Sierzega, M.; Klek, S.; Kulig, J.; Urbanik, A.

    2005-01-01

Recent advances in computer technologies have opened new frontiers in medical diagnostics. Interesting possibilities are the use of three-dimensional (3D) imaging and the combination of images from different modalities. Software developed in our laboratories for 3D image reconstruction and analysis from computed tomography and ultrasonography is presented. In developing the software it was assumed that it should be applicable in standard medical practice, i.e. it should work effectively on a PC. An additional feature is the possibility of combining 3D images from different modalities. The reconstruction and data processing can be conducted using a standard PC, so low investment costs result in the introduction of advanced and useful diagnostic possibilities. The program was tested on a PC using DICOM data from computed tomography and TIFF files obtained from a 3D ultrasound system. Data from an anthropomorphic phantom and from patients were considered. A new approach was used to achieve spatial correlation of two independently obtained 3D images. The method relies on the use of four pairs of markers within the regions under consideration. The user selects the markers manually and the computer calculates the transformations necessary for coupling the images. The main software feature is the possibility of 3D image reconstruction from a series of two-dimensional (2D) images. The reconstructed 3D image can be: (1) viewed with the most popular methods of 3D image viewing, (2) filtered and processed to improve image quality, (3) analyzed quantitatively (geometrical measurements), and (4) coupled with another, independently acquired 3D image. The reconstructed and processed 3D image can be stored at every stage of image processing. The overall software performance was good considering the relatively low cost of the hardware used and the huge data sets processed. The program can be freely used and tested (source code and program available at
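The marker-based coupling of two 3D images can be sketched as a least-squares fit to the four user-selected point pairs. The paper does not state which transformation model it uses, so the sketch below assumes an affine model (matrix A plus offset b), for which four non-coplanar marker pairs determine the transform exactly.

```python
import numpy as np

def marker_transform(src, dst):
    """Least-squares affine transform (3x3 matrix A and offset b) mapping
    the user-picked marker points src onto dst. Four non-coplanar pairs
    determine it exactly; more pairs are averaged in the least-squares sense."""
    n = len(src)
    M = np.hstack([src, np.ones((n, 1))])             # n x 4 design matrix
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)  # 4 x 3 solution
    A, b = params[:3].T, params[3]
    return A, b

def apply_transform(A, b, pts):
    """Map points from one 3D image into the other's coordinate frame."""
    return pts @ A.T + b
```

Once A and b are known, every voxel coordinate of the second image can be mapped into the first image's frame, which is what allows the two independently acquired 3D images to be fused.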

  5. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    Science.gov (United States)

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  6. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in-service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots, and most of this wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be used to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field strain distributions generated by optical techniques such as digital image correlation and thermoelastic stress analysis, as well as by analytical and numerical models, by treating the strain distributions as images. The result of the decomposition is 10^1 to 10^2 image descriptors instead of the 10^5 or 10^6 pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
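As a toy example of the idea, the sketch below reduces a full-field map to the magnitudes of its low-order 2D Fourier coefficients and compares "experiment" against "model" through those few descriptors rather than pixel by pixel. The field shapes, descriptor count, and noise level are invented; Zernike moments would be the other natural descriptor basis.

```python
import numpy as np

def fourier_descriptors(field, k=5):
    """Reduce a full-field map to the magnitudes of its k x k
    lowest-frequency 2D Fourier coefficients: ~10^1-10^2 descriptors
    instead of 10^5-10^6 pixels."""
    F = np.fft.fftshift(np.fft.fft2(field))
    cy, cx = np.array(F.shape) // 2
    block = F[cy - k // 2: cy + k // 2 + 1, cx - k // 2: cx + k // 2 + 1]
    return np.abs(block).ravel()

# A synthetic "model" strain field and a noisy "measured" counterpart.
y, x = np.mgrid[0:64, 0:64] / 64.0
model = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
measured = model + 0.01 * np.random.default_rng(0).normal(size=model.shape)

d_model = fourier_descriptors(model)
d_meas = fourier_descriptors(measured)
discrepancy = np.linalg.norm(d_meas - d_model) / np.linalg.norm(d_model)
print(f"relative descriptor discrepancy: {discrepancy:.4f}")
```

The statistical comparison then happens in the small descriptor space: a low relative discrepancy indicates the model reproduces the dominant features of the measured field.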

  7. Geometric error analysis for shuttle imaging spectrometer experiment

    Science.gov (United States)

    Wang, S. J.; Ih, C. H.

    1984-01-01

The demand for more powerful tools for remote sensing and management of Earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high-resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  8. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah, Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to distinguish normal from abnormal tissue density, and a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine organ boundaries and calculate the area of the segmented regions. The results show that the fractal method based on 2D Fourier analysis can distinguish between normal and abnormal breasts, and that segmentation with the K-Means clustering algorithm can generate the boundaries of normal and abnormal tissue, so the area of abnormal tissue can be determined.
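The K-Means segmentation step can be sketched on pixel intensities with plain NumPy. The image below is synthetic (a bright square on a dark background); the study's actual preprocessing and fractal density analysis are not reproduced here.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Minimal K-Means on 1-D pixel intensities."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each pixel to the nearest cluster center...
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # ...then move each center to the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

# Synthetic "mammogram": dark background with one bright square region.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
img += 0.05 * np.random.default_rng(1).normal(size=img.shape)

labels, centers = kmeans_1d(img.ravel())
# Area of the segmented region = pixel count of the bright cluster.
area = int((labels == np.argmax(centers)).sum())
print(f"segmented area: {area} pixels")
```

Reshaping `labels` back to the image shape yields the segmentation mask whose boundary and area correspond to the quantities the study computes for abnormal tissue.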

  9. Computer-aided photometric analysis of dynamic digital bioluminescent images

    Science.gov (United States)

    Gorski, Zbigniew; Bembnista, T.; Floryszak-Wieczorek, J.; Domanski, Marek; Slawinski, Janusz

    2003-04-01

    The paper deals with photometric and morphologic analysis of bioluminescent images obtained by registering light radiated directly from plant objects. Registration of images obtained from ultra-weak light sources by the single photon counting (SPC) technique is the subject of this work. The radiation is registered with a 16-bit charge coupled device (CCD) camera "Night Owl" together with WinLight EG&G Berthold software. Additional application-specific software has been developed in order to deal with objects that change during the exposure time. Advantages of the elaborated set of easily configurable tools, named FCT, for computer-aided photometric and morphologic analysis of numerous series of quantitatively imperfect chemiluminescent images are described. Instructions are given on how to use these tools, exemplified with several algorithms for transforming the image library. Using the proposed FCT set, automatic photometric and morphologic analysis reveals the information hidden within series of chemiluminescent images reflecting defensive processes in poinsettia (Euphorbia pulcherrima Willd) leaves affected by the pathogenic fungus Botrytis cinerea.

  10. Aural analysis of image texture via cepstral filtering and sonification

    Science.gov (United States)

    Rangayyan, Rangaraj M.; Martins, Antonio C. G.; Ruschioni, Ruggero A.

    1996-03-01

    Texture plays an important role in image analysis and understanding, with many applications in medical imaging and computer vision. However, analysis of texture by image processing is a rather difficult issue, with most techniques oriented towards statistical analysis that may not have readily comprehensible perceptual correlates. We propose new methods for auditory display (AD) and sonification of (quasi-) periodic texture (where a basic texture element or `texton' is repeated over the image field) and random texture (which could be modeled as filtered or `spot' noise). Although the AD designed is not intended to be speech-like or musical, we draw analogies between the two types of texture mentioned above and voiced/unvoiced speech, and design a sonification algorithm which incorporates physical and perceptual concepts of texture and speech. More specifically, we present a method for AD of texture where the projections of the image at various angles (Radon transforms or integrals) are mapped to audible signals and played in sequence. In the case of random texture, the spectral envelopes of the projections are related to the filter spot characteristics, and convey the essential information for texture discrimination. In the case of periodic texture, the AD provides timbre and pitch related to the texton and periodicity. In another procedure for sonification of periodic texture, we propose to first deconvolve the image using cepstral analysis to extract information about the texton and the horizontal and vertical periodicities. The projections of individual textons at various angles are used to create a voiced-speech-like signal, with each projection mapped to a basic wavelet, the horizontal period to pitch, and the vertical period to rhythm on a longer time scale. The sound pattern then consists of a serial, melody-like sonification of the patterns for each projection. We believe that our approaches provide the much-desired `natural' connection between the image…
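    The projection step the abstract describes can be sketched simply. This is a crude stand-in for the Radon transform (nearest-bin accumulation of pixel values along a rotated axis), not the authors' implementation:

```python
import math

def project(img, angle_deg, n_bins=None):
    """Approximate Radon projection: sum pixel values into bins along
    an axis rotated by angle_deg (nearest-bin accumulation)."""
    h, w = len(img), len(img[0])
    theta = math.radians(angle_deg)
    # signed coordinate of each pixel along the rotated axis
    coords = [(x * math.cos(theta) + y * math.sin(theta), img[y][x])
              for y in range(h) for x in range(w)]
    lo = min(c for c, _ in coords)
    hi = max(c for c, _ in coords)
    n_bins = n_bins or max(h, w)
    span = (hi - lo) or 1.0
    bins = [0.0] * n_bins
    for c, v in coords:
        bins[min(int((c - lo) / span * n_bins), n_bins - 1)] += v
    return bins

# a vertical bright stripe: the 0-degree projection is sharply peaked,
# the 90-degree projection is flat
img = [[1 if x == 2 else 0 for x in range(5)] for _ in range(5)]
p0 = project(img, 0)    # concentrated in one bin
p90 = project(img, 90)  # spread evenly across bins
```

    Each such projection would then be mapped to an audible signal; a peaked projection and a flat one sound clearly different, which is the basis of the discrimination the authors describe.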

  11. Structural characterisation of semiconductors by computer methods of image analysis

    Science.gov (United States)

    Hernández-Fenollosa, M. A.; Cuesta-Frau, D.; Damonte, L. C.; Satorre Aznar, M. A.

    2005-08-01

    Analysis of microscopic images for automatic particle detection and extraction is a field of growing interest in many scientific fields such as biology, medicine and physics. In this paper we present a method to analyze microscopic images of semiconductors in order to obtain, in a non-supervised way, the main characteristics of the sample under test: growing regions, grain sizes, dendrite morphology and homogenization. In particular, nanocrystalline semiconductors with dimensions less than 100 nm represent a relatively new class of materials. Their short-range structures are essentially the same as bulk semiconductors, but their optical and electronic properties are dramatically different. The images are obtained by scanning electron microscopy (SEM) and processed by the computer methods presented. Traditionally these tasks have been performed manually, which is time-consuming and subjective, in contrast to our computer analysis. The acquired images are first pre-processed in order to improve the signal-to-noise ratio and therefore the detection rate. Images are filtered by a weighted-median filter, and contrast is enhanced using histogram equalization. Then, images are thresholded using a binarization algorithm in such a way that growing regions are segmented. This segmentation is based on the different grey levels due to the different sample heights of the growing areas. Next, the resulting image is further processed to eliminate the holes and spots left by the previous stage, and this image is used to compute the percentage of such growing areas. Finally, using pattern recognition techniques (contour following and raster-to-vector transformation), single crystals are extracted to obtain their characteristics.
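    The filter-then-binarize pipeline can be sketched as below. The paper specifies a weighted-median filter, histogram equalization, and an unnamed binarization algorithm; as a simplified stand-in, this sketch uses a plain 3x3 median filter and Otsu's threshold:

```python
def median3x3(img):
    """Apply a 3x3 median filter (border pixels kept unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 values
    return out

def otsu_threshold(img):
    """Pick the grey level that maximises between-class variance."""
    hist = [0] * 256
    for row in img:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (total_sum - sum0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# synthetic SEM-like patch: dark substrate (~20) with a bright growing region (~220)
img = [[20] * 6 for _ in range(6)]
for y in range(2, 5):
    for x in range(2, 5):
        img[y][x] = 220
smooth = median3x3(img)
t = otsu_threshold(smooth)
binary = [[1 if v > t else 0 for v in row] for row in smooth]
```

    The fraction of 1-pixels in `binary` is then the "percentage of growing areas" the abstract refers to.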

  12. Secure thin client architecture for DICOM image analysis

    Science.gov (United States)

    Mogatala, Harsha V. R.; Gallet, Jacqueline

    2005-04-01

    This paper presents a concept of Secure Thin Client (STC) Architecture for Digital Imaging and Communications in Medicine (DICOM) image analysis over the Internet. The STC Architecture provides in-depth analysis and design of customized reports for DICOM images using drag-and-drop and data warehouse technology. Using a personal computer and a common set of browsing software, STC can be used for analyzing and reporting detailed patient information, type of examination, date, Computed Tomography (CT) dose index, and other relevant information stored within the image header files as well as in the hospital databases. The STC Architecture is a three-tier architecture. The first tier consists of a drag-and-drop web-based interface and web server, which provides customized analysis and reporting ability to the users. The second tier consists of an online analytical processing (OLAP) server and database system, which serves fast, real-time, aggregated multi-dimensional data using OLAP technology. The third tier consists of a smart algorithm-based software program which extracts DICOM tags from CT images in this particular application, irrespective of CT vendor, and transfers these tags into a secure database system. This architecture provides the Winnipeg Regional Health Authority (WRHA) with quality indicators for CT examinations in the hospitals. It also provides health care professionals with an analytical tool to optimize radiation dose and image quality parameters. The information is provided to the user by way of a secure socket layer (SSL) and role-based security criteria over the Internet. Although this particular application has been developed for WRHA, this paper also discusses the effort to extend the Architecture to other hospitals in the region. Any DICOM tag from any imaging modality could be tracked with this software.
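    The tag-extraction step can be illustrated with a toy parser. Real DICOM files begin with a 128-byte preamble, a 'DICM' magic number, and a file-meta group, and several VRs use a 4-byte length field; this sketch handles only short-form explicit-VR little-endian elements, which is enough to show the wire layout. Tag (0008,0060) is the standard Modality attribute:

```python
import struct

def parse_explicit_vr(buf):
    """Parse DICOM data elements (explicit VR little endian, short-form VRs only)."""
    tags, off = {}, 0
    while off < len(buf):
        group, elem = struct.unpack_from('<HH', buf, off)
        vr = buf[off + 4:off + 6].decode('ascii')
        length, = struct.unpack_from('<H', buf, off + 6)
        value = buf[off + 8:off + 8 + length]
        tags[(group, elem)] = (vr, value)
        off += 8 + length
    return tags

# one element: tag (0008,0060) Modality, VR 'CS', value 'CT'
raw = struct.pack('<HH', 0x0008, 0x0060) + b'CS' + struct.pack('<H', 2) + b'CT'
tags = parse_explicit_vr(raw)
print(tags[(0x0008, 0x0060)])  # → ('CS', b'CT')
```

    In a production system such as the one described, a maintained library (e.g. pydicom) would be used instead of hand-rolled parsing; the point here is only the tag/VR/length/value structure that makes vendor-independent extraction possible.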

  13. Image processing and analysis using neural networks for optometry area

    Science.gov (United States)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on neural nets, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  14. Trabecular architecture analysis in femur radiographic images using fractals.

    Science.gov (United States)

    Udhayakumar, G; Sujatha, C M; Ramakrishnan, S

    2013-04-01

    Trabecular bone is a highly complex anisotropic material that exhibits varying magnitudes of strength in compression and tension. Analysis of the trabecular architectural alterations that manifest as loss of trabecular plates and connections has been shown to yield better estimation of bone strength. In this work, an attempt has been made toward the development of an automated system for investigation of trabecular femur bone architecture using fractal analysis. Conventional radiographic femur bone images recorded using standard protocols are used in this study. The compressive and tensile regions in the images are delineated using preprocessing procedures. The delineated images are analyzed using Higuchi's fractal method to quantify pattern heterogeneity and anisotropy of the trabecular bone structure. The results show that the extracted fractal features are distinct for compressive and tensile regions of normal and abnormal human femur bone. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
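    Higuchi's method estimates the fractal dimension of a 1-D profile (e.g. a grey-level line extracted from the radiograph) from how curve length scales with sampling interval. A straightforward implementation of the estimator, with `kmax` as a free parameter (the paper's choice is not stated):

```python
import math

def higuchi_fd(x, kmax=8):
    """Estimate the Higuchi fractal dimension of a 1-D profile."""
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            pts = [x[i] for i in range(m, n, k)]
            if len(pts) < 2:
                continue
            # curve length for this offset, normalised per Higuchi (1988)
            dist = sum(abs(pts[i] - pts[i - 1]) for i in range(1, len(pts)))
            norm = (n - 1) / ((len(pts) - 1) * k)
            lengths.append(dist * norm / k)
        log_k.append(math.log(1.0 / k))
        log_l.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) against log(1/k) gives the dimension
    mk = sum(log_k) / len(log_k)
    ml = sum(log_l) / len(log_l)
    num = sum((a - mk) * (b - ml) for a, b in zip(log_k, log_l))
    den = sum((a - mk) ** 2 for a in log_k)
    return num / den

ramp = list(range(1000))           # a straight line has dimension 1
print(round(higuchi_fd(ramp), 3))  # → 1.0
```

    A smooth profile yields a dimension near 1, while an irregular (e.g. osteoporotic) trabecular profile yields a higher value, which is what makes the feature discriminative.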

  15. Imaging spectroscopic analysis at the Advanced Light Source

    International Nuclear Information System (INIS)

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-01-01

    One of the major advances at the high-brightness third-generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infra-red spectroscopic analysis on a spatial scale from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. The following table lists all the currently available microscopes at the ALS. This article will describe some of the microscopes and some of the early applications.

  16. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, M.; Rosenvinge, F. S.; Spillum, E.

    2015-01-01

    Background: Antibiotics of the beta-lactam group are able to alter the shape of the bacterial cell wall, e.g. filamentation or spheroplast formation. Early determination of antimicrobial susceptibility may be complicated by filamentation of bacteria, as this can be falsely interpreted as growth in systems relying on colorimetry or turbidometry (such as Vitek-2, Phoenix, MicroScan WalkAway). The objective was to examine an automated image analysis algorithm for quantification of filamentous bacteria using the 3D digital microscopy imaging system, oCelloScope. Results: Three E. coli strains displaying different resistant profiles and differences in filamentation kinetics were used to study a novel image analysis algorithm to quantify length of bacteria and bacterial filamentation. A total of 12 beta-lactam antibiotics or beta-lactam-beta-lactamase inhibitor combinations were analyzed...

  17. Automated rice leaf disease detection using color image analysis

    Science.gov (United States)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
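    The histogram intersection used to find the outlier region can be sketched in a few lines. This is a minimal stdlib version over flat lists of grey levels; the paper presumably works per colour channel and per region, which is omitted here:

```python
def histogram(values, bins=8, lo=0, hi=256):
    """Bin pixel values into a normalised histogram."""
    h = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        h[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    return [c / total for c in h]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

healthy = histogram([40, 45, 50, 200, 210, 205])   # healthy-leaf reference
query = histogram([40, 45, 50, 120, 130, 125])     # leaf under test
print(intersection(healthy, healthy))  # → 1.0 for the same image
print(intersection(healthy, query) < 1.0)  # → True
```

    Regions whose histograms intersect poorly with the healthy reference are flagged as outliers and passed on to the clustering stage.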

  18. The medical analysis of child sexual abuse images.

    Science.gov (United States)

    Cooper, Sharon W

    2011-11-01

    Analysis of child sexual abuse images, commonly referred to as pornography, requires a familiarity with the sexual maturation rating of children and an understanding of growth and development parameters. This article explains barriers that exist in working in this area of child abuse, the differences between subjective and objective analyses, methods used in working with this form of contraband, and recommendations that analysts document their findings in a format that allows for verbal descriptions of the images so that the content will be reflected in legal proceedings should there exist an aversion to visual review. Child sexual abuse images are a digital crime scene, and analysis requires a careful approach to assure that all victims may be identified.

  19. Analysis and Comparison of Objective Methods for Image Quality Assessment

    Directory of Open Access Journals (Sweden)

    P. S. Babkin

    2014-01-01

    Full Text Available The purpose of this work is the research and modification of reference objective methods for image quality assessment. The ultimate goal is to obtain a modification of formal assessments that corresponds more closely to subjective expert estimates (MOS). In considering formal reference objective methods for image quality assessment, we used the results of other authors, who offer comparative analyses of the most effective algorithms. Based on these investigations we chose the two most successful algorithms, PQS and MSSSIM, for which further analysis was carried out in MATLAB 7.8 (R2009a). The publication focuses on features of the algorithms that are of great importance in practical implementation but are insufficiently covered in publications by other authors. In the implemented modification of the PQS algorithm, the Kirsch boundary detector was replaced by the Canny boundary detector. Further experiments were carried out according to the method of ITU-R BT.500-13 (01/2012) using monochrome images treated with different types of filters (it should be emphasized that the PQS objective image quality assessment is applicable only to monochrome images). The images were obtained with a thermal imaging surveillance system. The experimental results proved the effectiveness of this modification. In the specialized literature on formal image quality evaluation methods, this type of modification has not been mentioned. The method described in the publication can be applied to various practical implementations of digital image processing. The advisability and effectiveness of using the modified PQS method to assess structural differences between images are shown in the article, and this will be used in solving problems of identification and automatic control.
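    For orientation, the structural-similarity idea behind MSSSIM can be shown in its simplest form. This is a single-scale, single-window SSIM over whole images flattened to lists, not the multi-scale, sliding-window MSSSIM the paper evaluates:

```python
def ssim_global(x, y, data_range=255, k1=0.01, k2=0.03):
    """Single-window SSIM over two equal-length grey-level lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = [10, 50, 90, 130, 170, 210]
noisy = [v + d for v, d in zip(ref, [4, -3, 5, -4, 3, -5])]
print(ssim_global(ref, ref))            # → 1.0 for identical images
print(0 < ssim_global(ref, noisy) < 1)  # → True
```

    Full MSSSIM applies this comparison in local windows across several downsampled scales and combines the per-scale results, which is what makes it sensitive to the structural differences the article discusses.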

  20. Particle image identification and correlation analysis in microscopic holographic particle image velocimetry

    International Nuclear Information System (INIS)

    Wormald, S. Andrew; Coupland, Jeremy

    2009-01-01

    This paper discusses the different analysis methods used in holographic particle image velocimetry to measure particle displacement and compares their relative performance. A digital holographic microscope is described and is used to record the light scattered by particles deposited on cover slides that are displaced between exposures. In this way, particle position and displacement are controlled and a numerical data set is generated. Data extraction using nearest neighbor analysis and correlation of either the reconstructed complex amplitude or intensity fields is then investigated.
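    The nearest-neighbor analysis mentioned above can be sketched directly: each detected particle in the first exposure is paired with the closest particle in the second, and the difference of positions is the displacement estimate. The particle coordinates here are invented for illustration:

```python
def nearest_neighbour_displacements(frame_a, frame_b):
    """Pair each particle in frame A with its nearest particle in frame B
    and return the implied displacement vectors."""
    out = []
    for (xa, ya) in frame_a:
        bx, by = min(frame_b,
                     key=lambda p: (p[0] - xa) ** 2 + (p[1] - ya) ** 2)
        out.append((bx - xa, by - ya))
    return out

# particles shifted by a uniform (2, 1) between exposures
a = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
b = [(2.0, 1.0), (7.0, 6.0), (11.0, 2.0)]
print(nearest_neighbour_displacements(a, b))  # → [(2.0, 1.0)] * 3
```

    This simple pairing fails when displacements exceed the inter-particle spacing, which is why the paper compares it against correlation of the reconstructed complex amplitude or intensity fields.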

  1. Particle image identification and correlation analysis in microscopic holographic particle image velocimetry

    Energy Technology Data Exchange (ETDEWEB)

    Wormald, S. Andrew; Coupland, Jeremy

    2009-11-20

    This paper discusses the different analysis methods used in holographic particle image velocimetry to measure particle displacement and compares their relative performance. A digital holographic microscope is described and is used to record the light scattered by particles deposited on cover slides that are displaced between exposures. In this way, particle position and displacement are controlled and a numerical data set is generated. Data extraction using nearest neighbor analysis and correlation of either the reconstructed complex amplitude or intensity fields is then investigated.

  2. Way Forward in the Twenty-First Century in Content-Based Instruction: Moving towards Integration

    Science.gov (United States)

    Ruiz de Zarobe, Yolanda; Cenoz, Jasone

    2015-01-01

    The aim of this paper is to reflect on the theoretical and methodological underpinnings that provide the basis for an understanding of Content-Based Instruction/Content and Language Integrated Learning (CBI/CLIL) in the field and its relevance in education in the twenty-first century. It is argued that the agenda of CBI/CLIL needs to move towards…

  3. Content-Based Instruction Understood in Terms of Connectionism and Constructivism

    Science.gov (United States)

    Lain, Stephanie

    2016-01-01

    Despite the number of articles devoted to the topic of content-based instruction (CBI), little attempt has been made to link the claims for CBI to research in cognitive science. In this article, I review the CBI model of foreign language (FL) instruction in the context of its close alignment with two emergent frameworks in cognitive science:…

  4. Teacher-Learner Negotiation in Content-Based Instruction: Communication at Cross-Purposes?

    Science.gov (United States)

    Musumeci, Diane

    1996-01-01

    Examines teacher-student exchanges in three content-based language classrooms. Data reveal persistent archetypal patterns of classroom interaction; teachers speak most of the time, and they initiate the majority of the exchanges by asking display questions, whereas student-initiated requests are referential. (30 references) (Author/CK)

  5. Developing Content and Form: Encouraging Evidence from Italian Content-Based Instruction

    Science.gov (United States)

    Rodgers, Daryl M.

    2006-01-01

    Swain (1985) pointed out the need for increased modified output in the classroom in order to encourage learners to engage in more syntactic processing and, thus, make more form-meaning connections. Research in content-based instruction (CBI) ( Musumeci, 1996; Pica, 2002) has revealed few occasions of pushed modified output from learners.…

  6. The Impact of Content-Based Network Technologies on Perceptions of Nutrition Literacy

    Science.gov (United States)

    Brewer, Hannah; Church, E. Mitchell; Brewer, Steven L.

    2016-01-01

    Background: Consumers are exposed to obesogenic environments on a regular basis. Building nutrition literacy is critical for sustaining healthy dietary habits for a lifetime and reducing the prevalence of chronic disease. Purpose: There is a need to investigate the impact of content-based network (CBN) technologies on perceptions of nutrition…

  7. Implementing Task-Oriented Content-Based Instruction for First- and Second-Generation Immigrant Students

    Science.gov (United States)

    Santana-Williamson, Eliana

    2013-01-01

    This article discusses how the ESL program at an ethnically/linguistically diverse community college (between San Diego and the Mexican border) moved from a general, grammar-based ESL curriculum to a content-based instruction (CBI) curriculum. The move was designed to better prepare 1st- and 2nd-generation immigrant students for freshman…

  8. The Grammar of History: Enhancing Content-Based Instruction through a Functional Focus on Language

    Science.gov (United States)

    Schleppegrell, Mary J.; Achugar, Mariana; Oteiza, Teresa

    2004-01-01

    In K-12 contexts, the teaching of English language learners (ELLs) has been greatly influenced by the theory and practice of content-based instruction (CBI). A focus on content can help students achieve grade-level standards in school subjects while they develop English proficiency, but CBI practices have focused primarily on vocabulary and the…

  9. Roles of Frequency, Attitudes, and Multiple Intelligence Modality Surrounding Electricity Content-Based Reader's Theatre

    Science.gov (United States)

    Hosier, Julie Winchester

    2009-01-01

    Integration of subjects is something elementary teachers must do to insure required objectives are covered. Science-based Reader's Theatre is one way to weave reading into science. This study examined the roles of frequency, attitudes, and Multiple Intelligence modalities surrounding Electricity Content-Based Reader's Theatre. This study used…

  10. Flipping Every Student? A Case Study of Content-Based Flipped Language Classrooms

    Science.gov (United States)

    Sun, Yu-Chih

    2017-01-01

    The study aims to explore university-level foreign language learners' perceptions of the content-based flipped classroom approach and factors influencing their perceptions. The research questions guiding the study are three-fold. (a) What attitudes and perceptions do students have about language and knowledge acquisition in the content-based…

  11. Incorporating Active Learning with PowerPoint-Based Lectures Using Content-Based Questions

    Science.gov (United States)

    Gier, Vicki S.; Kreiner, David S.

    2009-01-01

    Instructors often use Microsoft PowerPoint lectures and handouts as support tools to provide students with the main concepts of the lectures. Some instructors and researchers believe that PowerPoint encourages student passivity. We conducted 2 studies to determine whether the use of content-based questions (CBQs) would enhance learning when…

  12. An Overview of Data Models and Query Languages for Content-based Video Retrieval

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem

    As a large amount of video data becomes publicly available, the need to model and query this data efficiently becomes significant. Consequently, content-based retrieval of video data turns out to be a challenging and important problem addressing areas such as video modelling, indexing, querying,

  13. Thai EFL Learners' Attitudes and Motivation towards Learning English through Content-Based Instruction

    Science.gov (United States)

    Lai Yuanxing; Aksornjarung, Prachamon

    2018-01-01

    This study examined EFL learners' attitudes and motivation towards learning English through content-based instruction (CBI) at a university in Thailand. Seventy-one (71) university students, the majority sophomores, answered a 6-point Likert scale questionnaire on attitudes and motivation together with six open-ended questions regarding learning…

  14. Diagnosis of cutaneous thermal burn injuries by multispectral imaging analysis

    Science.gov (United States)

    Anselmo, V. J.; Zawacki, B. E.

    1978-01-01

    Special photographic or television image analysis is shown to be a potentially useful technique to assist the physician in the early diagnosis of thermal burn injury. A background on the medical and physiological problems of burns is presented. The proposed methodology for burns diagnosis from both the theoretical and clinical points of view is discussed. The television/computer system constructed to accomplish this analysis is described, and the clinical results are discussed.

  15. From Image Analysis to Computer Vision: Motives, Methods, and Milestones.

    Science.gov (United States)

    1998-07-01

    valuable information can be derived about agriculture, natural resources, hydrology, geology, geography, and cartography, and for use in military...the statistical analysis of spatial data see [15], and on fractal models see [157]. An early meeting on texture analysis was [300], and a workshop...natural texture, Proc. DARPA Image Understanding Workshop, October 1977, 19-27. [157] B.B. Mandelbrot, Fractals: Form, Chance, and Dimension, W.H

  16. SEMI-SUPERVISED MARGINAL FISHER ANALYSIS FOR HYPERSPECTRAL IMAGE CLASSIFICATION

    OpenAIRE

    H. Huang; J. Liu; Y. Pan

    2012-01-01

    The problem of learning with both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis is a supervised method and cannot be directly applied to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), to process HSI of natural scenes, which uses a combination of semi-supervised learning and manifold learning. In SSMFA, a new difference...

  17. Evaluating wood failure in plywood shear by optical image analysis

    Science.gov (United States)

    Charles W. McMillin

    1984-01-01

    This exploratory study evaluates the potential of using an automatic image analysis method to measure percent wood failure in plywood shear specimens. The results suggest that this method may be as accurate as the visual method in tracking long-term gluebond quality. With further refinement, the method could lead to automated equipment replacing the subjective visual...

  18. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Fiehn, Anne-Marie Kanstrup; Kristensson, Martin; Engel, Ulla

    2016-01-01

    PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...

  19. Training for thorax diagnostics. Systematic cardiopulmonary image analysis

    International Nuclear Information System (INIS)

    Kirchner, Johannes

    2010-01-01

    The training book on thorax diagnostics using image analysis is intended as a supplement to the usual textbooks, based on the comprehensive experience of radiologists. The issues covered are the following: heart insufficiency; acute/chronic bronchitis and pulmonary emphysema; pneumonia and tuberculosis; bronchial carcinoma; lung fibrosis, sarcoidosis and pneumoconiosis; pleural effusion and pneumothorax.

  20. The Medical Analysis of Child Sexual Abuse Images

    Science.gov (United States)

    Cooper, Sharon W.

    2011-01-01

    Analysis of child sexual abuse images, commonly referred to as pornography, requires a familiarity with the sexual maturation rating of children and an understanding of growth and development parameters. This article explains barriers that exist in working in this area of child abuse, the differences between subjective and objective analyses,…

  1. Measuring onion cultivars with image analysis using inflection points

    NARCIS (Netherlands)

    Heijden, van der G.W.A.M.; Vossepoel, A.M.; Polder, G.

    1996-01-01

    The suitability of image analysis was studied to measure bulb characteristics for varietal testing of onions (Allium cepa L.). Eighteen genotypes were used, which covered a whole range of onion shapes, including some quite identical ones. The characteristic height and diameter were measured both by

  2. Identification of Trichoderma strains by image analysis of HPLC chromatograms

    DEFF Research Database (Denmark)

    Thrane, Ulf; Poulsen, S.B.; Nirenberg, H.I.

    2001-01-01

    Forty-four Trichoderma strains from water-damaged building materials or indoor dust were classified with chromatographic image analysis on full chromatographic matrices obtained by high performance liquid chromatography with UV detection of culture extracts. The classes were compared with morphol...

  3. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplify the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  4. Automated three-dimensional analysis of particle measurements using an optical profilometer and image analysis software.

    Science.gov (United States)

    Bullman, V

    2003-07-01

    The automated collection of topographic images from an optical profilometer, coupled with existing image analysis software, offers the unique ability to quantify three-dimensional particle morphology. Optional software available with most optical profilers permits automated collection of adjacent topographic images of particles dispersed onto a suitable substrate. Particles are recognized in the image as a set of continuous pixels with grey-level values above the grey level assigned to the substrate, whereas particle height or thickness is represented in the numerical differences between these grey levels. These images are loaded into remote image analysis software where macros automate image processing and then distinguish particles for feature analysis, including standard two-dimensional measurements (e.g. projected area, length, width, aspect ratios) and three-dimensional measurements (e.g. maximum height, mean height). Feature measurements from each calibrated image are automatically added to cumulative databases and exported to a commercial spreadsheet or statistical program for further data processing and presentation. An example is given that demonstrates the superiority of quantitative three-dimensional measurements by optical profilometry and image analysis in comparison with conventional two-dimensional measurements for the characterization of pharmaceutical powders with plate-like particles.
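    The particle-recognition step described above (continuous pixels above the substrate grey level, with height encoded in the grey values) can be sketched with a simple flood fill. The topography values are invented for illustration:

```python
def particle_features(height, threshold):
    """Label particles as 4-connected pixels above the substrate threshold,
    returning (projected area, max height) per particle."""
    h, w = len(height), len(height[0])
    seen = [[False] * w for _ in range(h)]
    feats = []
    for y0 in range(h):
        for x0 in range(w):
            if seen[y0][x0] or height[y0][x0] <= threshold:
                continue
            stack, area, peak = [(y0, x0)], 0, height[y0][x0]
            seen[y0][x0] = True
            while stack:
                y, x = stack.pop()
                area += 1
                peak = max(peak, height[y][x])
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and height[ny][nx] > threshold:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            feats.append((area, peak))
    return feats

# two particles on a flat substrate (height 0)
topo = [[0, 0, 0, 0, 0],
        [0, 5, 7, 0, 0],
        [0, 6, 8, 0, 3],
        [0, 0, 0, 0, 4]]
print(particle_features(topo, threshold=0))  # → [(4, 8), (2, 4)]
```

    Each tuple corresponds to one row of the cumulative feature database the abstract describes; further 2-D and 3-D descriptors (length, width, mean height) would be computed from the same labeled regions.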

  5. Image Processing Tools for Improved Visualization and Analysis of Remotely Sensed Images for Agriculture and Forest Classifications

    OpenAIRE

    SINHA G. R.

    2017-01-01

    This paper suggests image processing tools for improved visualization and better analysis of remotely sensed images. Methods are already available in the literature for this purpose, but the most important limitation among them is a lack of robustness. We propose an optimal method for image enhancement using fuzzy-based approaches and a few optimization tools. The segmentation images subsequently obtained after de-noising will be classified into distinct information and th...

  6. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    Science.gov (United States)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers mean that the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - no longer scales, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers much of the basic visualization and analysis functionality commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open several images in their web browser, adjust the intensity min/max cutoffs, scaling function, and zoom level, apply color-maps, view position and FITS header information, execute commonly used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects, etc.

  7. Automatic analysis of image quality control for Image Guided Radiation Therapy (IGRT) devices in external radiotherapy

    International Nuclear Information System (INIS)

    Torfeh, Tarraf

    2009-01-01

    On-board imagers mounted on a radiotherapy treatment machine are very effective devices that improve the geometric accuracy of radiation delivery. However, a precise and regular quality control program is required in order to achieve this objective. Our purpose was to develop software tools dedicated to automatic image quality control of the IGRT devices used in external radiotherapy: the 2D-MV mode for measuring patient position during treatment using high-energy images, the 2D-kV mode (low-energy images), and the 3D Cone Beam Computed Tomography (CBCT) MV or kV mode, used for patient positioning before treatment. Automated analysis of the Winston-Lutz test was also proposed. This test is used to evaluate the mechanical aspects of treatment machines, which are subject to additional constraints due to the extra weight of the on-board imagers. Finally, a technique for generating digital phantoms to assess the performance of the proposed software tools is described. Software tools dedicated to automatic quality control of IGRT devices reduce the time spent by the medical physics team analyzing control results by a factor of 100, while improving accuracy through objective and reproducible analysis and offering traceability through automatically generated monitoring reports and statistical studies. (author)

  8. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Full Text Available Due to the high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have developed in recent years. Nanofibers and nanosurfaces are typically produced by electrospinning method. In the production process, determination of average fiber diameter is crucial for quality assessment. Average fiber diameter is determined by manually measuring the diameters of randomly selected fibers on scanning electron microscopy (SEM) images. However, as the number of the images increases, manual fiber diameter determination becomes a tedious and time-consuming task as well as being sensitive to human errors. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved by using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation using image processing and analysis algorithms. Performance of the proposed method was tested on real data. The effectiveness of the proposed method is shown by comparing automatically and manually measured nanofiber diameter values.
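
    One way to measure diameters without isolating individual fibers, as the record proposes, is via the Euclidean distance transform: at centreline (ridge) pixels, the transform equals half the local fiber width. This is a hedged sketch of that general technique on a synthetic mask (NumPy/SciPy), not the authors' published algorithm:

```python
import numpy as np
from scipy import ndimage

def fiber_diameters(mask):
    """Estimate local fiber diameters without isolating individual fibers.
    The Euclidean distance transform gives, at each fiber pixel, the distance
    to the nearest background pixel; along the centreline this is half the
    local width, so 2*EDT at ridge pixels estimates the diameter."""
    edt = ndimage.distance_transform_edt(mask)
    ridge = (edt > 0) & (edt >= ndimage.maximum_filter(edt, size=3))
    return 2.0 * edt[ridge]

# synthetic SEM-like mask: two parallel fibers, 8 px and 6 px wide
mask = np.zeros((60, 60), dtype=bool)
mask[10:18, :] = True   # fiber 1, width 8
mask[40:46, :] = True   # fiber 2, width 6
d = fiber_diameters(mask)
```

    Averaging `d` then gives a mean diameter estimate without any per-fiber segmentation step.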

  9. Positron emission tomography: Physics, instrumentation, and image analysis

    International Nuclear Information System (INIS)

    Porenta, G.

    1994-01-01

    Positron emission tomography (PET) is a noninvasive diagnostic technique that permits reconstruction of cross-sectional images of the human body which depict the biodistribution of PET tracer substances. A large variety of physiological PET tracers, mostly based on isotopes of carbon, nitrogen, oxygen, and fluorine, is available and allows the in vivo investigation of organ perfusion, metabolic pathways and biomolecular processes in normal and diseased states. PET cameras utilize the physical characteristics of positron decay to derive quantitative measurements of tracer concentrations, a capability that has so far been elusive for conventional SPECT (single photon emission computed tomography) imaging techniques. Due to the short half-lives of most PET isotopes, an on-site cyclotron and a radiochemistry unit are necessary to provide an adequate supply of PET tracers. While operating a PET center was in the past a complex procedure restricted to a few academic centers with ample resources, PET technology has rapidly advanced in recent years and has entered the commercial nuclear medicine market. To date, the availability of compact cyclotrons with remote computer control, automated synthesis units for PET radiochemistry, high-performance PET cameras, and user-friendly analysis workstations permits installation of a clinical PET center within most nuclear medicine facilities. This review provides simple descriptions of important aspects concerning physics, instrumentation, and image analysis in PET imaging which should be understood by medical personnel involved in the clinical operation of a PET imaging center. (author)

  10. Image analysis and machine learning for detecting malaria.

    Science.gov (United States)

    Poostchi, Mahdieh; Silamut, Kamolrat; Maude, Richard J; Jaeger, Stefan; Thoma, George

    2018-04-01

    Malaria remains a major burden on global health, with roughly 200 million cases worldwide and more than 400,000 deaths per year. Besides biomedical research and political efforts, modern information technology is playing a key role in many attempts at fighting the disease. In particular, one of the barriers to a successful mortality reduction has been inadequate malaria diagnosis. To improve diagnosis, image analysis software and machine learning methods have been used to quantify parasitemia in microscopic blood slides. This article gives an overview of these techniques and discusses the current developments in image analysis and machine learning for microscopic malaria diagnosis. We organize the different approaches published in the literature according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification. Readers will find the different techniques listed in tables, with the relevant articles cited next to them, for both thin and thick blood smear images. We also discuss the latest developments in sections devoted to deep learning and smartphone technology for future malaria diagnosis. Published by Elsevier Inc.

  11. Hyperspectral imaging and quantitative analysis for prostate cancer detection

    Science.gov (United States)

    Akbari, Hamed; Halig, Luma V.; Schuster, David M.; Osunkoya, Adeboye; Master, Viraj; Nieh, Peter T.; Chen, Georgia Z.

    2012-01-01

    Abstract. Hyperspectral imaging (HSI) is an emerging modality for various medical applications. Its spectroscopic data may make it possible to noninvasively detect cancer. Quantitative analysis is often necessary in order to differentiate healthy from diseased tissue. We propose the use of an advanced image processing and classification method in order to analyze hyperspectral image data for prostate cancer detection. The spectral signatures were extracted and evaluated in both cancerous and normal tissue. Least squares support vector machines were developed and evaluated for classifying hyperspectral data in order to enhance the detection of cancer tissue. This method was used to detect prostate cancer in tumor-bearing mice and on pathology slides. Spatially resolved images were created to highlight the differences of the reflectance properties of cancer versus those of normal tissue. Preliminary results with 11 mice showed that the sensitivity and specificity of the hyperspectral image classification method are 92.8% ± 2.0% and 96.9% ± 1.3%, respectively. Therefore, this imaging method may be able to help physicians to dissect malignant regions with a safe margin and to evaluate the tumor bed after resection. This pilot study may lead to advances in the optical diagnosis of prostate cancer using HSI technology. PMID:22894488
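
    A least-squares SVM of the kind named here reduces training to solving a single linear system (the standard Suykens formulation). Below is a minimal NumPy sketch on hypothetical two-class "spectra"; the reflectance curves, kernel, and parameters are invented for illustration and are not the study's data or tuning:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of A and rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class LSSVM:
    """Minimal least-squares SVM classifier (Suykens formulation):
    training solves one bordered linear system instead of a QP."""
    def __init__(self, gamma=1.0, C=10.0):
        self.gamma, self.C = gamma, C
    def fit(self, X, y):                       # y in {-1, +1}
        n = len(y)
        Omega = np.outer(y, y) * rbf_kernel(X, X, self.gamma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:], A[1:, 0] = y, y
        A[1:, 1:] = Omega + np.eye(n) / self.C
        sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
        self.b, self.alpha, self.X, self.y = sol[0], sol[1:], X, y
        return self
    def predict(self, Xnew):
        K = rbf_kernel(Xnew, self.X, self.gamma)
        return np.sign(K @ (self.alpha * self.y) + self.b)

# hypothetical 10-band spectra: "normal" tissue brighter than "tumor"
rng = np.random.default_rng(0)
bands = np.linspace(0.0, 1.0, 10)
normal = 0.8 - 0.1 * bands + rng.normal(0, 0.03, (20, 10))
tumor = 0.4 - 0.1 * bands + rng.normal(0, 0.03, (20, 10))
X = np.vstack([normal, tumor])
y = np.r_[np.ones(20), -np.ones(20)]
acc = (LSSVM().fit(X, y).predict(X) == y).mean()
```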

  12. Analysis of RTM extended images for VTI media

    KAUST Repository

    Li, Vladimir

    2016-04-28

    Extended images obtained from reverse time migration (RTM) contain information about the accuracy of the velocity field and subsurface illumination at different incidence angles. Here, we evaluate the influence of errors in the anisotropy parameters on the shape of the residual moveout (RMO) in P-wave RTM extended images for VTI (transversely isotropic with a vertical symmetry axis) media. Using the actual spatial distribution of the zero-dip NMO velocity (Vnmo), which could be approximately estimated by conventional techniques, we analyze the extended images obtained with distorted fields of the parameters η and δ. Differential semblance optimization (DSO) and stack-power estimates are employed to study the sensitivity of focusing to the anisotropy parameters. We also build angle gathers to facilitate interpretation of the shape of RMO in the extended images. The results show that the signature of η is dip-dependent, whereas errors in δ cause defocusing only if that parameter is laterally varying. Hence, earlier results regarding the influence of η and δ on reflection moveout and migration velocity analysis remain generally valid in the extended image space for complex media. The dependence of RMO on errors in the anisotropy parameters provides essential insights for anisotropic wavefield tomography using extended images.

  13. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., cancer and its treatment. A standard histopathology slice can be easily scanned at a high resolution of, say, 200,000×200,000 pixels. These high resolution images can make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this emerging "big data" problem by utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework provides an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly-supervised learning for image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain further improvement in clustering performance.

  14. Infrared image acquisition system for vein pattern analysis

    Science.gov (United States)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Padilla-Vivanco, A.; Solís-Villarreal, J.

    2016-09-01

    The physical shape of the hand vascular distribution contains useful information that can be used for identification and authentication purposes, providing a high level of security as a biometric. Furthermore, this pattern can be used widely in health applications such as venography and venipuncture. In this paper, we analyze different IR imaging systems in order to obtain high-visibility images of the hand vein pattern. The images are acquired in the range of 400 nm to 1300 nm, using infrared and thermal cameras. The first image acquisition system uses a CCD camera and a light source with peak emission at 880 nm, obtaining the images by reflection. A second system consists only of a ThermaCAM P65 camera acquiring the infrared light naturally emanating from the hand. A method of digital image analysis is implemented using Contrast Limited Adaptive Histogram Equalization (CLAHE) to remove noise. Subsequently, adaptive thresholding and mathematical morphology operations are implemented to obtain the vein pattern distribution.
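
    The thresholding and morphology stages of such a pipeline can be sketched with SciPy. This is a hedged illustration on a synthetic near-IR image: the CLAHE step is omitted, and the function name, window size, and offset are assumptions rather than the authors' parameters:

```python
import numpy as np
from scipy import ndimage

def extract_veins(img, window=15, offset=5.0):
    """Adaptive thresholding against the local mean, then a morphological
    opening to remove isolated noise pixels (veins appear dark in NIR)."""
    local_mean = ndimage.uniform_filter(img, size=window)
    veins = img < local_mean - offset
    return ndimage.binary_opening(veins, structure=np.ones((3, 3)))

# synthetic NIR hand image: bright tissue with one dark vein, 4 px wide
rng = np.random.default_rng(1)
img = np.full((64, 64), 180.0) + rng.normal(0, 2.0, (64, 64))
img[30:34, :] = 120.0 + rng.normal(0, 2.0, (4, 64))
mask = extract_veins(img)
```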

  15. Parallel imaging: is GRAPPA a useful acquisition tool for MR imaging intended for volumetric brain analysis?

    Directory of Open Access Journals (Sweden)

    Frank Anders

    2009-08-01

    Full Text Available Abstract Background The work presented here investigates parallel imaging applied to T1-weighted high resolution imaging for use in longitudinal volumetric clinical studies involving Alzheimer's disease (AD and Mild Cognitive Impairment (MCI patients. This was in an effort to shorten acquisition times to minimise the risk of motion artefacts caused by patient discomfort and disorientation. The principle question is, "Can parallel imaging be used to acquire images at 1.5 T of sufficient quality to allow volumetric analysis of patient brains?" Methods Optimisation studies were performed on a young healthy volunteer and the selected protocol (including the use of two different parallel imaging acceleration factors was then tested on a cohort of 15 elderly volunteers including MCI and AD patients. In addition to automatic brain segmentation, hippocampus volumes were manually outlined and measured in all patients. The 15 patients were scanned on a second occasion approximately one week later using the same protocol and evaluated in the same manner to test repeatability of measurement using images acquired with the GRAPPA parallel imaging technique applied to the MPRAGE sequence. Results Intraclass correlation tests show that almost perfect agreement between repeated measurements of both segmented brain parenchyma fraction and regional measurement of hippocampi. The protocol is suitable for both global and regional volumetric measurement dementia patients. Conclusion In summary, these results indicate that parallel imaging can be used without detrimental effect to brain tissue segmentation and volumetric measurement and should be considered for both clinical and research studies where longitudinal measurements of brain tissue volumes are of interest.

  16. Automated regional behavioral analysis for human brain images.

    Science.gov (United States)

    Lancaster, Jack L; Laird, Angela R; Eickhoff, Simon B; Martinez, Michael J; Fox, P Mickle; Fox, Peter T

    2012-01-01

    Behavioral categories of functional imaging experiments along with standardized brain coordinates of associated activations were used to develop a method to automate regional behavioral analysis of human brain images. Behavioral and coordinate data were taken from the BrainMap database (http://www.brainmap.org/), which documents over 20 years of published functional brain imaging studies. A brain region of interest (ROI) for behavioral analysis can be defined in functional images, anatomical images or brain atlases, if images are spatially normalized to MNI or Talairach standards. Results of behavioral analysis are presented for each of BrainMap's 51 behavioral sub-domains spanning five behavioral domains (Action, Cognition, Emotion, Interoception, and Perception). For each behavioral sub-domain the fraction of coordinates falling within the ROI was computed and compared with the fraction expected if coordinates for the behavior were not clustered, i.e., uniformly distributed. When the difference between these fractions is large, behavioral association is indicated. A z-score ≥ 3.0 was used to designate statistically significant behavioral association. The left-right symmetry of ~100K activation foci was evaluated by hemisphere, lobe, and by behavioral sub-domain. Results highlighted the classic left-side dominance for language while asymmetry for most sub-domains (~75%) was not statistically significant. Use scenarios were presented for anatomical ROIs from the Harvard-Oxford cortical (HOC) brain atlas, functional ROIs from statistical parametric maps in a TMS-PET study, a task-based fMRI study, and ROIs from the ten "major representative" functional networks in a previously published resting state fMRI study. Statistically significant behavioral findings for these use scenarios were consistent with published behaviors for associated anatomical and functional regions.
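
    The test described (observed fraction of coordinates in an ROI against the fraction expected under a uniform spatial distribution, flagged when z ≥ 3.0) can be read as a one-sample proportion z-test. A small sketch under that assumption, with hypothetical counts:

```python
import math

def behavioral_association_z(n_in_roi, n_total, roi_fraction):
    """z-score comparing the observed fraction of activation coordinates
    inside an ROI with the fraction expected if the coordinates were
    uniformly distributed (one-sample proportion test)."""
    p_hat = n_in_roi / n_total
    se = math.sqrt(roi_fraction * (1.0 - roi_fraction) / n_total)
    return (p_hat - roi_fraction) / se

# hypothetical sub-domain: 40 of 500 foci fall in an ROI covering 2% of volume
z = behavioral_association_z(40, 500, 0.02)
# any z >= 3.0 would be designated a statistically significant association
```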

  17. Spectral analysis of mammographic images using a multitaper method

    International Nuclear Information System (INIS)

    Wu Gang; Mainprize, James G.; Yaffe, Martin J.

    2012-01-01

    Purpose: Power spectral analysis of radiographic images is conventionally performed using a windowed overlapping averaging periodogram. This study describes an alternative approach using a multitaper technique and compares its performance with that of the standard method. This tool will be valuable for power spectrum estimation of images whose content deviates significantly from uniform white noise. The performance of the multitaper approach will be evaluated in terms of spectral stability, variance reduction, bias, and frequency precision. The ultimate goal is the development of a useful tool for image quality assurance. Methods: A multitaper approach uses successive data windows of increasing order. This mitigates spectral leakage, allowing one to calculate a reduced-variance power spectrum. The multitaper approach will be compared with the conventional power spectrum method in several typical situations, including noise power spectra (NPS) measurements of simulated projection images of a uniform phantom, NPS measurement of real detector images of a uniform phantom for two clinical digital mammography systems, and the estimation of the anatomic noise in mammographic images (simulated images and clinical mammograms). Results: Examination of spectrum variance versus frequency resolution and bias indicates that the multitaper approach is superior to the conventional single-taper methods in the prevention of spectral leakage and in variance reduction. More than four times finer frequency precision can be achieved with equivalent or lower variance and bias. Conclusions: Without any shortening of the image data length, the bias is smaller and the frequency resolution is higher with the multitaper method, and the need to compromise in the choice of region-of-interest size to balance the reduction of variance against the loss of frequency resolution is largely eliminated.
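
    A multitaper estimate in the sense described, successive orthogonal data windows whose periodograms are averaged to cut variance, can be sketched in one dimension with SciPy's DPSS (Slepian) windows. The taper parameters below are illustrative choices, not those of the study:

```python
import numpy as np
from scipy.signal import windows

def multitaper_psd(x, nw=4.0, k=7):
    """Multitaper PSD: average the periodograms obtained with K orthogonal
    DPSS (Slepian) tapers, trading frequency resolution for lower variance."""
    tapers = windows.dpss(len(x), nw, Kmax=k)   # shape (k, n), unit energy
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=1024)                       # white noise: flat spectrum
mt = multitaper_psd(x)
w = np.hanning(1024)
single = np.abs(np.fft.rfft(x * w / np.linalg.norm(w))) ** 2

def rel_var(p):
    # variance of the estimate relative to its squared mean
    return p.var() / p.mean() ** 2
```

    For white noise the single-taper periodogram has relative variance near 1 at each frequency, while averaging K tapers reduces it by roughly a factor of K.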

  18. Structural Image Analysis of the Brain in Neuropsychology Using Magnetic Resonance Imaging (MRI) Techniques.

    Science.gov (United States)

    Bigler, Erin D

    2015-09-01

    Magnetic resonance imaging (MRI) of the brain provides exceptional image quality for visualization and neuroanatomical classification of brain structure. A variety of image analysis techniques provide both qualitative as well as quantitative methods to relate brain structure with neuropsychological outcome and are reviewed herein. Of particular importance are more automated methods that permit analysis of a broad spectrum of anatomical measures including volume, thickness and shape. The challenge for neuropsychology is which metric to use, for which disorder and the timing of when image analysis methods are applied to assess brain structure and pathology. A basic overview is provided as to the anatomical and pathoanatomical relations of different MRI sequences in assessing normal and abnormal findings. Some interpretive guidelines are offered including factors related to similarity and symmetry of typical brain development along with size-normalcy features of brain anatomy related to function. The review concludes with a detailed example of various quantitative techniques applied to analyzing brain structure for neuropsychological outcome studies in traumatic brain injury.

  19. Proceedings of the Second Annual Symposium on Mathematical Pattern Recognition and Image Analysis Program

    Science.gov (United States)

    Guseman, L. F., Jr. (Principal Investigator)

    1984-01-01

    Several papers addressing image analysis and pattern recognition techniques for satellite imagery are presented. Texture classification, image rectification and registration, spatial parameter estimation, and surface fitting are discussed.

  20. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to a subjective approach, it can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.
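
    The discrimination step, linear discriminant analysis on RGB plus hue descriptors, can be illustrated with a two-class Fisher discriminant written directly in NumPy. The kernel colour statistics below are invented for illustration; only the method mirrors the record:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher linear discriminant: w = Sw^-1 (m1 - m0),
    with the decision threshold at the midpoint of the projected means."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    return w, w @ (m0 + m1) / 2     # predict "damaged" if x @ w > threshold

# hypothetical RGBH descriptors (R, G, B means plus hue in degrees):
# Fusarium-damaged kernels assumed paler than healthy ones
rng = np.random.default_rng(2)
healthy = rng.normal([120, 90, 60, 25], [8, 8, 8, 4], (50, 4))
damaged = rng.normal([180, 150, 130, 15], [8, 8, 8, 4], (50, 4))
w, c = fisher_lda(healthy, damaged)
pred = np.vstack([healthy, damaged]) @ w > c
acc = (pred == np.r_[np.zeros(50, bool), np.ones(50, bool)]).mean()
```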

  1. Image segmentation and particles classification using texture analysis method

    Directory of Open Access Journals (Sweden)

    Mayar Aly Atteya

    Full Text Available Introduction: Ingredients of oily fish include a large amount of polyunsaturated fatty acids, which are important elements in various metabolic processes of humans and have also been used to prevent diseases. However, in an attempt to reduce cost, recent developments are starting to replace fish oil ingredients with products of microalgae, which also produce polyunsaturated fatty acids. To do so, it is important to closely monitor morphological changes in algae cells and monitor their age in order to achieve the best results. This paper aims to describe an advanced vision-based system to automatically detect, classify, and track organic cells using a recently developed SOPAT-System (Smart On-line Particle Analysis Technology), a photo-optical image acquisition device combined with innovative image analysis software. Methods: The proposed method includes image de-noising, binarization and enhancement, as well as object recognition, localization and classification based on the analysis of particles' size and texture. Results: The method allowed for correctly computing cell size for each particle separately. By computing an area histogram for the input images (1 h, 18 h, and 42 h), the variation could be observed, showing a clear increase in cell size. Conclusion: The proposed method allows algae particles to be correctly identified with accuracies up to 99% and classified correctly with accuracies up to 100%.

  2. Quantitative imaging analysis of posterior fossa ependymoma location in children.

    Science.gov (United States)

    Sabin, Noah D; Merchant, Thomas E; Li, Xingyu; Li, Yimei; Klimo, Paul; Boop, Frederick A; Ellison, David W; Ogg, Robert J

    2016-08-01

    Imaging descriptions of posterior fossa ependymoma in children have focused on magnetic resonance imaging (MRI) signal and local anatomic relationships with imaging location only recently used to classify these neoplasms. We developed a quantitative method for analyzing the location of ependymoma in the posterior fossa, tested its effectiveness in distinguishing groups of tumors, and examined potential associations of distinct tumor groups with treatment and prognostic factors. Pre-operative MRI examinations of the brain for 38 children with histopathologically proven posterior fossa ependymoma were analyzed. Tumor margin contours and anatomic landmarks were manually marked and used to calculate the centroid of each tumor. Landmarks were used to calculate a transformation to align, scale, and rotate each patient's image coordinates to a common coordinate space. Hierarchical cluster analysis of the location and morphological variables was performed to detect multivariate patterns in tumor characteristics. The ependymomas were also characterized as "central" or "lateral" based on published radiological criteria. Therapeutic details and demographic, recurrence, and survival information were obtained from medical records and analyzed with the tumor location and morphology to identify prognostic tumor characteristics. Cluster analysis yielded two distinct tumor groups based on centroid location. The cluster groups were associated with differences in PFS (p = .044), "central" vs. "lateral" radiological designation (p = .035), and marginally associated with multiple operative interventions (p = .064). Posterior fossa ependymoma can be objectively classified based on quantitative analysis of tumor location, and these classifications are associated with prognostic and treatment factors.
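
    The quantitative pipeline, a centroid computed from manually marked margin points followed by hierarchical clustering of centroid locations into two groups, can be sketched with SciPy. The coordinates below are hypothetical stand-ins for the aligned common coordinate space:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def contour_centroid(points):
    """Centroid of a tumour margin contour (mean of the marked points)."""
    return np.asarray(points, dtype=float).mean(axis=0)

c0 = contour_centroid([[0, 0], [1, 0], [1, 1], [0, 1]])  # unit square -> centre

# hypothetical tumour centroids after alignment to a common coordinate space:
# a midline ("central") group and an off-midline ("lateral") group
rng = np.random.default_rng(3)
central = rng.normal([0.0, -0.2, -0.1], 0.05, (12, 3))
lateral = rng.normal([0.6, -0.1, -0.15], 0.05, (10, 3))
centroids = np.vstack([central, lateral])
labels = fcluster(linkage(centroids, method="ward"), t=2, criterion="maxclust")
```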

  3. Registration and analysis for images couple : application to mammograms

    OpenAIRE

    Boucher, Arnaud

    2014-01-01

    Advisor: Nicole Vincent. Date and location of PhD thesis defense: 10 January 2013, University of Paris Descartes. In this thesis, the problem addressed is the development of a computer-aided diagnosis system (CAD) based on conjoint analysis of several images, and therefore on the comparison of these medical images. The particularity of our approach is to look for evolutions or aberrant new tissues in a given set, rather than attempting to characterize, with a strong a priori, the type of ti...

  4. Vibration factors impact analysis on aerial film camera imaging quality

    Science.gov (United States)

    Xie, Jun; Han, Wei; Xu, Zhonglin; Tan, Haifeng; Yang, Mingquan

    2017-08-01

    An aerial film camera can acquire ground-target image information effectively, but changes in aircraft attitude, film characteristics, and the operation of the camera's internal systems can cause vibration that greatly degrades image quality. This paper presents a design basis for a vibration-mitigation stabilized platform based on the vibration characteristics of the aerial film camera, and shows that such a platform can support the camera in meeting the shooting demands of multi-angle and large-scale imaging. Given the technical characteristics of stabilized platforms, the development directions are higher precision, greater agility, miniaturization, and low power consumption.

  5. Diffraction imaging and velocity analysis using oriented velocity continuation

    KAUST Repository

    Decker, Luke

    2014-08-05

    We perform seismic diffraction imaging and velocity analysis by separating diffractions from specular reflections and decomposing them into slope components. We image slope components using extrapolation in migration velocity in time-space-slope coordinates. The extrapolation is described by a convection-type partial differential equation and implemented efficiently in the Fourier domain. Synthetic and field data experiments show that the proposed algorithm is able to detect accurate time-migration velocities by automatically measuring the flatness of events in dip-angle gathers.

  6. Sparse Superpixel Unmixing for Exploratory Analysis of CRISM Hyperspectral Images

    Science.gov (United States)

    Thompson, David R.; Castano, Rebecca; Gilmore, Martha S.

    2009-01-01

    Fast automated analysis of hyperspectral imagery can inform observation planning and tactical decisions during planetary exploration. Products such as mineralogical maps can focus analysts' attention on areas of interest and assist data mining in large hyperspectral catalogs. In this work, sparse spectral unmixing drafts mineral abundance maps with Compact Reconnaissance Imaging Spectrometer (CRISM) images from the Mars Reconnaissance Orbiter. We demonstrate a novel "superpixel" segmentation strategy enabling efficient unmixing in an interactive session. Tests correlate automatic unmixing results based on redundant spectral libraries against hand-tuned summary products currently in use by CRISM researchers.
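
    Sparse unmixing against a spectral library can be approximated by nonnegative least squares, which tends to return only a few nonzero abundances. A minimal sketch with an invented 4-band, 3-endmember library (the record's CRISM libraries are of course far larger, and its sparsity machinery more elaborate):

```python
import numpy as np
from scipy.optimize import nnls

# hypothetical 4-band reflectance library, one column per mineral endmember
library = np.array([[0.9, 0.2, 0.4],
                    [0.7, 0.3, 0.5],
                    [0.2, 0.8, 0.5],
                    [0.1, 0.9, 0.6]])

# superpixel mean spectrum: a 70/30 mix of endmembers 0 and 1, endmember 2 absent
pixel = library @ np.array([0.7, 0.3, 0.0])

# nonnegative least squares recovers the abundances and leaves no. 2 at zero
abundances, residual = nnls(library, pixel)
```

    Averaging spectra over superpixels before unmixing, as the record describes, shrinks the number of such solves from millions of pixels to a few thousand segments.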

  7. Image Chunking: Defining Spatial Building Blocks for Scene Analysis.

    Science.gov (United States)

    1987-04-01

    Technical Report 980, MIT Artificial Intelligence Laboratory: Image Chunking: Defining Spatial Building Blocks for Scene Analysis, by James V. Mahoney. Contract numbers DACA76-85-C-0010 and N00014-85-K-0124.

  8. Statistical analysis of muscle contraction based on MR images

    International Nuclear Information System (INIS)

    Horio, Hideyuki; Kuroda, Yoshihiro; Imura, Masataka; Oshiro, Osamu

    2011-01-01

    The purpose of this study was to distinguish changes in MR signals between relaxation and contraction of muscles. First, MR images were acquired in the relaxation and contraction states: the subject clasped his hands in the relaxation state and unclasped them in the contraction state. Next, the images were segmented using Gaussian mixture distributions and the expectation-maximization (EM) algorithm. Finally, we evaluated the statistical values obtained from the fitted mixture distributions. As a result, the mixing coefficients differed between relaxation and contraction. The experimental results indicated that the proposed analysis has the potential to discriminate between the two states. (author)
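
    Segmentation with Gaussian mixtures fitted by EM, followed by comparison of the mixing coefficients, can be sketched in one dimension with plain NumPy. The signal distributions below are hypothetical; the point is only that the mixing coefficient of the bright component changes between the two states:

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture.
    Returns mixing coefficients, means, and standard deviations."""
    mu = np.percentile(x, [10, 90])        # crude two-mode initialisation
    sd = np.full(2, x.std())
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        r = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate the mixture parameters
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sd

# hypothetical MR signal samples: contraction recruits more high-signal voxels
rng = np.random.default_rng(4)
relaxed = np.r_[rng.normal(100, 10, 800), rng.normal(200, 10, 200)]
contracted = np.r_[rng.normal(100, 10, 500), rng.normal(200, 10, 500)]
pi_r, mu_r, _ = em_gmm_1d(relaxed)
pi_c, mu_c, _ = em_gmm_1d(contracted)
bright_r = pi_r[np.argmax(mu_r)]           # bright-component weight, relaxed
bright_c = pi_c[np.argmax(mu_c)]           # bright-component weight, contracted
```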

  9. Quantitative methods for the analysis of electron microscope images

    DEFF Research Database (Denmark)

    Skands, Peter Ulrik Vallø

    1996-01-01

    The methods are demonstrated in a number of work cases, which mainly fall into three categories: (i) description of coarse-scale measures to quantify surface structure or texture (topography); (ii) characterization of fracture surfaces in steels (fractography); (iii) grain boundary segmentation in sintered ceramics. The theoretical foundation of the thesis falls in the areas of: (1) mathematical morphology; (2) distance transforms and applications; and (3) fractal geometry. Image analysis opens in general the possibility of a quantitative and statistically well-founded measurement of digital microscope images. Herein also lie the conditions...

  10. Reduction and analysis techniques for infrared imaging data

    Science.gov (United States)

    Mccaughrean, Mark

    1989-01-01

    Infrared detector arrays are becoming increasingly available to the astronomy community, with a number of array cameras already in use at national observatories, and others under development at many institutions. As the detector technology and imaging instruments grow more sophisticated, more attention is focussed on the business of turning raw data into scientifically significant information. Turning pictures into papers, or equivalently, astronomy into astrophysics, both accurately and efficiently, is discussed. Also discussed are some of the factors that can be considered at each of three major stages; acquisition, reduction, and analysis, concentrating in particular on several of the questions most relevant to the techniques currently applied to near infrared imaging.

  11. Occupancy Analysis of Sports Arenas Using Thermal Imaging

    DEFF Research Database (Denmark)

    Gade, Rikke; Jørgensen, Anders; Moeslund, Thomas B.

    2012-01-01

    This paper presents a system for automatic analysis of the occupancy of sports arenas. By using a thermal camera for image capturing, the number of persons and their location on the court are found without violating any privacy issues. The images are binarised with an automatic threshold method. Reflections due to shiny surfaces are eliminated by analysing symmetric patterns. Occlusions are dealt with through a concavity analysis of the binary regions. The system is tested in five different sports arenas, for more than three full weeks altogether. These tests showed that after a short...
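
    The abstract does not say which automatic threshold method is used; Otsu's method is a common choice for bimodal thermal frames (warm bodies against a cooler court), so the sketch below uses it as a stand-in assumption:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(image, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()                    # probability mass per bin
    w0 = np.cumsum(p)                        # weight of the "background" class
    w1 = 1 - w0
    mu0 = np.cumsum(p * centers) / np.where(w0 > 0, w0, 1)
    mu_total = (p * centers).sum()
    mu1 = (mu_total - np.cumsum(p * centers)) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance per candidate
    return centers[np.argmax(between)]

rng = np.random.default_rng(2)
# Synthetic thermal frame: cool floor around 20 degrees, one warm "person" around 35.
frame = rng.normal(20, 1.5, size=(120, 160))
frame[40:60, 50:60] = rng.normal(35, 1.0, size=(20, 10))
t = otsu_threshold(frame)
binary = frame > t                           # binarised occupancy mask
print(round(t, 1))  # threshold falls between the two temperature modes
```

    On the synthetic frame the resulting mask isolates the warm region, which is the starting point for the reflection and occlusion handling the paper describes.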

  12. Software Tools for the Analysis of Functional Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Mehdi Behroozi

    2012-09-01

    Full Text Available Functional magnetic resonance imaging (fMRI) has become the most popular method for imaging of brain functions. Currently, there is a large variety of software packages for the analysis of fMRI data, each providing many features for users. Since there is no single package that can provide all the necessary analyses for fMRI data, it is helpful to know the features of each software package. In this paper, several software tools are introduced and evaluated in order to compare their functionality and features. The description of each program is discussed and summarized.

  14. Extracted image analysis: a technique for deciphering mediated portrayals.

    Science.gov (United States)

    Berg, D H; Coutts, L B

    1995-01-01

    A technique for analyzing print media that we have developed as a consequence of our interest in the portrayal of women in menstrual product advertising is reported. The technique, which we call extracted image analysis, involves a unique application of grounded theory and the concomitant heuristic use of the concept of ideal type (Weber, 1958). It provides a means of heuristically conceptualizing the answer to a variant of the "What is going on here?" question asked in analysis of print communication, that is, "Who is being portrayed/addressed here?" Extracted image analysis involves the use of grounded theory to develop ideal typologies. Because the technique re-constructs the ideal types embedded in a communication, it possesses considerable potential as a means of identifying the profiles of members of identifiable groups held by the producers of the directed messages. In addition, the analysis of such portrayals over time would be particularly well suited to extracted image analysis. A number of other possible applications are also suggested.

  15. Whole slide image with image analysis of atypical bile duct brushing: Quantitative features predictive of malignancy.

    Science.gov (United States)

    Collins, Brian T; Weimholt, R Cody

    2015-01-01

    Whole slide images (WSIs) involve digitally capturing glass slides for microscopic computer-based viewing, and these are amenable to quantitative image analysis. Bile duct (BD) brushing can show morphologic features that are categorized as indeterminate for malignancy. The study aims to evaluate quantitative morphologic features of atypical categories of BD brushing by WSI analysis for the identification of criteria predictive of malignancy. Over a 3-year period, BD brush specimens with indeterminate diagnostic categorization (atypical to suspicious) were subjected to WSI analysis. Ten well-visualized groups with atypical morphologic features were selected per case, and quantitative analysis was performed for group area, individual nuclear area, the number of nuclei per group, N:C ratio and nuclear size differential. There were 28 cases identified: 17 atypical and 11 suspicious. The average nuclear area was 63.7 µm² for atypical and 80.1 µm² for suspicious (difference +16.4 µm²; P = 0.002). The nuclear size differential was 69.7 µm² for atypical and 88.4 µm² for suspicious (difference +18.8 µm²; P = 0.009). An average nuclear area >70 µm² had a 3.2 risk ratio for suspicious categorization. The quantitative findings as measured by image analysis on WSIs showed that cases categorized as suspicious had more nuclear size pleomorphism (+18.8 µm²) and larger nuclei (+16.4 µm²) than those categorized as atypical. WSI with morphologic image analysis can demonstrate quantitative, statistically significant differences between atypical and suspicious BD brushings and provide objective criteria that support the diagnosis of carcinoma.

  16. Mammographic quantitative image analysis and biologic image composition for breast lesion characterization and classification

    Energy Technology Data Exchange (ETDEWEB)

    Drukker, Karen, E-mail: kdrukker@uchicago.edu; Giger, Maryellen L.; Li, Hui [Department of Radiology, University of Chicago, Chicago, Illinois 60637 (United States); Duewer, Fred; Malkov, Serghei; Joe, Bonnie; Kerlikowske, Karla; Shepherd, John A. [Radiology Department, University of California, San Francisco, California 94143 (United States); Flowers, Chris I. [Department of Radiology, University of South Florida, Tampa, Florida 33612 (United States); Drukteinis, Jennifer S. [Department of Radiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida 33612 (United States)

    2014-03-15

    Purpose: To investigate whether biologic image composition of mammographic lesions can improve upon existing mammographic quantitative image analysis (QIA) in estimating the probability of malignancy. Methods: The study population consisted of 45 breast lesions imaged with dual-energy mammography prior to breast biopsy with final diagnosis resulting in 10 invasive ductal carcinomas, 5 ductal carcinomas in situ, 11 fibroadenomas, and 19 other benign diagnoses. Analysis was threefold: (1) the raw low-energy mammographic images were analyzed with an established in-house QIA method, “QIA alone”; (2) the three-compartment breast (3CB) composition measures—derived from the dual-energy mammography—of water, lipid, and protein thickness were assessed, “3CB alone”; and (3) information from QIA and 3CB was combined, “QIA + 3CB.” Analysis was initiated from radiologist-indicated lesion centers and was otherwise fully automated. Steps of the QIA and 3CB methods were lesion segmentation, characterization, and subsequent classification for malignancy in leave-one-case-out cross-validation. Performance assessment included box plots, Bland–Altman plots, and receiver operating characteristic (ROC) analysis. Results: The area under the ROC curve (AUC) for distinguishing between benign and malignant lesions (invasive and DCIS) was 0.81 (standard error 0.07) for the “QIA alone” method, 0.72 (0.07) for the “3CB alone” method, and 0.86 (0.04) for “QIA + 3CB” combined. The difference in AUC was 0.043 between “QIA + 3CB” and “QIA alone” but failed to reach statistical significance (95% confidence interval [−0.17 to +0.26]). Conclusions: In this pilot study analyzing the new 3CB imaging modality, knowledge of the composition of breast lesions and their periphery appeared additive in combination with existing mammographic QIA methods for the distinction between different benign and malignant lesion types.
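
    The ROC analysis above summarizes classifier output with the area under the curve, which can be computed directly from scores through its equivalence with the Mann-Whitney U statistic: the probability that a randomly chosen malignant case scores higher than a randomly chosen benign one. A small sketch with made-up scores (not the study's data):

```python
import numpy as np

def auc(scores_neg, scores_pos):
    """AUC as P(positive score > negative score), counting ties as one half."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

# Hypothetical malignancy scores for benign and malignant lesions.
benign = np.array([0.1, 0.2, 0.25, 0.4, 0.5])
malignant = np.array([0.3, 0.6, 0.7, 0.9])
print(auc(benign, malignant))  # 0.9
```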

  17. Multispectral magnetic resonance image analysis using principal component and linear discriminant analysis.

    NARCIS (Netherlands)

    Witjes, H.; Rijpkema, M.J.P.; Graaf, M. van der; Melssen, W.J.; Heerschap, A.; Buydens, L.M.C.

    2003-01-01

    PURPOSE: To explore the possibilities of combining multispectral magnetic resonance (MR) images of different patients within one data matrix. MATERIALS AND METHODS: Principal component and linear discriminant analysis were applied to multispectral MR images of 12 patients with different brain tumors.
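
    The two steps named in the title can be sketched on synthetic multispectral "pixels": PCA compresses the spectral channels, then a Fisher discriminant separates two hypothetical tissue classes. Everything below is illustrative (the study pools real multi-patient MR data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic multispectral pixels: 6 "channels", two classes with different means.
a = rng.normal(0.0, 1.0, size=(300, 6)) + np.array([1, 0, 2, 0, 1, 0])
b = rng.normal(0.0, 1.0, size=(300, 6)) + np.array([0, 1, 0, 2, 0, 1])
X = np.vstack([a, b])

# PCA: project the centered data onto the top-2 principal components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                       # (600, 2) PCA scores

# Fisher LDA on the PCA scores: w = Sw^{-1} (m1 - m0).
s0, s1 = scores[:300], scores[300:]
m0, m1 = s0.mean(axis=0), s1.mean(axis=0)
Sw = np.cov(s0, rowvar=False) + np.cov(s1, rowvar=False)  # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)

# Classify by which side of the midpoint each projected pixel falls.
proj = scores @ w
midpoint = (m0 @ w + m1 @ w) / 2
pred = proj > midpoint
accuracy = (pred == np.repeat([False, True], 300)).mean()
print(accuracy)  # well above chance for these separated classes
```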

  18. Nondestructive water imaging by neutron beam analysis in living plants

    International Nuclear Information System (INIS)

    Nakanishi, T.M.; Matsubayashi, M.

    1997-01-01

    Analysis of biological activity in intact cells or tissues is essential to understand many life processes. Techniques for these in vivo measurements have not been well developed. We present here a nondestructive method to image water in living plants using a neutron beam. This technique provides the highest resolution for water in tissue yet obtainable. With high specificity to water, this neutron beam technique images water movement in seeds or in roots imbedded in soil, as well as in wood and meristems during development. The resolution of the image attainable now is about 15 μm. We also describe how this new technique will allow new investigations in the field of plant research. (author)

  19. Imaging hydrated microbial extracellular polymers: Comparative analysis by electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Dohnalkova, A.C.; Marshall, M. J.; Arey, B. W.; Williams, K. H.; Buck, E. C.; Fredrickson, J. K.

    2011-01-01

    Microbe-mineral and -metal interactions represent a major intersection between the biosphere and geosphere but require high-resolution imaging and analytical tools for investigating microscale associations. Electron microscopy has been used extensively for geomicrobial investigations and although used bona fide, the traditional methods of sample preparation do not preserve the native morphology of microbiological components, especially extracellular polymers. Herein, we present a direct comparative analysis of microbial interactions using conventional electron microscopy approaches of imaging at room temperature and a suite of cryogenic electron microscopy methods providing imaging in the close-to-natural hydrated state. In situ, we observed an irreversible transformation of the hydrated bacterial extracellular polymers during the traditional dehydration-based sample preparation that resulted in their collapse into filamentous structures. Dehydration-induced polymer collapse can lead to inaccurate spatial relationships and hence could subsequently affect conclusions regarding nature of interactions between microbial extracellular polymers and their environment.

  20. Cluster Method Analysis of K. S. C. Image

    Science.gov (United States)

    Rodriguez, Joe, Jr.; Desai, M.

    1997-01-01

    Information obtained from satellite-based systems has moved to the forefront as a method for the identification of many land cover types. Identification of different land features through remote sensing is an effective tool for regional and global assessment of geometric characteristics. Classification data acquired from remote sensing images have a wide variety of applications. In particular, analysis of remote sensing images has special applications in the classification of various types of vegetation. Results obtained from classification studies of a particular area or region serve towards a greater understanding of what parameters (ecological, temporal, etc.) affect the region being analyzed. In this paper, we make a distinction between supervised and unsupervised classification approaches, although focus is given to the unsupervised classification method, using 1987 Thematic Mapper (TM) images of Kennedy Space Center.
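
    Unsupervised classification of multiband pixels is typically done with a clustering algorithm such as k-means (the paper does not name its algorithm, so k-means here is an assumption). A minimal sketch on synthetic two-band "pixels" standing in for TM bands:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means: alternate nearest-center assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to the nearest cluster center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels (keep it if empty).
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

rng = np.random.default_rng(4)
# Two synthetic land-cover classes in a two-band feature space.
water = rng.normal([20, 10], 2, size=(200, 2))
vegetation = rng.normal([60, 80], 2, size=(200, 2))
X = np.vstack([water, vegetation])
labels, centers = kmeans(X, k=2)
# Each true class should end up almost entirely in one cluster.
purity = max((labels[:200] == 0).mean(), (labels[:200] == 1).mean())
print(purity)
```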

  1. Plant phenotyping: from bean weighing to image analysis.

    Science.gov (United States)

    Walter, Achim; Liebisch, Frank; Hund, Andreas

    2015-01-01

    Plant phenotyping refers to a quantitative description of the plant's anatomical, ontogenetical, physiological and biochemical properties. Today, rapid developments are taking place in the field of non-destructive, image-analysis-based phenotyping that allow for a characterization of plant traits at high throughput. During the last decade, the field of image-based phenotyping has broadened its focus from the initial characterization of single-plant traits in controlled conditions towards 'real-life' applications of robust field techniques in plant plots and canopies. An important component of successful phenotyping approaches is the holistic characterization of plant performance that can be achieved with several methodologies, ranging from multispectral image analyses via thermographic analyses to growth measurements, also taking root phenotypes into account.

  2. Crowdsourcing and Automated Retinal Image Analysis for Diabetic Retinopathy.

    Science.gov (United States)

    Mudie, Lucy I; Wang, Xueyang; Friedman, David S; Brady, Christopher J

    2017-09-23

    As the number of people with diabetic retinopathy (DR) in the USA is expected to increase threefold by 2050, the need to reduce health care costs associated with screening for this treatable disease is ever present. Crowdsourcing and automated retinal image analysis (ARIA) are two areas where new technology has been applied to reduce costs in screening for DR. This paper reviews the current literature surrounding these new technologies. Crowdsourcing has high sensitivity for normal vs abnormal images; however, when multiple categories for severity of DR are added, specificity is reduced. ARIAs have higher sensitivity and specificity, and some commercial ARIA programs are already in use. Deep learning enhanced ARIAs appear to offer even more improvement in ARIA grading accuracy. The utilization of crowdsourcing and ARIAs may be a key to reducing the time and cost burden of processing images from DR screening.

  3. Progress on retinal image analysis for age related macular degeneration.

    Science.gov (United States)

    Kanagasingam, Yogesan; Bhuiyan, Alauddin; Abràmoff, Michael D; Smith, R Theodore; Goldschmidt, Leonard; Wong, Tien Y

    2014-01-01

    Age-related macular degeneration (AMD) is the leading cause of vision loss in those over the age of 50 years in developed countries. The number is expected to increase ∼1.5-fold over the next ten years due to the aging population. One of the main measures of AMD severity is the analysis of drusen, pigmentary abnormalities, geographic atrophy (GA) and choroidal neovascularization (CNV) from imaging based on color fundus photographs, optical coherence tomography (OCT) and other imaging modalities. Each of these imaging modalities has strengths and weaknesses for extracting individual AMD pathology, and different imaging techniques are used in combination for capturing and/or quantification of different pathologies. Current dry AMD treatments cannot cure or reverse vision loss. However, the Age-Related Eye Disease Study (AREDS) showed that specific anti-oxidant vitamin supplementation reduces the risk of progression from intermediate stages (defined as the presence of either many medium-sized drusen or one or more large drusen) to late AMD, which allows for preventative strategies in properly identified patients. Thus identification of people with early stage AMD is important to design and implement preventative strategies for late AMD, and determine their cost-effectiveness. A mass screening facility with teleophthalmology or telemedicine in combination with computer-aided analysis for large rural-based communities may identify more individuals suitable for early stage AMD prevention. In this review, we discuss different imaging modalities that are currently being considered or used for screening AMD. In addition, we look into various automated and semi-automated computer-aided grading systems and related retinal image analysis techniques for drusen, geographic atrophy and choroidal neovascularization detection and/or quantification for measurement of AMD severity using these imaging modalities. We also review the existing telemedicine studies which...

  4. Image analysis for beef quality prediction from serial scan ultrasound images

    Science.gov (United States)

    Zhang, Hui L.; Wilson, Doyle E.; Rouse, Gene H.; Izquierdo, Mercedes M.

    1995-01-01

    The prediction of intramuscular fat (or marbling) in live beef animals using serially scanned ultrasound images was studied in this paper. Image analysis, both in the gray-scale intensity domain and in the frequency-spectrum domain, was used to extract image features of tissue characteristics and obtain useful parameters for the prediction models. First-, second- and third-order multivariable prediction models were developed from randomly selected data sets and tested on the remaining data sets. Comparisons of prediction results between serially scanned images and only the final scans showed a clear improvement in prediction accuracy: the correlations between predicted and actual percent fat increased from .68 to .80 and from .72 to .76 for the two groups of data, respectively; the R-squared values increased from .65 to .68 and from .68 to .72; and the root-mean-square errors decreased from 1.70 to 1.52 and from 1.22 to 1.12. This study indicates that serially obtained ultrasound images from live beef animals have good potential for improving the prediction accuracy of percent fat.

  5. Public-domain software for root image analysis

    Directory of Open Access Journals (Sweden)

    Mirian Cristina Gomes Costa

    2014-10-01

    Full Text Available In the search for high efficiency in root studies, computational systems have been developed to analyze digital images. ImageJ and Safira are public-domain systems that may be used for image analysis of washed roots. However, differences between root properties measured with ImageJ and Safira are suspected. This study compared values of root length and surface area obtained with these public-domain systems against values obtained by a reference method. Root samples were collected in a banana plantation in an area of a shallower Typic Carbonatic Haplic Cambisol (CXk) and an area of a deeper Typic Haplic Ta Eutrophic Cambisol (CXve), at six depths in five replications. Root images were digitized and the systems ImageJ and Safira used to determine root length and surface area. The line-intersect method modified by Tennant was used as reference; values of root length and surface area measured with the different systems were analyzed by Pearson's correlation coefficient and compared by confidence interval and t-test. Both ImageJ and Safira had positive correlation coefficients with the reference method for root length and surface area data in CXk and CXve. The correlation coefficient ranged from 0.54 to 0.80, with the lowest value observed for ImageJ in the measurement of surface area of roots sampled in CXve. The CI (95%) revealed that root length measurements with Safira did not differ from those with the reference method in CXk (-77.3 to 244.0 mm). Regarding surface area measurements, Safira did not differ from the reference method for samples collected in CXk (-530.6 to 565.8 mm²) as well as in CXve (-4231 to 612.1 mm²). However, measurements with ImageJ were different from those obtained by the reference method, underestimating length and surface area in samples collected in CXk and CXve. Both ImageJ and Safira allow an identification of increases or decreases in root length and surface area. However, Safira results for root length and surface area are...

  6. Automated Acquisition and Analysis of Digital Radiographic Images

    International Nuclear Information System (INIS)

    Poland, R.

    1999-01-01

    Engineers at the Savannah River Technology Center have designed, built, and installed a fully automated small field-of-view, lens-coupled, digital radiography imaging system. The system is installed in one of the Savannah River Site's production facilities to be used for the evaluation of production components. Custom software routines developed for the system automatically acquire, enhance, and diagnostically evaluate critical geometric features of various components that have been captured radiographically. Resolution of the digital radiograms and accuracy of the acquired measurements approach 0.001 inches. To date, there has been zero deviation in measurement repeatability. The automated image acquisition methodology will be discussed, unique enhancement algorithms will be explained, and the automated routines for measuring the critical component features will be presented. An additional feature discussed is the independent nature of the modular software components, which allows images to be automatically acquired, processed, and evaluated by the computer in the background while the operator reviews other images on the monitor. System components were also key to achieving the required image resolution; factors such as scintillator selection, x-ray source energy, optical components and layout, as well as geometric unsharpness issues are considered in the paper. Finally, the paper examines the numerous quality improvement factors and cost-saving advantages that will be realized at the Savannah River Site due to the implementation of the Automated Pinch Weld Analysis System (APWAS).

  7. Analysis of RTM extended images for VTI media

    KAUST Repository

    Li, Vladimir

    2015-08-19

    Extended images obtained from reverse-time migration (RTM) contain information about the accuracy of the velocity field and subsurface illumination at different incidence angles. Here, we evaluate the influence of errors in the anisotropy parameters on the shape of the residual moveout (RMO) in P-wave RTM extended images for VTI (transversely isotropic with a vertical symmetry axis) media. Considering the actual spatial distribution of the zero-dip NMO velocity (Vnmo), which could be approximately estimated by conventional techniques, we analyze the extended images obtained with distorted fields of the parameters η and δ. Differential semblance optimization (DSO) and stack-power estimates are employed to study the sensitivity of focusing to the anisotropy parameters. The results show that the signature of η is dip-dependent, whereas errors in δ cause defocusing only if that parameter is laterally varying. Hence, earlier results regarding the influence of η and δ on reflection moveout and migration velocity analysis remain generally valid in the extended image space for complex media. The dependence of RMO on errors in the anisotropy parameters provides essential insights for anisotropic wavefield tomography using extended images.

  8. Nanoscale imaging and analysis of fully hydrated materials

    Science.gov (United States)

    Jungjohann, Katherine Leigh

    The study of nanomaterials in a liquid environment can provide insight into processes and dynamics with applications to energy storage materials, catalysis, nanomaterial growth and biological structures. For these applications we have developed techniques for the use of a dedicated in situ fluid holder in combination with aberration-corrected scanning transmission electron microscopy (STEM) and dynamic transmission electron microscopy (DTEM) for imaging nanomaterials at atomic-scale resolution within a fluid layer. The capabilities of the in situ fluid holder for STEM have been tested by comparing SiN window thicknesses to optimize imaging conditions and by using electron energy loss spectroscopy to accurately measure the fluid path length within the cell and provide chemical analysis. The imaging artifacts caused by the high-energy scanning electron beam have been investigated to determine the causes of bubbling, contamination and charging within the fluid cell, suggesting strategies to mitigate these effects. The DTEM has demonstrated the growth of lead sulfide nanoparticles from a precursor solution driven by the sample-drive laser, separate from the imaging electrons. These techniques present an ideal platform for future studies of biological structures and dynamics at physiological conditions under low-dose imaging with high temporal and spatial resolution.

  9. Automatic comic page image understanding based on edge segment analysis

    Science.gov (United States)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
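
    The first step above, grouping edge points into contiguous edge segments, can be sketched as a walk over an 8-connected binary edge map. This is a simplified stand-in for the paper's Canny edge-segment extraction (the real method chains points in order along the contour; here connected pixels are simply grouped):

```python
import numpy as np

def chain_edge_points(edge_map):
    """Group edge pixels of a binary map into 8-connected chains (edge segments)."""
    visited = np.zeros_like(edge_map, dtype=bool)
    h, w = edge_map.shape
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    chains = []
    for y, x in zip(*np.nonzero(edge_map)):
        if visited[y, x]:
            continue
        # Depth-first walk collecting all connected edge pixels.
        stack, chain = [(y, x)], []
        visited[y, x] = True
        while stack:
            cy, cx = stack.pop()
            chain.append((cy, cx))
            for dy, dx in neighbours:
                ny, nx = cy + dy, cx + dx
                if 0 <= ny < h and 0 <= nx < w and edge_map[ny, nx] and not visited[ny, nx]:
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        chains.append(chain)
    return chains

# A toy edge map with two separate segments: an L-shape and a short diagonal.
em = np.zeros((10, 10), dtype=bool)
em[2, 2:7] = True                 # horizontal stroke
em[2:6, 6] = True                 # vertical stroke joined to it (one L-shaped segment)
em[7, 0], em[8, 1] = True, True   # 8-connected diagonal pair (second segment)
segments = chain_edge_points(em)
print(len(segments), sorted(len(s) for s in segments))  # 2 segments
```

    Line segments and storyboard borders would then be detected within each chain, as the paper's second and third steps describe.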

  10. Within-subject template estimation for unbiased longitudinal image analysis.

    Science.gov (United States)

    Reuter, Martin; Schmansky, Nicholas J; Rosas, H Diana; Fischl, Bruce

    2012-07-16

    Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Error analysis of large aperture static interference imaging spectrometer

    Science.gov (United States)

    Li, Fan; Zhang, Guo

    2015-12-01

    Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux and wide spectral range, which overcomes the contradiction between high flux and high stability and thus has important value in scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, the errors introduced in the LASIS imaging process follow different laws, and its data processing is correspondingly complicated. In order to improve the accuracy of spectrum detection and serve quantitative analysis and monitoring of topographical surface features, the error laws of LASIS imaging must be characterized. In this paper, the LASIS errors are classified as interferogram error, radiometric correction error and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS with combined temporal and spatial modulation is experimentally analyzed, together with the errors from the radiometric correction and spectral inversion processes.

  12. Monitoring polymorphic transformations by using in situ Raman hyperspectral imaging and image multiset analysis.

    Science.gov (United States)

    Piqueras, S; Duponchel, L; Tauler, R; de Juan, A

    2014-03-28

    Polymorphism is often encountered in many crystalline compounds. To control the quality of the products, it is important to know about the potential presence of polymorph transformations induced by different agents, such as light exposure or temperature changes. Raman images offer great potential to identify the polymorphs involved in a process and to accurately describe this kind of solid-state transformation across the scanned surface. By way of example, this work proposes the use of multiset analysis on a series of Raman hyperspectral images acquired during a thermally induced transformation of carbamazepine as the optimal way to extract useful information about polymorphic or any other kind of dynamic transformation among process compounds. Image multiset analysis, performed by using Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS), furnishes pure spectra and distribution maps of the compounds involved in the process and, hence, allows the identification of polymorphs and, more importantly, the description of the process evolution at a global and local (pixel) level. Thus, the process is defined from a spatial point of view and by means of a set of global process profiles dependent on the process control variable. The results obtained confirm the power of this methodology and show the crucial role of the spatial information contained in the image (absent in conventional spectroscopy) for a correct process description. Copyright © 2014 Elsevier B.V. All rights reserved.
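
    At its core, MCR-ALS factorizes the data matrix D (pixels × wavenumbers) as D ≈ C Sᵀ, alternating least-squares updates of the concentration maps C and pure spectra S under a nonnegativity constraint. A bare-bones sketch on synthetic two-component data; real MCR-ALS implementations add normalization, proper constrained solvers and convergence criteria, all omitted here:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic pure spectra (2 components x 80 channels) and concentration maps.
S_true = np.abs(rng.normal(size=(2, 80)))
C_true = np.abs(rng.normal(size=(150, 2)))
D = C_true @ S_true + 0.01 * rng.normal(size=(150, 80))

# Alternating least squares with nonnegativity imposed by clipping.
S = np.abs(rng.normal(size=(2, 80)))        # random initial spectra estimate
for _ in range(200):
    C = np.clip(D @ np.linalg.pinv(S), 0, None)   # update concentrations
    S = np.clip(np.linalg.pinv(C) @ D, 0, None)   # update spectra
relative_error = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print(round(relative_error, 3))  # small residual: the two components are recovered
```

    In the multiset setting the paper describes, the images from all temperatures are stacked row-wise into one D, so a single set of pure spectra is resolved jointly across the whole process.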

  13. Digital image analysis in breast pathology-from image processing techniques to artificial intelligence.

    Science.gov (United States)

    Robertson, Stephanie; Azizpour, Hossein; Smith, Kevin; Hartman, Johan

    2018-04-01

    Breast cancer is the most common malignant disease in women worldwide. In recent decades, earlier diagnosis and better adjuvant therapy have substantially improved patient outcomes. Diagnosis by histopathology has proven instrumental in guiding breast cancer treatment, but new challenges have emerged as our increasing understanding of cancer over the years has revealed its complex nature. As patient demand for personalized breast cancer therapy grows, we face an urgent need for more precise biomarker assessment and more accurate histopathologic breast cancer diagnosis to make better therapy decisions. The digitization of pathology data has opened the door to faster, more reproducible, and more precise diagnoses through computerized image analysis. Software to assist diagnostic breast pathology through image processing techniques has been around for years, but recent breakthroughs in artificial intelligence (AI) promise to fundamentally change the way we detect and treat breast cancer in the near future. Machine learning, a subfield of AI that applies statistical methods to learn from data, has seen an explosion of interest in recent years because of its ability to recognize patterns in data with less need for human instruction. One technique in particular, known as deep learning, has produced groundbreaking results in many important problems, including image classification and speech recognition. In this review, we cover the use of AI and deep learning in diagnostic breast pathology, and other recent developments in digital image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. GRETNA: a graph theoretical network analysis toolbox for imaging connectomics

    Directory of Open Access Journals (Sweden)

    Jinhui Wang

    2015-06-01

    Full Text Available Recent studies have suggested that the brain's structural and functional networks (i.e., connectomics) can be constructed by various imaging technologies (e.g., EEG/MEG; structural, diffusion and functional MRI) and further characterized by graph theory. Given the huge complexity of network construction, analysis and statistics, toolboxes incorporating these functions are largely lacking. Here, we developed the GRaph thEoreTical Network Analysis (GRETNA) toolbox for imaging connectomics. GRETNA contains several key features as follows: (i) an open-source, Matlab-based, cross-platform (Windows and UNIX OS) package with a graphical user interface; (ii) allowing topological analyses of global and local network properties with parallel computing ability, independent of imaging modality and species; (iii) providing flexible manipulations in several key steps during network construction and analysis, which include network node definition, network connectivity processing, network type selection and choice of thresholding procedure; (iv) allowing statistical comparisons of global, nodal and connectional network metrics and assessments of the relationship between these network metrics and clinical or behavioral variables of interest; and (v) including functionality in image preprocessing and network construction based on resting-state functional MRI (R-fMRI) data. After applying GRETNA to a publicly released R-fMRI dataset of 54 healthy young adults, we demonstrated that human brain functional networks exhibit efficient small-world, assortative, hierarchical and modular organizations and possess highly connected hubs, and that these findings are robust against different analytical strategies. With these efforts, we anticipate that GRETNA will accelerate imaging connectomics in an easy, quick and flexible manner. GRETNA is freely available on the NITRC website (http://www.nitrc.org/projects/gretna/).

  15. GRETNA: a graph theoretical network analysis toolbox for imaging connectomics.

    Science.gov (United States)

    Wang, Jinhui; Wang, Xindi; Xia, Mingrui; Liao, Xuhong; Evans, Alan; He, Yong

    2015-01-01

    Recent studies have suggested that the brain's structural and functional networks (i.e., connectomics) can be constructed by various imaging technologies (e.g., EEG/MEG; structural, diffusion and functional MRI) and further characterized by graph theory. Given the huge complexity of network construction, analysis and statistics, toolboxes incorporating these functions are largely lacking. Here, we developed the GRaph thEoreTical Network Analysis (GRETNA) toolbox for imaging connectomics. GRETNA contains several key features as follows: (i) an open-source, Matlab-based, cross-platform (Windows and UNIX OS) package with a graphical user interface (GUI); (ii) allowing topological analyses of global and local network properties with parallel computing ability, independent of imaging modality and species; (iii) providing flexible manipulations in several key steps during network construction and analysis, which include network node definition, network connectivity processing, network type selection and choice of thresholding procedure; (iv) allowing statistical comparisons of global, nodal and connectional network metrics and assessments of the relationship between these network metrics and clinical or behavioral variables of interest; and (v) including functionality in image preprocessing and network construction based on resting-state functional MRI (R-fMRI) data. After applying GRETNA to a publicly released R-fMRI dataset of 54 healthy young adults, we demonstrated that human brain functional networks exhibit efficient small-world, assortative, hierarchical and modular organizations and possess highly connected hubs, and that these findings are robust against different analytical strategies. With these efforts, we anticipate that GRETNA will accelerate imaging connectomics in an easy, quick and flexible manner. GRETNA is freely available on the NITRC website.
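    GRETNA itself is a Matlab toolbox; as a language-neutral illustration of two of the global metrics it reports (the clustering coefficient and the characteristic path length that together define small-worldness), here is a plain-NumPy sketch for undirected binary networks. This is an independent re-implementation of the standard definitions, not GRETNA code.

```python
from collections import deque

import numpy as np

def clustering_coefficient(A):
    """Mean clustering coefficient of an undirected binary graph,
    given its 0/1 adjacency matrix A."""
    n = len(A)
    coeffs = []
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = A[np.ix_(nbrs, nbrs)].sum() / 2  # edges among i's neighbours
        coeffs.append(links / (k * (k - 1) / 2))
    return float(np.mean(coeffs))

def characteristic_path_length(A):
    """Average shortest-path length over connected node pairs, via BFS."""
    n = len(A)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs
```

    A small-world network is then one whose clustering coefficient is much higher, and path length only slightly higher, than in matched random networks.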

  16. AUTOMATED DATA ANALYSIS FOR CONSECUTIVE IMAGES FROM DROPLET COMBUSTION EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Christopher Lee Dembia

    2012-09-01

    Full Text Available A simple automated image analysis algorithm has been developed that processes consecutive images from high speed, high resolution digital images of burning fuel droplets. The droplets burn under conditions that promote spherical symmetry. The algorithm performs the tasks of edge detection of the droplet’s boundary using a grayscale intensity threshold, and shape fitting either a circle or ellipse to the droplet’s boundary. The results are compared to manual measurements of droplet diameters done with commercial software. Results show that it is possible to automate data analysis for consecutive droplet burning images even in the presence of a significant amount of noise from soot formation. An adaptive grayscale intensity threshold provides the ability to extract droplet diameters for the wide range of noise encountered. In instances where soot blocks portions of the droplet, the algorithm manages to provide accurate measurements if a circle fit is used instead of an ellipse fit, as an ellipse can be too accommodating to the disturbance.
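    The threshold-then-fit pipeline described above can be sketched as follows. This is an illustration, not the authors' code: the grayscale threshold here is fixed rather than adaptive, only the circle fit is shown (using the algebraic Kasa least-squares method, which may differ from the authors' fitting procedure), and all function names are this sketch's own.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, r).

    Uses the linearization 2*cx*x + 2*cy*y + c = x^2 + y^2,
    where c = r^2 - cx^2 - cy^2.
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

def droplet_diameter(image, threshold):
    """Threshold a grayscale frame (droplet darker than background),
    extract the boundary of the dark region, and return the fitted
    circle's diameter in pixels."""
    mask = image < threshold
    # Interior pixels: in the mask and with all four 4-neighbours in it
    interior = np.zeros_like(mask)
    interior[1:-1, 1:-1] = (mask[1:-1, 1:-1] & mask[:-2, 1:-1] &
                            mask[2:, 1:-1] & mask[1:-1, :-2] & mask[1:-1, 2:])
    boundary = mask & ~interior
    ys, xs = np.nonzero(boundary)
    cx, cy, r = fit_circle(xs.astype(float), ys.astype(float))
    return 2 * r
```

    On a synthetic frame containing a dark disk, the fitted diameter recovers the disk diameter to within about a pixel, which is the kind of automated per-frame measurement the abstract describes.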

  17. Analysis of Scanned Probe Images for Magnetic Focusing in Graphene

    Science.gov (United States)

    Bhandari, Sagar; Lee, Gil-Ho; Kim, Philip; Westervelt, Robert M.

    2017-07-01

    We have used cooled scanning probe microscopy (SPM) to study electron motion in nanoscale devices. The charged tip of the microscope was raster-scanned at constant height above the surface as the conductance of the device was measured. The image charge scatters electrons away, changing the path of electrons through the sample. Using this technique, we imaged cyclotron orbits that flow between two narrow contacts in the magnetic focusing regime for ballistic hBN-graphene-hBN devices. We present herein an analysis of our magnetic focusing imaging results based on the effects of the tip-created charge density dip on the motion of ballistic electrons. The density dip locally reduces the Fermi energy, creating a force that pushes electrons away from the tip. When the tip is above the cyclotron orbit, electrons are deflected away from the receiving contact, creating an image by reducing the transmission between contacts. The data and our analysis suggest that the graphene edge is rather rough, and electrons scattering off the edge bounce in random directions. However, when the tip is close to the edge, it can enhance transmission by bouncing electrons away from the edge, toward the receiving contact. Our results demonstrate that cooled SPM is a promising tool to investigate the motion of electrons in ballistic graphene devices.
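    In the magnetic focusing regime described above, transmission peaks occur when the contact separation L equals an integer number of cyclotron-orbit diameters, giving B_j = 2*hbar*k_F*j/(e*L) with the graphene Fermi wavevector k_F = sqrt(pi*n). The sketch below, not taken from the paper, evaluates this textbook relation; the carrier density and contact spacing are arbitrary example values.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def focusing_fields(density_m2, contact_sep_m, n_peaks=3):
    """Magnetic fields B_j of the first few magnetic-focusing peaks
    for contacts a distance L apart: B_j = 2 * hbar * k_F * j / (e * L),
    with k_F = sqrt(pi * n) for graphene (4-fold degeneracy)."""
    k_f = np.sqrt(np.pi * density_m2)
    j = np.arange(1, n_peaks + 1)
    return 2 * HBAR * k_f * j / (E_CHARGE * contact_sep_m)
```

    For example, a density of 1e16 m^-2 and a 1 um contact spacing put the first peak near 0.23 T, with higher peaks at integer multiples.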

  18. Multiple-Instance Learning for Medical Image and Video Analysis.

    Science.gov (United States)

    Quellec, Gwenole; Cazuguel, Guy; Cochener, Beatrice; Lamard, Mathieu

    2017-01-01

    Multiple-instance learning (MIL) is a recent machine-learning paradigm that is particularly well suited to medical image and video analysis (MIVA) tasks. Based solely on class labels assigned globally to images or videos, MIL algorithms learn to detect relevant patterns locally in images or videos. These patterns are then used for classification at a global level. Because supervision relies on global labels, manual segmentations are not needed to train MIL algorithms, unlike traditional single-instance learning (SIL) algorithms. Consequently, these solutions are attracting increasing interest from the MIVA community: since the term was coined by Dietterich et al. in 1997, 73 research papers about MIL have been published in the MIVA literature. This paper reviews the existing strategies for modeling MIVA tasks as MIL problems, recommends general-purpose MIL algorithms for each type of MIVA task, and discusses MIVA-specific MIL algorithms. Various experiments performed on medical image and video datasets are compiled in order to back up these discussions. This meta-analysis shows that, besides being more convenient than SIL solutions, MIL algorithms are also more accurate in many cases. In other words, MIL is the ideal solution for many MIVA tasks. Recent trends are discussed, and future directions are proposed for this emerging paradigm.
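    The paper is a review and does not prescribe one algorithm, but the core MIL idea it surveys (bag-level labels, instance-level patterns combined by max-pooling) can be illustrated with a toy linear scorer trained by perceptron updates on each misclassified bag's highest-scoring "witness" instance. All names and the training scheme below are this sketch's own, not from the paper.

```python
import numpy as np

def train_mil_perceptron(bags, labels, epochs=50, lr=0.1, seed=0):
    """Tiny MIL classifier: a linear instance scorer whose per-bag
    prediction is the max over instance scores, trained with perceptron
    updates on the witness instance of each misclassified bag.

    bags   : list of (n_instances, dim) arrays.
    labels : list of +1 / -1 bag labels.
    """
    dim = bags[0].shape[1]
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=dim)
    b = 0.0
    for _ in range(epochs):
        for X, y in zip(bags, labels):
            scores = X @ w + b
            i = np.argmax(scores)        # witness instance
            if y * scores[i] <= 0:       # bag misclassified
                w += lr * y * X[i]
                b += lr * y
    return w, b

def predict_bag(w, b, X):
    """A bag is positive iff its best instance scores above zero."""
    return 1 if np.max(X @ w + b) > 0 else -1
```

    Note how positive bags may contain mostly irrelevant instances; only the single most suspicious instance needs to score high, which mirrors how a globally labeled medical image can be driven by one local lesion.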

  19. Four dimensional reconstruction and analysis of plume images

    Science.gov (United States)

    Dhawan, Atam P.; Peck, Charles, III; Disimile, Peter

    1991-01-01

    A number of methods have been investigated and are under current investigation for monitoring the health of the Space Shuttle Main Engine (SSME). Plume emission analysis has recently emerged as a potential technique for correlating the emission characteristics with the health of an engine. In order to correlate the visual and spectral signatures of the plume emission with the characteristic health monitoring features of the engine, the plume emission data must be acquired, stored, and analyzed in a manner similar to flame emission spectroscopy. The characteristic visual and spectral signatures of the elements vaporized in the exhaust plume, along with the features related to their temperature, pressure, and velocity, can be analyzed once the images of plume emission are effectively acquired, digitized, and stored on a computer. Since the emission image varies with respect to time at a specified planar location, four dimensional visual and spectral analysis needs to be performed on the plume emission data. In order to achieve this objective, feasibility research was conducted to digitize, store, analyze, and visualize the images of a subsonic jet in a cross flow. The jet structure was made visible using a direct injection flow visualization technique. The results of time-history based three dimensional reconstruction of the cross sectional images corresponding to a specific planar location of the jet structure are presented. The experimental set-up to acquire such data is described and three dimensional displays of time-history based reconstructions of the jet structure are discussed.

  20. Interactive Exploration for Image Retrieval

    Directory of Open Access Journals (Sweden)

    Jérôme Fournier

    2005-08-01

    Full Text Available We present a new version of our content-based image retrieval system RETIN. It is based on adaptive quantization of the color space, together with new features aimed at representing the spatial relationship between colors. Color analysis is also extended to texture. Using these powerful indexes, an original interactive retrieval strategy is introduced. The process is based on two steps for handling the retrieval of very large image categories. First, a controlled exploration method of the database is presented. Second, a relevance feedback method based on statistical learning is proposed. All the steps are evaluated by experiments on a generalist database.
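    RETIN's actual indexes are richer than this, but the idea of "adaptive quantization of the color space" (letting the data, rather than a fixed grid, choose the color bins) can be sketched with k-means over an image's own pixel colors; the normalized cluster populations then serve as the image signature. The function below is illustrative only and is not the RETIN implementation.

```python
import numpy as np

def color_signature(pixels, n_bins=8, n_iter=20, seed=0):
    """Adaptive colour quantisation: cluster the image's own pixel
    colours with plain k-means, then return the cluster centres and the
    normalised cluster populations as a histogram signature.

    pixels : (n_pixels, 3) float array of colour values.
    """
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), n_bins, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        # Move each centre to the mean of its members
        for k in range(n_bins):
            members = pixels[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    hist = np.bincount(assign, minlength=n_bins).astype(float)
    return centers, hist / hist.sum()
```

    Because the bins adapt to each image, two images are compared through signatures that spend their resolution where the colors actually are, instead of wasting fixed bins on empty regions of the color cube.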