WorldWideScience

Sample records for web image retrieval

  1. Intelligent web image retrieval system

    Science.gov (United States)

    Hong, Sungyong; Lee, Chungwoo; Nah, Yunmook

    2001-07-01

    Recently, web sites such as e-business and shopping mall sites deal with large amounts of image information. To find a specific image from these sources, we usually use web search engines or image database engines, which rely on keyword-only retrieval or color-based retrieval with limited search capabilities. This paper presents an intelligent web image retrieval system. We propose the system architecture, texture- and color-based image classification and indexing techniques, and representation schemes for user usage patterns. A query can be given by providing keywords, by selecting one or more sample texture patterns, by assigning color values within positional color blocks, or by combining some or all of these factors. The system keeps track of each user's preferences by generating user query logs and automatically adds more search information to subsequent queries. To show the usefulness of the proposed system, experimental results on recall and precision are also presented.
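
    The paper's exact scoring function is not given in the abstract; the sketch below is a hypothetical illustration of its multi-factor query idea, combining keyword overlap with per-position color-block similarity (the 2x2 block layout and the sample records are assumptions for illustration).

```python
from math import sqrt

# Hypothetical records: keywords plus a 2x2 grid of average RGB block colors
# (standing in for the paper's "positional color blocks").
IMAGES = {
    "shirt.jpg": {"keywords": {"shirt", "cotton"},
                  "blocks": [(200, 30, 30), (200, 30, 30), (180, 20, 20), (180, 20, 20)]},
    "mug.jpg":   {"keywords": {"mug", "ceramic"},
                  "blocks": [(30, 30, 200), (30, 30, 200), (20, 20, 180), (20, 20, 180)]},
}

def color_distance(a, b):
    """Euclidean distance between two RGB triples."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def score(image, query_keywords=None, query_blocks=None):
    """Combine keyword overlap and positional color similarity into one score."""
    s = 0.0
    if query_keywords:
        s += len(query_keywords & image["keywords"])  # one point per matched keyword
    if query_blocks:
        for pos, color in query_blocks.items():       # pos -> desired RGB in that block
            # Max possible RGB distance is sqrt(3)*255; map to a 0..1 similarity.
            s += 1.0 - color_distance(color, image["blocks"][pos]) / (sqrt(3) * 255)
    return s

def search(query_keywords=None, query_blocks=None):
    """Rank all images by combined score, best first."""
    return sorted(IMAGES,
                  key=lambda n: score(IMAGES[n], query_keywords, query_blocks),
                  reverse=True)
```

    Either factor alone, or both together, produces a ranking, mirroring the abstract's "some or all of these factors" query modes.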

  2. A unified relevance feedback framework for web image retrieval.

    Science.gov (United States)

    Cheng, En; Jing, Feng; Zhang, Lei

    2009-06-01

    Although relevance feedback (RF) has been extensively studied in the content-based image retrieval community, no commercial Web image search engine supports RF because of scalability, efficiency, and effectiveness issues. In this paper, we propose a unified relevance feedback framework for Web image retrieval. Our framework shows advantages over traditional RF mechanisms in three aspects. First, during the RF process, both textual and visual features are used in a sequential way. To seamlessly combine textual feature-based RF and visual feature-based RF, a query concept-dependent fusion strategy is automatically learned. Second, the textual feature-based RF mechanism employs an effective search result clustering (SRC) algorithm to obtain salient phrases, from which we can construct an accurate and low-dimensional textual space for the resulting Web images. Thus, we can integrate RF into Web image retrieval in a practical way. Last, a new user interface (UI) is proposed to support implicit RF. On the one hand, unlike traditional RF UIs, which force users to make explicit judgments on the results, the new UI treats the users' click-through data as implicit relevance feedback to reduce the burden on users. On the other hand, unlike traditional RF UIs, which abruptly replace previous results with new ones, a recommendation scheme is used to help users better understand the feedback process and to mitigate the possible waiting caused by RF. Experimental results on a database of nearly three million Web images show that the proposed framework is practical, scalable, and effective.
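
    The abstract does not spell out the framework's update rule; as background, the classic Rocchio update is the textbook RF mechanism that such systems build on (this is a generic sketch, not the paper's method — clicked results stand in for explicit "relevant" judgments).

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio relevance feedback: move the query vector toward the
    centroid of relevant (e.g. clicked) results and away from non-relevant
    ones. Vectors are plain lists of floats of equal length."""
    dims = len(query)

    def centroid(vecs):
        if not vecs:
            return [0.0] * dims
        return [sum(v[i] for v in vecs) / len(vecs) for i in range(dims)]

    r, n = centroid(relevant), centroid(nonrelevant)
    return [alpha * query[i] + beta * r[i] - gamma * n[i] for i in range(dims)]
```

    With the default weights, a single clicked result pulls the query noticeably toward its feature vector while leaving the original query term dominant.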

  3. Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.

    Science.gov (United States)

    Kahn, Charles E

    2008-09-01

    Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to easily incorporate a context-sensitive image gallery into their documents.

  4. Dynamic “Inline” Images: Context-Sensitive Retrieval and Integration of Images into Web Documents

    OpenAIRE

    Kahn, Charles E.

    2008-01-01

    Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using “Web 2.0” technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner™ image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on ...

  5. A web service for enabling medical image retrieval integrated into a social medical image sharing platform.

    Science.gov (United States)

    Niinimäki, Marko; Zhou, Xin; de la Vega, Enrique; Cabrer, Miguel; Müller, Henning

    2010-01-01

    Content-based visual image access is moving from a research domain towards real applications. So far, most image retrieval applications have targeted a single specialized domain, such as lung CTs as a diagnosis aid or the classification of general images by anatomic region, modality, and view. This article describes the use of a content-based image retrieval system in connection with the medical image sharing platform MEDTING, and thus a data set with very large variety. Similarity retrieval is possible for all cases of the social image sharing platform, so cases can be linked by either visual similarity or similarity in keywords. Visual retrieval is based on GIFT (the GNU Image Finding Tool). The index is updated with new user-added images via RSS (Really Simple Syndication) feeds. The ARC (Advanced Resource Connector) middleware is used to implement a web service for similarity retrieval, simplifying the integration of this service. The novelty of this article lies in the application/integration and the image updating strategy. The retrieval methods themselves employ existing techniques that are all open source and can easily be reproduced.
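
    The RSS-driven update step can be sketched with the standard library: parse the feed, collect item links, and keep only those not yet in the visual index. The feed content and URLs below are hypothetical; MEDTING's real feed fields may differ.

```python
import xml.etree.ElementTree as ET

# A minimal hypothetical RSS feed announcing newly shared images.
FEED = """<rss><channel>
  <item><title>Case 101: chest CT</title><link>http://example.org/img/101.png</link></item>
  <item><title>Case 102: knee MR</title><link>http://example.org/img/102.png</link></item>
</channel></rss>"""

def new_image_links(feed_xml, already_indexed):
    """Return links of feed items that are not yet in the visual index,
    in feed order, so an indexer can fetch and feature-extract just those."""
    root = ET.fromstring(feed_xml)
    links = [item.findtext("link") for item in root.iter("item")]
    return [link for link in links if link not in already_indexed]
```

    Polling this function periodically keeps the similarity index in sync with user uploads without re-indexing the whole collection.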

  6. Improving Concept-Based Web Image Retrieval by Mixing Semantically Similar Greek Queries

    Science.gov (United States)

    Lazarinis, Fotis

    2008-01-01

    Purpose: Image searching is a common activity for web users. Search engines offer image retrieval services based on textual queries. Previous studies have shown that web searching is more demanding when the search is not in English and does not use a Latin-based language. The aim of this paper is to explore the behaviour of the major search…

  7. Creating a Web-based image database for benchmarking image retrieval systems

    Science.gov (United States)

    Joergensen, Corinne; Srihari, Rohini K.

    1999-05-01

    There is, at present, a critical need within image retrieval research for an image testbed that would enable objective evaluation of different content-based search engines, indexing and metadata schemes, and search heuristics, as well as research and evaluation in image-based knowledge structures and system architectures, users' needs in image retrieval, and the cognitive processes involved in image searching. This paper discusses a pilot project specifying and establishing a prototype testbed for the evaluation of image retrieval techniques. A feasibility study is underway, focusing on the development of a large set of standardized test images accessible through a web interface, and researchers in the field are being surveyed for input. Areas being addressed in the feasibility study include technical specifications as well as content issues such as: which specific image domains to include; the useful proportion of images belonging to specific domains versus images belonging to a general 'world' domain; the types of image attributes and the baseline and 'advanced' levels of image description needed; and the research needs to be accommodated, as well as the development of a standardized set of test queries and the establishment of methods for 'truthing' the database and test queries.

  8. Indexing and retrieving Web documents as direct manipulation of images

    Science.gov (United States)

    Ferri, Fernando; Grifoni, Patrizia; Mussio, Piero; Padula, Marco

    2000-12-01

    The rapid growth of network communication through the World Wide Web has encouraged a large diffusion of connections to the Internet, due to the heavily interactive services offered for accessing, using, and producing the incredible mass of information and, more generally, resources now available. People communicating in this environment are usually end users who are not skilled in computer science but are experienced in a specific area; they are generally interested in searching, producing information, and accessibility. The phenomenon of the World Wide Web is producing a significant change in the concept of a document, which is becoming strongly visual and dynamically arranged. A document is an image, and an image is a document. This change requires a new approach to presenting, authoring, indexing, and querying a web document. In this paper we propose a visual language defined to reach the goals introduced above, discussing the case of an information base containing clinical data. Notwithstanding the amount and heterogeneity of the data available, it is quite difficult to access truly interesting information and to exploit it suitably; this is due to the poor usability of tools, which offer an interaction style still limited with respect to WIMP (Windows, Icons, Menus, Pointers) interfaces, and to the indexing techniques usually adopted to organize web pages by means of robots and search engines.

  9. A web-accessible content-based cervicographic image retrieval system

    Science.gov (United States)

    Xue, Zhiyun; Long, L. Rodney; Antani, Sameer; Jeronimo, Jose; Thoma, George R.

    2008-03-01

    Content-based image retrieval (CBIR) is the process of retrieving images by directly using image visual characteristics. In this paper, we present a prototype system implemented for CBIR for a uterine cervix image (cervigram) database. This cervigram database is a part of data collected in a multi-year longitudinal effort by the National Cancer Institute (NCI), and archived by the National Library of Medicine (NLM), for the study of the origins of, and factors related to, cervical precancer/cancer. Users may access the system with any Web browser. The system is built with a distributed architecture which is modular and expandable; the user interface is decoupled from the core indexing and retrieving algorithms, and uses open communication standards and open source software. The system tries to bridge the gap between a user's semantic understanding and image feature representation, by incorporating the user's knowledge. Given a user-specified query region, the system returns the most similar regions from the database, with respect to attributes of color, texture, and size. Experimental evaluation of the retrieval performance of the system on "groundtruth" test data illustrates its feasibility to serve as a possible research tool to aid the study of the visual characteristics of cervical neoplasia.
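
    The system's actual color feature is not specified in the abstract; a common baseline for region color matching (used here purely as an illustrative sketch) is a coarse quantized RGB histogram compared by histogram intersection.

```python
def histogram(pixels, bins=4):
    """Coarse RGB histogram: each channel quantized into `bins` levels,
    normalized so the counts sum to 1. `pixels` is a list of (r, g, b)."""
    h = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        h[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = len(pixels)
    return [c / total for c in h]

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 for identical distributions,
    0.0 for regions with no colors in common."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

    In a region-based system like the one described, the query region's histogram would be intersected against stored region histograms and the highest-scoring regions returned.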

  10. SPIRS: a Web-based image retrieval system for large biomedical databases.

    Science.gov (United States)

    Hsu, William; Antani, Sameer; Long, L Rodney; Neve, Leif; Thoma, George R

    2009-04-01

    With the increasing use of images in disease research, education, and clinical medicine, the need for methods that effectively archive, query, and retrieve these images by their content is underscored. This paper describes the implementation of a Web-based retrieval system called SPIRS (Spine Pathology & Image Retrieval System), which permits exploration of a large biomedical database of digitized spine X-ray images and data from a national health survey using a combination of visual and textual queries. SPIRS is a generalizable framework that consists of four components: a client applet, a gateway, an indexing and retrieval system, and a database of images and associated text data. The prototype system is demonstrated using text and imaging data collected as part of the second U.S. National Health and Nutrition Examination Survey (NHANES II). Users search the image data by providing a sketch of the vertebral outline or selecting an example vertebral image and some relevant text parameters. Pertinent pathology on the image/sketch can be annotated and weighted to indicate importance. During the course of development, we explored different algorithms to perform functions such as segmentation, indexing, and retrieval. Each algorithm was tested individually and then implemented as part of SPIRS. To evaluate the overall system, we first tested the system's ability to return similar vertebral shapes from the database given a query shape. Initial evaluations using visual queries only (no text) have shown that the system achieves up to 68% accuracy in finding images in the database that exhibit similar abnormality type and severity. Relevance feedback mechanisms have been shown to increase accuracy by an additional 22% after three iterations. While we primarily demonstrate this system in the context of retrieving vertebral shape, our framework has also been adapted to search a collection of 100,000 uterine cervix images to study the progression of cervical cancer. 
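
    SPIRS's shape matching algorithm is not detailed in the abstract; a standard descriptor for comparing outlines such as vertebral sketches (shown here as a generic illustration, not SPIRS's actual method) is the centroid-distance signature, which is invariant to translation and scale.

```python
from math import hypot

def shape_signature(points):
    """Centroid-distance signature of an outline: the distance of each
    sampled boundary point from the centroid, scaled so the largest
    distance is 1 (giving translation and scale invariance)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    d = [hypot(x - cx, y - cy) for x, y in points]
    m = max(d)
    return [v / m for v in d]

def shape_distance(a, b):
    """Mean absolute difference between two signatures sampled with the
    same number of points in the same traversal order."""
    sa, sb = shape_signature(a), shape_signature(b)
    return sum(abs(x - y) for x, y in zip(sa, sb)) / len(sa)
```

    A query sketch and a database outline with small distance would rank as similar shapes; rotation invariance would additionally require aligning the signatures' starting points.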

  11. Web tools for effective retrieval, visualization, and evaluation of cardiology medical images and records

    Science.gov (United States)

    Masseroli, Marco; Pinciroli, Francesco

    2000-12-01

    To provide easy retrieval, integration, and evaluation of multimodal cardiology images and data in a web browser environment, distributed application technologies and Java programming were used to implement a client-server architecture based on software agents. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patients' personal and clinical data. The client side is a Java applet running in a web browser that provides a friendly medical user interface to perform queries on patient and medical test data and to integrate and properly visualize the various query results. A set of tools based on the Java Advanced Imaging API enables users to process and analyze the retrieved cardiology images and quantify their features in different regions of interest. The platform independence of Java technology makes the developed prototype easy to manage in a centralized form and to deploy at any site with an intranet or Internet connection. By giving healthcare providers effective tools for querying, visualizing, and comprehensively evaluating cardiology medical images and records in all the locations where they may need them (i.e., emergency rooms, operating theaters, wards, or even outpatient clinics), the developed prototype represents an important aid in providing more efficient diagnoses and medical treatments.

  12. Design of a web portal for interdisciplinary image retrieval from multiple online image resources.

    Science.gov (United States)

    Kammerer, F J; Frankewitsch, T; Prokosch, H-U

    2009-01-01

    Images play an important role in medicine. Finding the desired images within the multitude of online image databases is a time-consuming and frustrating process, and existing websites do not meet all the requirements for an ideal learning environment for medical students. This work establishes a new web portal providing a centralized access point to a selected number of online image databases. A back-end system locates images on given websites and extracts relevant metadata. The images are indexed using the UMLS and the MetaMap system provided by the US National Library of Medicine. Specially developed functions allow the creation of individual navigation structures. The front-end system suits the specific needs of medical students. A navigation structure consisting of several medical fields, university curricula, and the ICD-10 was created. The images may be accessed via the given navigation structure or using different search functions. Cross-references are provided by the semantic relations of the UMLS. Over 25,000 images were identified and indexed. A pilot evaluation among medical students showed good first results concerning the acceptance of the developed navigation structures and search features. The integration of images from different sources into the UMLS semantic network offers a quick and easy-to-use learning environment.

  13. Mobile medical image retrieval

    Science.gov (United States)

    Duc, Samuel; Depeursinge, Adrien; Eggel, Ivan; Müller, Henning

    2011-03-01

    Images are an integral part of medical practice for diagnosis, treatment planning, and teaching. Image retrieval has gained in importance, mainly as a research domain, over the past 20 years. Both textual and visual retrieval of images are essential. As mobile devices have become reliable, with functionality equaling that of former desktop clients, mobile computing has gained ground and many applications have been explored. This creates a new field of mobile information search and access, in which images can play an important role, as they often allow complex scenarios to be understood much more quickly and easily than free text. Mobile information retrieval in general has skyrocketed over the past year, with many new applications and tools being developed and all sorts of interfaces being adapted to mobile clients. This article describes the constraints of an information retrieval system including visual and textual retrieval from the medical literature of BioMed Central and of the RSNA journals Radiology and Radiographics. Solutions for mobile data access are presented with an example on an iPhone in a web-based environment, as iPhones are frequently used and their operating system is expected to become the most frequent smartphone operating system in 2011. A web-based scenario was chosen to allow use by other smartphone platforms, such as Android, as well. The constraints of small screens and navigation with touch screens are taken into account in the development of the application. A hybrid approach had to be taken to allow pictures to be taken with the cell phone camera and uploaded for visual similarity search, as most smartphone producers block this functionality for web applications. Mobile information access, and in particular access to images, can be surprisingly efficient and effective on smaller screens. Images can be read on screen much faster, and the relevance of documents can be identified quickly through the images they contain.

  14. Image retrieval

    DEFF Research Database (Denmark)

    Ørnager, Susanne

    1997-01-01

    from text, image to object. An empirical study, based on 17 newspaper archives, demonstrates user group requirements including archivists (creators), journalists (immediate users), and newspaper readers (end-users). A word association test is completed and the terms are used to build a user interface...

  15. Challenges in Web Information Retrieval

    Science.gov (United States)

    Arora, Monika; Kanjilal, Uma; Varshney, Dinesh

    The major challenge in information access is the richness of the data available for retrieval, which has driven the development of principled approaches and strategies for searching. Search has become the leading paradigm for finding information on the World Wide Web. In building a successful web retrieval search engine model, a number of challenges arise at different levels, where techniques such as Usenet analysis and support vector machines can have a significant impact. The present investigation explores a number of identified problems, their levels, and how they relate to finding information on the web. This paper examines these issues by applying different methods, such as web graph analysis, the retrieval and analysis of newsgroup postings, and statistical methods for inferring meaning in text. We also discuss how one can gain control over the vast amounts of data on the web by addressing these problems in innovative ways that can greatly improve on the standard. The proposed model thus assists users in finding the data they need. The developed information retrieval model provides access to information available in various modes and media formats, facilitating users in retrieving relevant and comprehensive information efficiently and effectively as per their requirements. This paper also discusses the parameters and factors responsible for efficient searching. These parameters can be distinguished as more or less important based on the available inputs, and the important ones can be taken into account in future extensions or development of search engines.

  16. Efficient Graffiti Image Retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Chunlei; Wong, Pak C.; Ribarsky, William; Fan, Jianping

    2012-07-05

    Research on graffiti character recognition and retrieval, as a branch of traditional optical character recognition (OCR), has started to gain attention in recent years. We have investigated the special challenges of the graffiti image retrieval problem and propose a series of novel techniques to overcome them. The proposed bounding-box framework locates the character components in graffiti images to construct meaningful character strings, and conducts image-wise and semantic-wise retrieval on the strings rather than on the entire image. Using real-world data provided by the law enforcement community to the Pacific Northwest National Laboratory, we show that the proposed framework outperforms the traditional image retrieval framework, with better retrieval results and improved computational efficiency.

  17. Indexing and Retrieval for the Web.

    Science.gov (United States)

    Rasmussen, Edie M.

    2003-01-01

    Explores current research on indexing and ranking as retrieval functions of search engines on the Web. Highlights include measuring search engine stability; evaluation of Web indexing and retrieval; Web crawlers; hyperlinks for indexing and ranking; ranking for metasearch; document structure; citation indexing; relevance; query evaluation;…

  18. Emergent web intelligence advanced information retrieval

    CERN Document Server

    Badr, Youakim; Abraham, Ajith; Hassanien, Aboul-Ella

    2010-01-01

    Web Intelligence explores the impact of artificial intelligence and advanced information technologies on the next generation of Web-based systems, services, and environments, and on designing hybrid web systems that serve wired and wireless users more efficiently. Multimedia and XML-based data are produced regularly and in increasing volume in our daily digital activities, and their retrieval must be explored and studied in this emergent web-based era. 'Emergent Web Intelligence: Advanced Information Retrieval' provides reviews of the related cutting-edge technologies and insights.

  19. IMAGE RETRIEVAL: A STATE OF THE ART APPROACH FOR CBIR

    OpenAIRE

    AMANDEEP KHOKHER; DR. RAJNEESH TALWAR

    2011-01-01

    The emergence of multimedia, the availability of large image archives, and the rapid growth of the World Wide Web (WWW) have attracted significant research efforts in providing tools for effective retrieval and management of visual data. A major approach directed towards achieving this goal is to use the visual contents of the image data to segment, index, and retrieve relevant images from the image database. The commonest approaches use the so-called Content-Based Image Retrieval (CBIR) system...

  20. Neutrosophic Features for Image Retrieval

    Directory of Open Access Journals (Sweden)

    A.A. Salama

    2016-12-01

    Full Text Available The goal of an Image Retrieval System is to retrieve images that are relevant to the user's request from a large image collection. In this paper, we present texture features for images embedded in the neutrosophic domain. The aim is to extract a set of features to represent the content of each image in the training database to be used for the purpose of retrieving images from the database similar to the image under consideration.

  1. Web-based multimedia information retrieval for clinical application research

    Science.gov (United States)

    Cao, Xinhua; Hoo, Kent S., Jr.; Zhang, Hong; Ching, Wan; Zhang, Ming; Wong, Stephen T. C.

    2001-08-01

    We described a web-based data warehousing method for retrieving and analyzing neurological multimedia information. The web-based method supports convenient access, effective search and retrieval of clinical textual and image data, and on-line analysis. To improve the flexibility and efficiency of multimedia information query and analysis, a three-tier, multimedia data warehouse for epilepsy research has been built. The data warehouse integrates clinical multimedia data related to epilepsy from disparate sources and archives them into a well-defined data model.

  2. Multilingual retrieval of radiology images.

    Science.gov (United States)

    Kahn, Charles E

    2009-01-01

    The multilingual search engine ARRS GoldMiner Global was created to facilitate broad international access to a richly indexed collection of more than 200,000 radiologic images. Images are indexed according to keywords and medical concepts that appear in the unstructured text of their English-language captions. GoldMiner Global exploits the Unicode standard, which allows the accurate representation of characters and ideographs from virtually any language and supports both left-to-right and right-to-left text directions. The user interface supports queries in Arabic, Chinese, French, German, Italian, Japanese, Korean, Portuguese, Russian, or Spanish. GoldMiner Global incorporates an interface to the United States National Library of Medicine that translates queries into English-language Medical Subject Headings (MeSH) terms. The translated MeSH terms are then used to search the image index and retrieve relevant images. Explanatory text, pull-down menu choices, and navigational guides are displayed in the selected language; search results are displayed in English. GoldMiner Global is freely available on the World Wide Web. (c) RSNA, 2008.

  3. Metadata for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Adrian Sterca

    2010-12-01

    Full Text Available This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.
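
    The paper's exact combination scheme is not given in this abstract; one common hybrid pattern (sketched here as an assumption, with invented records) is a fast metadata prefilter followed by content-based ranking, which realizes the speed-of-metadata plus accuracy-of-content trade-off the abstract describes.

```python
def retrieve(db, query_tags, query_hist, top_k=2):
    """Two-stage hybrid retrieval. `db` maps image name to a record with a
    set of metadata tags and a normalized feature histogram."""
    # Stage 1 (metadata, cheap): keep only images sharing at least one tag.
    candidates = {name: rec for name, rec in db.items() if rec["tags"] & query_tags}
    # Stage 2 (content, accurate): rank survivors by histogram intersection.
    sim = lambda h: sum(min(a, b) for a, b in zip(h, query_hist))
    return sorted(candidates,
                  key=lambda name: sim(candidates[name]["hist"]),
                  reverse=True)[:top_k]
```

    Because the expensive content comparison only runs on the metadata survivors, the scheme scales better than pure CBIR over the whole collection.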

  4. An Analysis of Web Image Queries for Search.

    Science.gov (United States)

    Pu, Hsiao-Tieh

    2003-01-01

    Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)

  5. Analyzing web log files of the health on the net HONmedia search engine to define typical image search tasks for image retrieval evaluation.

    Science.gov (United States)

    Müller, Henning; Boyer, Célia; Gaudinat, Arnaud; Hersh, William; Geissbuhler, Antoine

    2007-01-01

    Medical institutions produce ever-increasing amounts of diverse information. The digital form makes these data available for use beyond a single patient. Images are no exception. However, little is known about how medical professionals search for visual medical information and how they want to use it outside the context of a single patient. This article analyzes ten months of usage log files of the Health on the Net (HON) medical media search engine. Keywords were extracted from all queries, and the most frequent terms and subjects were identified. The dataset required substantial pre-processing; problems included national character sets, spelling errors, and the use of terms in several languages. The results show that media search, particularly for images, was frequently used. The most common queries were for general concepts (e.g., heart, lung). To define realistic information needs for the ImageCLEFmed challenge evaluation (Cross Language Evaluation Forum medical image retrieval), we used frequent queries that were still specific enough to cover at least two of the three axes of modality, anatomic region, and pathology. Several research groups have evaluated their image retrieval algorithms based on these defined topics.
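
    The keyword-extraction step described above can be sketched in a few lines of standard-library Python: lower-case and tokenize each logged query, then count term frequencies (the regex and example queries are illustrative assumptions, not HON's actual pipeline).

```python
import re
from collections import Counter

def frequent_terms(log_lines, top_n=3):
    """Extract the most frequent query terms from raw usage-log lines.
    Lower-casing folds 'Heart' and 'heart' together; the token pattern
    accepts accented letters to cope with multilingual queries."""
    counter = Counter()
    for line in log_lines:
        counter.update(re.findall(r"[a-zàâçéèêëîïôûùüÿ]+", line.lower()))
    return [term for term, _ in counter.most_common(top_n)]
```

    A real pipeline would add the pre-processing the abstract mentions (charset normalization, spelling correction, language detection) before counting.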

  6. Interactive Exploration for Image Retrieval

    Directory of Open Access Journals (Sweden)

    Jérôme Fournier

    2005-08-01

    Full Text Available We present a new version of our content-based image retrieval system RETIN. It is based on adaptive quantization of the color space, together with new features aiming at representing the spatial relationship between colors. Color analysis is also extended to texture. Using these powerful indexes, an original interactive retrieval strategy is introduced. The process is based on two steps for handling the retrieval of very large image categories. First, a controlled exploration method of the database is presented. Second, a relevance feedback method based on statistical learning is proposed. All the steps are evaluated by experiments on a generalist database.

  7. Image Searching on the Excite Web Search Engine.

    Science.gov (United States)

    Goodrum, Abby; Spink, Amanda

    2001-01-01

    Examines visual information needs as expressed in users' Web image queries on the Excite search engine. Discusses metadata; content-based image retrieval; user interaction with images; terms per query; term frequency; and implications for the development of models for visual information retrieval and for the design of Web search engines.…

  8. Image migration: measured retrieval rates

    Science.gov (United States)

    Witt, Robert M.

    2007-03-01

    When the Indianapolis Veterans Affairs Medical Center changed Picture Archiving and Communication Systems (PACS) vendors, we chose "on demand" image migration as the more cost-effective solution. The legacy PACS stores the image data on optical disks in multi-platter jukeboxes. The estimated size of the legacy image data is about 5 terabytes, containing studies from ~1997 to ~2003. Both the legacy and the new PACS support a manual DICOM query/retrieve. We implemented workflow rules to determine when to fetch the relevant priors from the legacy PACS. When a patient presents for a new radiology study, we used the following rules to initiate the manual DICOM query/retrieve: for general radiography we retrieved the two most recent prior examinations, and for the modalities MR and CT we retrieved the clinically relevant prior examinations. We monitored the number of studies retrieved each week over about a 12-month period. For our facility, which performs about 70,000 radiology examinations per year, we observed an essentially constant retrieval rate of slightly less than 50 studies per week. One explanation for what may be considered an anomalous result may be that we are a tertiary care facility and a teaching hospital.
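
    The workflow rules stated in the abstract can be encoded directly. The sketch below follows those rules, except that "clinically relevant priors" for MR/CT is approximated as same-modality priors, which is an assumption made here for illustration only.

```python
def priors_to_fetch(new_modality, prior_studies):
    """Decide which legacy-PACS priors to query/retrieve for a new study.

    `prior_studies` is a list of (study_date, modality) tuples, newest
    first. Rules per the report: general radiography -> two most recent
    priors; MR/CT -> clinically relevant priors (approximated here as
    priors of the same modality — an illustrative assumption)."""
    if new_modality in ("MR", "CT"):
        return [s for s in prior_studies if s[1] == new_modality]
    return prior_studies[:2]
```

    A workflow engine would call this when the new order arrives, then issue one DICOM C-MOVE per selected prior.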

  9. Intelligent image retrieval based on radiology reports

    Energy Technology Data Exchange (ETDEWEB)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar [University Medical Center Freiburg, Department of Diagnostic Radiology, Freiburg (Germany); Daumke, Philipp; Simon, Kai [Averbis GmbH, Freiburg (Germany)

    2012-12-15

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)

  10. Blueprint of a Cross-Lingual Web Retrieval Collection

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.; van Zwol, R.

    2005-01-01

    The world wide web is a natural setting for cross-lingual information retrieval; web content is essentially multilingual, and web searchers are often polyglots. Even though English has emerged as the lingua franca of the web, planning for a business trip or holiday usually involves digesting pages

  11. Intelligent image retrieval based on radiology reports.

    Science.gov (United States)

    Gerstmair, Axel; Daumke, Philipp; Simon, Kai; Langer, Mathias; Kotter, Elmar

    2012-12-01

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. Radiology reports can now be analysed using sophisticated natural language-processing techniques. Semantic text analysis is backed by terminology of a radiological lexicon. The search engine includes results for synonyms, abbreviations and compositions. Key images are automatically extracted from radiology reports and fetched from PACS. Such systems help to find diagnoses, improve report quality and save time.
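
    The lexicon-backed expansion of synonyms and abbreviations described above can be sketched as a simple lookup. The dictionary entries here are illustrative toy data, not the actual radiological lexicon used by the system.

```python
# Toy lexicon: canonical term -> synonyms/abbreviations (illustrative only;
# the real system uses a comprehensive radiological lexicon).
SYNONYMS = {
    "myocardial infarction": {"mi", "heart attack"},
    "computed tomography": {"ct"},
}

def expand_query(term):
    """Expand a query term with its synonyms and abbreviations so the
    search engine also finds reports using alternative wording."""
    term = term.lower()
    expanded = {term} | SYNONYMS.get(term, set())
    # If the term is itself a synonym, pull in the canonical term
    # and all of its other synonyms.
    for canon, syns in SYNONYMS.items():
        if term in syns:
            expanded |= {canon} | syns
    return expanded
```

    The expanded set would then be OR-combined into the search-engine query.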

  12. The Use of QBIC Content-Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Ching-Yi Wu

    2004-03-01

    Full Text Available The fast increase in digital images has drawn increasing attention to the development of image retrieval technologies. Content-based image retrieval (CBIR) has become an important approach to retrieving image data from a large collection. This article reports our results on the use and a user study of a CBIR system. Thirty-eight students majoring in art and design were invited to use IBM’s QBIC (Query by Image Content) system through the Internet. Data on their information needs, behaviors, and retrieval strategies were collected through an in-depth interview, observation, and a self-described think-aloud process. The important conclusions are: (1) There are four types of information needs for image data: implicit, inspirational, ever-changing, and purposive; the type of need may change during the retrieval process. (2) CBIR is suitable for the example-type query, text retrieval is suitable for the scenario-type query, and image browsing is suitable for the symbolic query. (3) Unlike in text retrieval, a detailed description of the query condition may more easily lead to retrieval failure. (4) CBIR is suitable for domain-specific image collections, not for images on the World Wide Web. [Article content in Chinese]

  13. Language engineering for the Semantic Web: a digital library for endangered languages. Endangered languages, Ontology, Digital library, Multimedia, EMELD, Intelligent querying and retrieval, ImageSpace

    Directory of Open Access Journals (Sweden)

    Lu Shiyong

    2004-01-01

    Full Text Available In this paper, we describe the effort undertaken at Wayne State University to preserve endangered languages using state-of-the-art information technologies. In particular, we discuss the issues involved in such an effort and present the architecture of a distributed digital library for endangered languages, which will contain various data on endangered languages in the form of text, image, video and audio, and will include advanced tools for intelligent cataloguing, indexing, searching and browsing information on languages and language analysis. We use various Semantic Web technologies such as XML, OLAC and ontologies so that our digital library becomes a useful linguistic resource on the Semantic Web.

  14. SEMANTIC IMAGE RETRIEVAL USING MULTIPLE FEATURES

    OpenAIRE

    Nishant Singh; Shiv Ram Dubey; Pushkar Dixit; Jay Prakash Gupta

    2012-01-01

    In Content Based Image Retrieval (CBIR), problems such as recognizing similar images, the need for databases, the semantic gap, and retrieving the desired images from huge collections are the key issues to improve. A CBIR system analyzes the image content for indexing, management, extraction and retrieval via low-level features such as color, texture and shape. To achieve higher semantic performance, recent systems seek to combine the low-level features of images with high-level...

  15. Towards an Intelligent Possibilistic Web Information Retrieval Using Multiagent System

    Science.gov (United States)

    Elayeb, Bilel; Evrard, Fabrice; Zaghdoud, Montaceur; Ahmed, Mohamed Ben

    2009-01-01

    Purpose: The purpose of this paper is to make a scientific contribution to web information retrieval (IR). Design/methodology/approach: A multiagent system for web IR is proposed based on new technologies: Hierarchical Small-Worlds (HSW) and Possibilistic Networks (PN). This system is based on a possibilistic qualitative approach which extends the…

  16. Transformation invariant image indexing and retrieval for image databases

    NARCIS (Netherlands)

    Gevers, Th.; Smeulders, A.W.M.

    1994-01-01

    This paper presents a novel design of an image database system which supports storage, indexing and retrieval of images by content. The image retrieval methodology is based on the observation that images can be discriminated by the presence of image objects and their spatial relations. Images in the

  17. Web Information Retrieval System for Technological Forecasting

    OpenAIRE

    Montiel, Raúl; Lezcano Airaldi, Luis; Favret, Fabián; Eckert, Karina

    2017-01-01

    Technological Forecasting and Competitive Intelligence are two different disciplines that, used together, provide the organizations with an invaluable analytic tool for the environment and the competing companies’ behavior. This kind of technology can be used for extracting useful information to make strategic decisions. This paper describes a Web mining system which gathers the users’ information requirements through a series of guided questions, constructs various search keys with the answe...

  18. WISE: a content-based Web image search engine

    Science.gov (United States)

    Qiu, Guoping; Palmer, R. D.

    2000-12-01

    This paper describes the development of a prototype of a Web Image Search Engine (WISE), which allows users to search for images on the WWW by image examples, in a similar fashion to current search engines that allow users to find related Web pages using text matching on keywords. The system takes an image specified by the user and finds similar images available on the WWW by comparing the image contents using low-level image features. The current version of the WISE system consists of a graphical user interface (GUI), an autonomous Web agent, an image comparison program and a query processing program. The user specifies the URL of a target image and the URL of the starting Web page from which the program will 'crawl' the Web, finding images along the way and retrieving those satisfying certain constraints. The program then computes the visual features of the retrieved images and performs content-based comparison with the target image. The results of the comparison are sorted according to a certain similarity measure and, along with thumbnails and information associated with the images, such as the URLs and image size, are written to an HTML page. The resultant page is stored on a Web server and is output to the user's Web browser once the search process is complete. A unique feature of the current version of WISE is its image content comparison algorithm. It is based on the comparison of image palettes and is therefore very efficient in retrieving images in one of the two universally accepted image formats on the Web, GIF. In GIF images, the color palette is contained in the header, so it is only necessary to retrieve the header information rather than the whole image, making the process very efficient.
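
    The palette-comparison idea can be sketched as a distance between two GIF color tables. The abstract does not give WISE's actual formula; a symmetric average nearest-color distance is one plausible stand-in.

```python
def palette_distance(pal_a, pal_b):
    """Symmetric average nearest-colour distance between two RGB palettes
    (lists of (r, g, b) tuples). This exact measure is an assumption;
    WISE's published algorithm may differ."""
    def one_way(a, b):
        total = 0.0
        for (r1, g1, b1) in a:
            # L1 distance to the closest colour in the other palette
            total += min(abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
                         for (r2, g2, b2) in b)
        return total / len(a)
    return 0.5 * (one_way(pal_a, pal_b) + one_way(pal_b, pal_a))

pal1 = [(0, 0, 0), (255, 255, 255)]
pal2 = [(10, 0, 0), (255, 255, 255)]
d = palette_distance(pal1, pal2)
```

    Because a GIF's global color table sits in the file header, such a comparison only needs the first few hundred bytes of each image, which is the efficiency argument made above.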

  19. World Wide Web Based Image Search Engine Using Text and Image Content Features

    Science.gov (United States)

    Luo, Bo; Wang, Xiaogang; Tang, Xiaoou

    2003-01-01

    Using both text and image content features, a hybrid image retrieval system for the World Wide Web is developed in this paper. We first use a text-based image meta-search engine to retrieve images from the Web based on the text information on the image host pages to provide an initial image set. Because of the high-speed, low-cost nature of the text-based approach, we can easily retrieve a broad coverage of images with a high recall rate and a relatively low precision. An image content based ordering is then performed on the initial image set. All the images are clustered into different folders based on the image content features. In addition, the images can be re-ranked by the content features according to the user feedback. Such a design makes it truly practical to use both text and image content for image retrieval over the Internet. Experimental results confirm the efficiency of the system.

  20. Image Searching across the Web.

    Science.gov (United States)

    Pack, Thomas

    2002-01-01

    Discusses how to find digital images on the Web. Considers images and copyright; provides an overview of the search capabilities of six search engines, including AltaVista, Google, AllTheWeb.com, Ditto.com, Picsearch, and Lycos; and describes specialized image search engines. (LRW)

  1. Introduction to Web Information Retrieval: A User Perspective

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 7, Issue 6. Introduction to Web Information Retrieval: A User Perspective - How to get ... Srinath Srinivasa, Pramod Chandra P Bhatt. Indian Institute of Information Technology, International Technology Park, Whitefield Road, Bangalore 560066, India.

  2. Large-scale tattoo image retrieval

    OpenAIRE

    Manger, Daniel

    2012-01-01

    In current biometric-based identification systems, tattoos and other body modifications have been shown to provide a useful source of information. Besides manual category label assignment, approaches utilizing state-of-the-art content-based image retrieval (CBIR) techniques have become increasingly popular. While local feature-based similarities of tattoo images achieve excellent retrieval accuracy, scalability to large image databases can be addressed with the popular bag-of-word model. In this p...

  3. MR imaging of carotid webs

    Energy Technology Data Exchange (ETDEWEB)

    Boesen, Mari E. [University of Calgary, Department of Biomedical Engineering, Calgary (Canada); Foothills Medical Centre, Seaman Family MR Research Centre, Calgary (Canada); Eswaradass, Prasanna Venkatesan; Singh, Dilip; Mitha, Alim P.; Menon, Bijoy K. [University of Calgary, Department of Clinical Neurosciences, Calgary (Canada); Foothills Medical Centre, Calgary Stroke Program, Calgary (Canada); Goyal, Mayank [Foothills Medical Centre, Calgary Stroke Program, Calgary (Canada); University of Calgary, Department of Radiology, Calgary (Canada); Frayne, Richard [Foothills Medical Centre, Seaman Family MR Research Centre, Calgary (Canada); University of Calgary, Hotchkiss Brain Institute, Calgary (Canada)

    2017-04-15

    We propose a magnetic resonance (MR) imaging protocol for the characterization of carotid web morphology, composition, and vessel wall dynamics. The purpose of this case series was to determine the feasibility of imaging carotid webs with MR imaging. Five patients diagnosed with carotid web on CT angiography were recruited to undergo a 30-min MR imaging session. MR angiography (MRA) images of the carotid artery bifurcation were acquired. Multi-contrast fast spin echo (FSE) images were acquired axially about the level of the carotid web. Two types of cardiac phase resolved sequences (cineFSE and cine phase contrast) were acquired to visualize the elasticity of the vessel wall affected by the web. Carotid webs were identified on MRA in 5/5 (100%) patients. Multi-contrast FSE revealed vessel wall thickening and cineFSE demonstrated regional changes in distensibility surrounding the webs in these patients. Our MR imaging protocol enables an in-depth evaluation of patients with carotid webs: morphology (by MRA), composition (by multi-contrast FSE), and wall dynamics (by cineFSE). (orig.)

  4. Multi region based image retrieval system

    Indian Academy of Sciences (India)

    with images. CBIR, also referred to as Query By Image Content (QBIC), is the application of automatic retrieval of images from a database based on visual content such as colour, texture or shape. CBIR exploits techniques from computer vision, machine learning, database systems, data mining, information theory, statistics ...

  5. Web User Profile Using XUL and Information Retrieval Techniques

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2008-12-01

    Full Text Available This paper presents the importance of the user profile in information retrieval, information filtering and recommender systems using explicit and implicit feedback. A Firefox extension (based on XUL) used for gathering the data needed to infer a web user profile, and an example file with collected data, are presented. An algorithm for creating and updating the user profile while keeping track of a fixed number k of subjects of interest is also presented.
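
    The profile-update idea can be sketched as follows. The abstract names the top-k mechanism but not its weighting scheme, so simple additive weights are assumed here.

```python
def update_profile(profile, subject, weight, k=5):
    """Add evidence of interest in `subject` and keep only the k
    strongest subjects of interest. Additive weighting is an
    assumption; the paper's actual update rule is not given."""
    profile = dict(profile)                     # don't mutate caller's dict
    profile[subject] = profile.get(subject, 0.0) + weight
    top = sorted(profile.items(), key=lambda kv: kv[1], reverse=True)[:k]
    return dict(top)

p = {}
for subject, weight in [("music", 1.0), ("sports", 0.5), ("music", 1.0)]:
    p = update_profile(p, subject, weight, k=1)
```

    With k=1 only the single strongest subject survives each update, which is the truncation behavior the paper's fixed-k profile implies.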

  6. Latest Trends in Web Information Retrieval and in SEO Factors

    Directory of Open Access Journals (Sweden)

    Carlos Gonzalo

    2015-07-01

    Full Text Available Latest trends in web information retrieval and in SEO factors increasingly focus on signals from users, such as the profile of who performs the search and the interpretation of user intent. The objective of search engines is twofold: focusing to the maximum on users and making the composition of the search engine result page (SERP) ever less predictable, and combating spam.

  7. Improving life sciences information retrieval using semantic web technology.

    Science.gov (United States)

    Quan, Dennis

    2007-05-01

    The ability to retrieve relevant information is at the heart of every aspect of research and development in the life sciences industry. Information is often distributed across multiple systems and recorded in a way that makes it difficult to piece together the complete picture. Differences in data formats, naming schemes and network protocols amongst information sources, both public and private, must be overcome, and user interfaces not only need to be able to tap into these diverse information sources but must also assist users in filtering out extraneous information and highlighting the key relationships hidden within an aggregated set of information. The Semantic Web community has made great strides in proposing solutions to these problems, and many efforts are underway to apply Semantic Web techniques to the problem of information retrieval in the life sciences space. This article gives an overview of the principles underlying a Semantic Web-enabled information retrieval system: creating a unified abstraction for knowledge using the RDF semantic network model; designing semantic lenses that extract contextually relevant subsets of information; and assembling semantic lenses into powerful information displays. Furthermore, concrete examples of how these principles can be applied to life science problems including a scenario involving a drug discovery dashboard prototype called BioDash are provided.

  8. Simultaneous binary hash and features learning for image retrieval

    Science.gov (United States)

    Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.

    2016-05-01

    Content-based image retrieval systems have plenty of applications in the modern world, the most important being image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task; the main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to a hash-value space while trying to preserve as much of the semantic image content as possible. We use a deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework for data-dependent image hashing presented in the paper is based on the use of two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared to other state-of-the-art methods.
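
    The feature-to-binary-code mapping with similarity preservation can be illustrated without the learned networks: random-hyperplane (LSH-style) hashing is a classical, non-learned stand-in for the autoencoder stage the paper describes, shown here only to make the hash-space idea concrete.

```python
import random

def make_hasher(dim, n_bits, seed=0):
    """Random-hyperplane hashing: each bit is the sign of a projection
    onto a random direction. A stand-in for the paper's learned
    autoencoder mapping, not a reimplementation of it."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def hash_code(x):
        return tuple(1 if sum(w * v for w, v in zip(p, x)) >= 0 else 0
                     for p in planes)
    return hash_code

def hamming(a, b):
    """Number of differing bits between two hash codes."""
    return sum(x != y for x, y in zip(a, b))

h = make_hasher(dim=4, n_bits=16)
a = h([1.0, 0.9, 0.0, 0.1])
b = h([1.0, 1.0, 0.0, 0.0])     # feature vector similar to a
c = h([-1.0, -0.9, 0.0, -0.1])  # feature vector opposite to a
```

    Similar feature vectors land on nearby codes, so retrieval reduces to cheap Hamming-distance comparisons; the paper's contribution is learning this mapping jointly with the features instead of drawing it at random.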

  9. Content based image retrieval using unclean positive examples.

    Science.gov (United States)

    Zhang, Jun; Ye, Lei

    2009-10-01

    Conventional content-based image retrieval (CBIR) schemes employing relevance feedback may suffer from some problems in practical applications. First, most ordinary users would like to complete their search in a single interaction, especially on the web. Second, it is time consuming and difficult to label a lot of negative examples with sufficient variety. Third, ordinary users may introduce some noisy examples into the query. This correspondence explores solutions to a new issue: image retrieval using unclean positive examples. In the proposed scheme, multiple feature distances are combined to obtain image similarity using classification technology. To handle the noisy positive examples, a new two-step strategy is proposed by incorporating the methods of data cleaning and a noise-tolerant classifier. The extensive experiments carried out on two different real image collections validate the effectiveness of the proposed scheme.
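
    The data-cleaning step of the two-step strategy can be sketched as outlier removal among the positive examples. The centroid-distance criterion and the keep ratio below are assumptions; the abstract does not specify the cleaning method.

```python
def clean_positives(examples, keep_ratio=0.8):
    """Drop the positive examples farthest from the centroid of the
    example set, treating them as likely noise. A simplification of
    the paper's data-cleaning step (exact method not stated)."""
    dim = len(examples[0])
    centroid = [sum(e[i] for e in examples) / len(examples)
                for i in range(dim)]
    def dist(e):
        return sum((e[i] - centroid[i]) ** 2 for i in range(dim)) ** 0.5
    ranked = sorted(examples, key=dist)       # nearest to centroid first
    keep = max(1, int(len(examples) * keep_ratio))
    return ranked[:keep]

pos = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
cleaned = clean_positives(pos, keep_ratio=0.8)  # (5.0, 5.0) is the noisy one
```

    The surviving examples would then train the noise-tolerant classifier in step two.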

  10. Toward privacy-preserving JPEG image retrieval

    Science.gov (United States)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

    This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
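
    The blockwise local-variance feature and its comparison can be sketched in plaintext. The real scheme extracts these values from encrypted JPEG data without seeing the content; this sketch only illustrates the feature itself, and the 2x2 block size is chosen for brevity.

```python
def block_variances(pixels, block=2):
    """Variance of each non-overlapping block of a grayscale image
    (list of pixel rows). The actual scheme computes equivalents over
    *encrypted* data; this is the plaintext analogue."""
    h, w = len(pixels), len(pixels[0])
    feats = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            vals = [pixels[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            mean = sum(vals) / len(vals)
            feats.append(sum((v - mean) ** 2 for v in vals) / len(vals))
    return feats

def similarity_distance(f1, f2):
    """L1 distance between two variance feature vectors."""
    return sum(abs(a - b) for a, b in zip(f1, f2))

flat = [[10, 10], [10, 10]]   # uniform block: zero variance
edge = [[0, 20], [0, 20]]     # high-contrast block: high variance
```

    Low distance between feature vectors means visually similar texture, which is what the server ranks encrypted database images by.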

  11. Multi region based image retrieval system

    Indian Academy of Sciences (India)

    /fulltext/sadh/039/02/0333-0344 ... The paramount challenge is to translate or convert a visual query from a human and find similar images or videos in large digital collection. In this paper, a technique of region based image retrieval, a branch ...

  12. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available Consumer behavior has been observed to be largely influenced by image data, given the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has gradually been replaced by easily accessible image data. The importance of image data has shown steady growth in application orientation for the business domain with the advent of different image-capturing devices and social media. This paper describes a methodology of feature extraction by an image binarization technique for enhancing identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in all. It has outclassed the state-of-the-art techniques in performance measures and has shown statistical significance.
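
    The binarization step can be sketched as mean-thresholding followed by a simple count-based descriptor. The row-count feature below is an illustrative stand-in; the paper's exact descriptor is not given in the abstract.

```python
def binarize_features(pixels):
    """Binarize a grayscale image (list of pixel rows) at its mean
    intensity, then return row-wise foreground counts as a feature
    vector. The count descriptor is an assumption for illustration."""
    flatvals = [v for row in pixels for v in row]
    thresh = sum(flatvals) / len(flatvals)          # global mean threshold
    binary = [[1 if v >= thresh else 0 for v in row] for row in pixels]
    return [sum(row) for row in binary]             # one count per row

img = [[200, 200, 10],
       [200,  10, 10],
       [ 10,  10, 10]]
feats = binarize_features(img)
```

    Matching then reduces to comparing these compact integer vectors instead of full pixel data, which is what makes binarization-based features cheap for retrieval.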

  13. Structuring a sharded image retrieval database

    Science.gov (United States)

    Liang, Eric; Zakhor, Avideh

    2013-03-01

    In previous work we described an approach to localization based on image retrieval. Specifically, we assume coarse localization based on GPS or cell tower and refine it by matching a user-generated image query to a geotagged image database. We partition the image dataset into overlapping cells, each of which contains its own approximate nearest-neighbors search structure. By combining search results from multiple cells as specified by coarse localization, we have demonstrated superior retrieval accuracy on a large image database covering downtown Berkeley. In this paper, we investigate how to select the parameters of such a system, e.g., the size and spacing of the cells, and show how the combination of many cells outperforms a single search structure over a large region.
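
    The overlapping-cell lookup and result merging can be sketched as follows. The cell size, spacing, and scoring are illustrative parameters, not the values used in the paper.

```python
def covering_cells(x, y, cell_size=100.0, overlap=50.0):
    """Indices of the overlapping cells containing point (x, y).
    Cells start every `overlap` units and span `cell_size`, so each
    point lies in several cells (parameter values are illustrative)."""
    cells = []
    i = 0
    while i * overlap <= x:
        if x < i * overlap + cell_size:
            j = 0
            while j * overlap <= y:
                if y < j * overlap + cell_size:
                    cells.append((i, j))
                j += 1
        i += 1
    return cells

def merged_search(point, cell_results):
    """Union of per-cell candidate lists, best (lowest) score first.
    cell_results maps cell index -> [(image_id, score), ...]."""
    seen = {}
    for cell in covering_cells(*point):
        for image_id, score in cell_results.get(cell, []):
            if image_id not in seen or score < seen[image_id]:
                seen[image_id] = score
    return sorted(seen, key=seen.get)

cells = covering_cells(120.0, 30.0)
cell_results = {(1, 0): [("a", 0.3), ("b", 0.9)],
                (2, 0): [("a", 0.2), ("c", 0.5)]}
merged = merged_search((120.0, 30.0), cell_results)
```

    In the real system each cell holds its own approximate nearest-neighbors index; here the per-cell results are given directly so only the merging logic is shown.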

  14. A specialized framework for data retrieval Web applications

    Energy Technology Data Exchange (ETDEWEB)

    Jerzy Nogiec; Kelley Trombly-Freytag; Dana Walbridge

    2004-07-12

    Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.

  15. A Specialized Framework for Data Retrieval Web Applications

    Directory of Open Access Journals (Sweden)

    Jerzy Nogiec

    2005-06-01

    Full Text Available Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.

  16. Image and video search engine for the World Wide Web

    Science.gov (United States)

    Smith, John R.; Chang, Shih-Fu

    1997-01-01

    We describe a visual information system prototype for searching for images and videos on the World-Wide Web. New visual information in the form of images, graphics, animations and videos is being published on the Web at an incredible rate. However, cataloging this visual data is beyond the capabilities of current text-based Web search engines. In this paper, we describe a complete system by which visual information on the Web is (1) collected by automated agents, (2) processed in both text and visual feature domains, (3) catalogued and (4) indexed for fast search and retrieval. We introduce an image and video search engine which utilizes both text-based navigation and content-based technology for searching visually through the catalogued images and videos. Finally, we provide an initial evaluation based upon the cataloging of over one half million images and videos collected from the Web.

  17. Image Information Retrieval: An Overview of Current Research

    OpenAIRE

    Goodrum, Abby A.

    2000-01-01

    This paper provides an overview of current research in image information retrieval and provides an outline of areas for future research. The approach is broad and interdisciplinary and focuses on three aspects of image research (IR): text-based retrieval, content-based retrieval, and user interactions with image information retrieval systems. The review concludes with a call for image retrieval evaluation studies similar to TREC.

  18. Contextual Distance Refining for Image Retrieval

    KAUST Repository

    Islam, Almasri

    2014-09-16

    Recently, a number of methods have been proposed to improve image retrieval accuracy by capturing context information. These methods try to compensate for the fact that a visually less similar image might be more relevant because it depicts the same object. We propose a new, quick method for refining any pairwise distance metric: it works by iteratively discovering the object in the image from the most similar images and then refining the distance metric accordingly. Tests show that our technique improves over the state of the art in terms of accuracy on the MPEG7 dataset.
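
    One way to make the neighbor-based refinement concrete: replace each distance d(i, j) by an average over i's nearest neighbors, then symmetrise. The abstract gives no formula, so this particular update rule is an assumption in the spirit of contextual dissimilarity measures, not the paper's method.

```python
def refine_distances(D, k=2, iters=1):
    """Contextual refinement of a pairwise distance matrix D (list of
    lists): d(i, j) becomes the average of d(n, j) over the k nearest
    neighbours n of i (including i itself), then the matrix is
    symmetrised. The update rule is assumed, not taken from the paper."""
    n = len(D)
    for _ in range(iters):
        new = [[0.0] * n for _ in range(n)]
        for i in range(n):
            nbrs = sorted(range(n), key=lambda m: D[i][m])[:k]
            for j in range(n):
                new[i][j] = sum(D[m][j] for m in nbrs) / k
        D = [[0.5 * (new[a][b] + new[b][a]) for b in range(n)]
             for a in range(n)]
    return D

D = [[0.0, 1.0, 4.0],
     [1.0, 0.0, 4.0],
     [4.0, 4.0, 0.0]]
refined = refine_distances(D)
```

    Items whose neighborhoods agree (0 and 1 above) stay close, while an item with a different neighborhood stays far, which is the effect contextual refinement is after.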

  19. Managing biomedical image metadata for search and retrieval of similar images.

    Science.gov (United States)

    Korenblum, Daniel; Rubin, Daniel; Napel, Sandy; Rodriguez, Cesar; Beaulieu, Chris

    2011-08-01

    Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM) to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standard-based metadata files using Web service and parses and stores the metadata in a relational database allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics like disease prevalences. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.

  20. Printing images from the Web

    Science.gov (United States)

    Eschbach, Reiner

    2000-12-01

Images have become ubiquitous: virtually every personal computer and every Internet Web site contains at least some images. Professional as well as amateur artists display their work on the net, and large image collections exist, spanning all aspects of art, hobby, and daily life. The images vary widely in content, but more importantly, they also vary widely in general image quality attributes. Some of the quality variation is caused by the actual image creation or Internet posting process, whereas other quality variations are caused by the lack of a defining image data description, such as a color-space definition. This paper describes problems and possible solutions associated with the printing of images from unknown sources.

  1. A Survey on the Image Retrieval via Site Operator

    Directory of Open Access Journals (Sweden)

    Saleh Rahimi

    2015-12-01

Full Text Available The purpose of the present study is to investigate the impact of image indexing on optimizing image retrieval via the site operator. Using a quasi-experimental method, 100 images were each uploaded 9 times with concept-based characteristics on iiproject.ir. The analysis covers the images retrieved via the site operator: of the 900 uploaded images, 151 were retrieved. The minimum number of retrieved images was related to "image titles" and the maximum to the images titled with a Q code. Chi-square statistics showed that the number of images retrieved differed across the various codes. The best ranking was related to "image title" and the weakest to "image caption in Farsi". The average rankings of the images retrieved in the 9 groups were different.

  2. Retrieval and classification of food images.

    Science.gov (United States)

    Farinella, Giovanni Maria; Allegra, Dario; Moltisanti, Marco; Stanco, Filippo; Battiato, Sebastiano

    2016-10-01

Automatic food understanding from images is an interesting challenge with applications in different domains. In particular, food intake monitoring is becoming more and more important because of the key role it plays in health and market economies. In this paper, we address the study of food image processing from the perspective of Computer Vision. As a first contribution we present a survey of the studies in the context of food image processing, from the early attempts to the current state-of-the-art methods. Since retrieval and classification engines able to work on food images are required to build automatic systems for diet monitoring (e.g., to be embedded in wearable cameras), we focus our attention on the representation of food images, because it plays a fundamental role in the understanding engines. Food retrieval and classification is a challenging task, since food presents high variability and intrinsic deformability. To properly study the peculiarities of different image representations we propose the UNICT-FD1200 dataset. It is composed of 4754 food images of 1200 distinct dishes acquired during real meals. Each food plate is acquired multiple times, and the overall dataset presents both geometric and photometric variability. The images of the dataset have been manually labeled considering 8 categories: Appetizer, Main Course, Second Course, Single Course, Side Dish, Dessert, Breakfast, Fruit. We have performed tests employing different state-of-the-art representations to assess their performance on the UNICT-FD1200 dataset. Finally, we propose a new representation based on the perceptual concept of Anti-Textons, which is able to encode spatial information between Textons, outperforming other representations in the context of food retrieval and classification. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Polar Embedding for Aurora Image Retrieval.

    Science.gov (United States)

    Yang, Xi; Gao, Xinbo; Tian, Qi

    2015-11-01

Exploring multimedia techniques to assist scientists in their research is an interesting and meaningful topic. In this paper, we focus on large-scale aurora image retrieval by leveraging the bag-of-visual-words (BoVW) framework. To refine the unsuitable representation and improve the retrieval performance, the BoVW model is modified by embedding polar information. The superiority of the proposed polar embedding method lies in two aspects. On the one hand, a polar meshing scheme is conducted to determine the interest points, which is more suitable for images captured by a circular fisheye lens. Especially for aurora images, the extracted polar scale-invariant feature transform (polar-SIFT) feature also reflects the geomagnetic longitude and latitude, and thus facilitates further data analysis. On the other hand, a binary polar deep local binary pattern (polar-DLBP) descriptor is proposed to enhance the discriminative power of visual words. Together with the 64-bit polar-SIFT code obtained via Hamming embedding, multifeature indexing is performed to reduce the impact of false positive matches. Extensive experiments are conducted on a large-scale aurora image data set. The experimental results indicate that the proposed method improves the retrieval accuracy significantly with acceptable efficiency and memory cost. In addition, the effectiveness of the polar-SIFT scheme and the polar-DLBP integration is demonstrated separately.
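As an illustration of the polar meshing idea, the sketch below places interest points on concentric rings around the image center, which suits circular fisheye imagery; the ring and sector counts are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def polar_mesh_points(h, w, n_rings=8, n_sectors=24):
    """Place interest points on a polar grid centered on the image.
    Ring and sector counts are illustrative, not the paper's values."""
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx)
    pts = []
    for i in range(1, n_rings + 1):
        r = r_max * i / (n_rings + 1)          # radii strictly inside the image
        for j in range(n_sectors):
            theta = 2 * np.pi * j / n_sectors  # evenly spaced angles
            pts.append((cy + r * np.sin(theta), cx + r * np.cos(theta)))
    return np.array(pts)

pts = polar_mesh_points(512, 512)  # one (y, x) point per ring/sector cell
```

A local descriptor (such as the paper's polar-SIFT) would then be computed at each of these points instead of on a rectangular grid.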

  4. EFFICIENT RETRIEVAL TECHNIQUES FOR IMAGES USING ENHANCED UNIVARIATE TRANSFORMATION APPROACH

    OpenAIRE

    DR.S.P.VICTOR,; MRS.V.NARAYANI,; MR.S.RAJKUMAR

    2010-01-01

    Image mining is a process to find valid, useful, and understandable knowledge from large image sets or image databases. Image mining combines the areas of content-based image retrieval, image understanding, data mining and databases. Image mining deals with the extraction of knowledge, image data relationship, or other patterns not explicitly stored in the images. It uses methods from computer vision, image processing, image retrieval, data mining, machine learning, database, and artificial i...

  5. Retrieving high-resolution images over the Internet from an anatomical image database

    Science.gov (United States)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100KB for browser-viewable rendered images, to 1GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures, and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch mode.

  6. A Survey on the Image Retrieval via Site Operator

    OpenAIRE

    Saleh Rahimi; Mehran Farhadi

    2015-01-01

The purpose of the present study is to investigate the impact of image indexing on optimizing image retrieval via the site operator. Using a quasi-experimental method, 100 images were each uploaded 9 times with concept-based characteristics on iiproject.ir. The analysis covers the images retrieved via the site operator: of the 900 uploaded images, 151 were retrieved. The minimum number of retrieved images was related to "image titles" a...

  7. An approach to image retrieval for image databases

    NARCIS (Netherlands)

    Gevers, T.; Smeulders, A.W.M.

    1993-01-01

    In this paper, a method is discussed to store and retrieve images efficiently from an image database on the basis of the data structure called E() representation. The E() representation is a spatial knowledge representation preserving the spatial information between objects embedded in symbolic

  8. A novel image retrieval algorithm based on PHOG and LSH

    Science.gov (United States)

    Wu, Hongliang; Wu, Weimin; Peng, Jiajin; Zhang, Junyuan

    2017-08-01

PHOG can describe the local shape of an image and its spatial relationships. Using the PHOG algorithm to extract image features has achieved good results in image recognition, retrieval, and other applications. In recent years, the locality sensitive hashing (LSH) algorithm has outperformed traditional algorithms on large-scale data in solving approximate nearest neighbor problems. This paper presents a novel image retrieval algorithm based on PHOG and LSH. First, we use PHOG to extract the feature vector of the image; then we use L different LSH hash tables to reduce the dimensionality of the PHOG feature to index values that map into different buckets; finally, we extract the images in the matching buckets and perform a second-stage retrieval using Manhattan distance. This algorithm scales to massive image retrieval, ensuring high retrieval accuracy while reducing the time complexity of the retrieval.
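The bucket-then-re-rank pipeline can be sketched as follows, with random vectors standing in for actual PHOG descriptors (computing PHOG itself, a pyramid of HOG histograms, is omitted); the table count, bit count, and dimensionality are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for PHOG descriptors: one feature vector per database image.
db = rng.random((1000, 168))   # 168-d is illustrative
query = db[42].copy()          # query with a known ground-truth match

# L random-hyperplane LSH tables, k bits each (sign of random projections).
L, k = 4, 12
planes, tables = [], []
for _ in range(L):
    P = rng.standard_normal((k, db.shape[1]))
    keys = (db @ P.T > 0).astype(np.uint8)
    table = {}
    for idx, bits in enumerate(keys):
        table.setdefault(bits.tobytes(), []).append(idx)  # bucket by hash key
    planes.append(P)
    tables.append(table)

# Gather candidates from the query's buckets, then re-rank by Manhattan distance.
cands = set()
for P, table in zip(planes, tables):
    bits = (query @ P.T > 0).astype(np.uint8)
    cands.update(table.get(bits.tobytes(), []))
best = min(cands, key=lambda i: np.abs(db[i] - query).sum())
```

The hash lookup narrows the search to a small candidate set, so the exact Manhattan-distance comparison only touches a fraction of the database.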

  9. Interactive layout mechanisms for image database retrieval

    Energy Technology Data Exchange (ETDEWEB)

    MacCuish, J.; McPherson, A.; Barros, J.; Kelly, P.

    1996-01-29

    In this paper we present a user interface, CANDID Camera, for image retrieval using query-by-example technology. Included in the interface are several new layout algorithms based on multidimensional scaling techniques that visually display global and local relationships between images within a large image database. We use the CANDID project algorithms to create signatures of the images, and then measure the dissimilarity between the signatures. The layout algorithms are of two types. The first are those that project the all-pairs dissimilarities to two dimensions, presenting a many-to-many relationship for a global view of the entire database. The second are those that relate a query image to a small set of matched images for a one-to-many relationship that provides a local inspection of the image relationships. Both types are based on well-known multidimensional scaling techniques that have been modified and used together for efficiency and effectiveness. They include nonlinear projection and classical projection. The global maps are hybrid algorithms using classical projection together with nonlinear projection. We have developed several one-to-many layouts based on a radial layout, also using modified nonlinear and classical projection.

  10. Automatic medical image annotation and keyword-based image retrieval using relevance feedback

    OpenAIRE

    Ko, Byoung Chul; Lee, Jihyeon; Nam, Jae-Yeal

    2011-01-01

This paper presents novel multiple-keyword annotation for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center symmetric–local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses the confidence score that is assigned to ea...

  11. Introduction to the JASIST Special Topic Issue on Web Retrieval and Mining: A Machine Learning Perspective.

    Science.gov (United States)

    Chen, Hsinchun

    2003-01-01

    Discusses information retrieval techniques used on the World Wide Web. Topics include machine learning in information extraction; relevance feedback; information filtering and recommendation; text classification and text clustering; Web mining, based on data mining techniques; hyperlink structure; and Web size. (LRW)

  12. Research on image retrieval algorithm based on LBP and LSH

    Science.gov (United States)

    Wu, Hongliang; Wu, Weimin; Zhang, Junyuan; Peng, Jiajin

    2017-08-01

Using LBP (local binary patterns) to extract texture features has achieved good results in image recognition and retrieval. LSH (locality sensitive hashing) holds an important place in information retrieval, especially for solving the ANN (approximate nearest neighbor) problem: it has a solid theoretical basis and excellent performance in high-dimensional data spaces. Following the trend of cloud computing and Big Data, this paper proposes an image retrieval algorithm based on LBP and LSH. First, LBP is used to extract the texture feature vector of the image. Then, the LBP texture feature is reduced dimensionally and indexed into different buckets using LSH. Finally, the images corresponding to the index values in the matching bucket are extracted for a second-stage retrieval using LBP. This algorithm scales to massive image retrieval, ensuring high retrieval accuracy while reducing the time complexity.
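A minimal version of the basic 3x3 LBP texture feature used in the first step might look like this (the uniform-pattern and multi-scale variants common in practice are omitted for brevity):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 local binary pattern: threshold the 8 neighbors of each
    interior pixel against the center and read the bits as a code in
    [0, 255]; the normalized 256-bin histogram is the texture feature."""
    c = img[1:-1, 1:-1]
    # Neighbors in clockwise order starting at the top-left (bit 0).
    neighbors = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                 img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                 img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbors):
        codes |= (n >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = np.arange(25, dtype=np.uint8).reshape(5, 5)  # toy gradient image
h = lbp_histogram(img)
```

The resulting histogram would then be the vector that LSH reduces and indexes into buckets.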

  13. Survey paper on Sketch Based and Content Based Image Retrieval

    OpenAIRE

    Gaidhani, Prachi A.; Bagal, S.B.

    2015-01-01

International audience; This survey paper presents an overview of the development of Sketch Based Image Retrieval (SBIR) and Content Based Image Retrieval (CBIR) in the past few years. There has been enormous growth in the volume of images, as well as wide-ranging applications across many fields. The main attributes used to represent and index images are color, shape, texture, and spatial layout. These features are extracted to check similarity among images. Generation of special query is the main p...

  14. Platform for distributed image processing and image retrieval

    Science.gov (United States)

    Gueld, Mark O.; Thies, Christian J.; Fischer, Benedikt; Keysers, Daniel; Wein, Berthold B.; Lehmann, Thomas M.

    2003-06-01

We describe a platform for the implementation of a system for content-based image retrieval in medical applications (IRMA). To cope with the constantly evolving medical knowledge, the platform offers a flexible feature model to store and uniformly access all feature types required within a multi-step retrieval approach. A structured generation history for each feature allows the automatic identification and re-use of already computed features. The platform uses directed acyclic graphs composed of processing steps and control elements to model arbitrary retrieval algorithms. This visually intuitive, data-flow oriented representation vastly improves the interdisciplinary communication between computer scientists and physicians during the development of new retrieval algorithms. The execution of the graphs is fully automated within the platform. Each processing step is modeled as a feature transformation. Due to a high degree of system transparency, both the implementation and the evaluation of retrieval algorithms are accelerated significantly. The platform uses a client-server architecture consisting of a central database, a central job scheduler, instances of a daemon service, and clients which embed user-implemented feature transformations. Automatically distributed batch processing and distributed feature storage enable the cost-efficient use of an existing workstation cluster.

  15. A Web-Based Search Engine for Chinese Calligraphic Manuscript Images

    Science.gov (United States)

    Zhuang, Yi; Jiang, Nan; Hu, Haiyang

In this paper, we propose a novel framework for the web-based retrieval of Chinese calligraphic manuscript images, which includes two main components: 1) a Shape-Similarity (SS)-based method to effectively support retrieval over large Chinese calligraphic manuscript databases [19]; in this retrieval method, shapes of calligraphic characters are represented by their approximate contour points extracted from the character images; 2) a Composite-Distance-Tree (CD-Tree)-based high-dimensional indexing scheme to speed up retrieval efficiency. Comprehensive experiments are conducted to testify to the effectiveness and efficiency of the proposed retrieval and indexing methods, respectively.

  16. Low Quality Image Retrieval System For Generic Databases

    Directory of Open Access Journals (Sweden)

    W.A.D.N. Wijesekera

    2015-08-01

Full Text Available Abstract Content Based Image Retrieval (CBIR) systems have become the trend in image retrieval technologies, as index- or notation-based image retrieval algorithms give less efficient results under heavy image usage. These CBIR systems are mostly developed assuming the availability of high- or normal-quality images. The high prevalence of low-quality images in databases, due to the use of capture equipment of varying quality and the different environmental conditions in which photos are captured, has opened up a new path in the image retrieval research area. Only a few algorithms have been developed for low-quality image retrieval, and they have been applied only to specific domains. A low-quality image retrieval algorithm that works on a generic database with a considerable accuracy level across different industries is a problem that remains unsolved. Through this study, an algorithm has been developed to address the gaps mentioned above. Using images with inappropriate brightness and compressed images as low-quality images, the proposed algorithm is tested on a generic database that includes many categories of data instead of a specific domain. The new algorithm gives better precision and recall values when the images are clustered into the most appropriate number of clusters, which changes according to the quality level of the image. As the quality of the image decreases, the accuracy of the algorithm also tends to decrease, leaving space for further improvement.

  17. Biased discriminant euclidean embedding for content-based image retrieval.

    Science.gov (United States)

    Bian, Wei; Tao, Dacheng

    2010-02-01

With many potential multimedia applications, content-based image retrieval (CBIR) has recently gained more attention for image management and web search. A wide variety of relevance feedback (RF) algorithms have been developed in recent years to improve the performance of CBIR systems. These RF algorithms capture the user's preferences and bridge the semantic gap. However, there is still considerable room to improve RF performance, because popular RF algorithms ignore the manifold structure of image low-level visual features. In this paper, we propose the biased discriminative Euclidean embedding (BDEE), which parameterises samples in the original high-dimensional ambient space to discover the intrinsic coordinates of image low-level visual features. BDEE precisely models both the intraclass geometry and interclass discrimination and avoids the undersampled problem. To consider unlabelled samples, a manifold regularization-based term is introduced and combined with BDEE to form the semi-supervised BDEE, or semi-BDEE for short. To justify the effectiveness of the proposed BDEE and semi-BDEE, we compare them against conventional RF algorithms and show a significant improvement in terms of accuracy and stability on a subset of the Corel image gallery.

  18. Engineering a Multi-Purpose Test Collection for Web Retrieval Experiments.

    Science.gov (United States)

    Bailey, Peter; Craswell, Nick; Hawking, David

    2003-01-01

    Describes a test collection that was developed as a multi-purpose testbed for experiments on the Web in distributed information retrieval, hyperlink algorithms, and conventional ad hoc retrieval. Discusses inter-server connectivity, integrity of server holdings, inclusion of documents related to a wide spread of likely queries, and distribution of…

  19. Image retrieval based on multi-instance saliency model

    Science.gov (United States)

    Wan, Shouhong; Jin, Peiquan; Yue, Lihua; Yan, Li

    2017-07-01

Existing methods for visual-saliency-based image retrieval typically aim at single-instance images. However, without any prior knowledge, the content of a single-instance image is ambiguous, and these methods cannot effectively reflect the object of interest. In this paper, we propose a novel image retrieval framework based on a multi-instance saliency model. First, the feature saliency is computed based on global contrast, local contrast, and sparsity, and the synthesized saliency map is obtained by using a Multi-instance Learning (MIL) algorithm to dynamically weight the feature saliency. Then we employ a fuzzy region-growth algorithm on the synthesized saliency map to extract the salient object. We finally extract color and texture features as the retrieval features and measure feature similarity by Euclidean distance. In the experiments, the proposed method achieves higher multi-instance image retrieval accuracy than other saliency-model-based single-instance image retrieval methods.
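The final matching step, ranking by Euclidean distance between feature vectors, reduces to a few lines; the toy one-hot "features" below are purely illustrative stand-ins for the concatenated color and texture features extracted from the salient region.

```python
import numpy as np

def rank_by_euclidean(query_feat, db_feats):
    """Rank database images by Euclidean distance between feature
    vectors; smaller distance means higher similarity."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(d)  # indices of database images, most similar first

feats = np.eye(4)  # toy 4-image database with one-hot feature vectors
order = rank_by_euclidean(np.array([1.0, 0.0, 0.0, 0.1]), feats)
```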

  20. Comparison of texture models for efficient ultrasound image retrieval

    Science.gov (United States)

    Bansal, Maggi; Sharma, Vipul; Singh, Sukhwinder

    2013-02-01

Due to the availability of inexpensive image capturing devices, the size of digital image collections is increasing rapidly. Thus, there is a need to create efficient access methods or retrieval tools to search, browse, and retrieve images from large multimedia repositories. More specifically, researchers have been engaged in different ways of retrieving images based on their actual content. In particular, Content Based Image Retrieval (CBIR) systems have attracted considerable research and commercial interest in recent years. In CBIR, the visual features characterizing the image content are color, shape, and texture. Currently, texture is used to quantify the content of medical images, as it is the most prominent feature, containing information about the spatial distribution of gray levels and variations in brightness. Various texture models exist in the literature, such as Haralick's Spatial Gray Level Co-occurrence Matrix (SGLCM), Gray Level Difference Statistics (GLDS), First-order Statistics (FoS), Statistical Feature Matrix (SFM), Laws' Texture Energy Measures (TEM), fractal features, and Fourier Power Spectrum (FPS) features. Each of these models visualizes texture in a different way, and retrieval performance depends upon the choice of texture algorithm. Unfortunately, no texture model is known to work best for encoding texture properties of liver ultrasound images or retrieving the most similar images. An experimental comparison of different texture models for Content Based Medical Image Retrieval (CBMIR) is presented in this paper. For the experiments, a liver ultrasound image database is used and the retrieval performance of the various texture models is analyzed in detail. The paper concludes with recommendations on which texture model performs best for liver ultrasound images. Interestingly, FPS and SGLCM-based Haralick's features perform well for liver ultrasound retrieval and thus can be recommended as a simple baseline for such images.
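As a sketch of the co-occurrence idea behind Haralick's SGLCM features, the following computes a single-offset gray-level co-occurrence matrix and two classic statistics derived from it (contrast and energy); the quantization level and pixel offset are illustrative choices, not the paper's settings.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Spatial gray-level co-occurrence matrix for one pixel offset,
    normalized to a joint probability distribution."""
    # Quantize intensities into `levels` gray bins.
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    M = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[q[y, x], q[y + dy, x + dx]] += 1  # count co-occurring pairs
    M /= M.sum()
    # Two of Haralick's classic statistics.
    i, j = np.indices(M.shape)
    contrast = ((i - j) ** 2 * M).sum()
    energy = (M ** 2).sum()
    return M, contrast, energy

# A constant image has zero contrast and maximal energy.
M, contrast, energy = glcm(np.ones((4, 4)))
```

In practice several offsets and angles are accumulated, and more of Haralick's statistics (correlation, homogeneity, entropy) are stacked into the texture feature vector.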

  1. System refinement for content based satellite image retrieval

    Directory of Open Access Journals (Sweden)

    NourElDin Laban

    2012-06-01

Full Text Available We are witnessing a large increase in satellite-generated data, especially in the form of images. Hence intelligent processing of the huge amount of data received by dozens of earth observing satellites, with approaches oriented specifically to satellite images, presents itself as a pressing need. Content based satellite image retrieval (CBSIR) approaches have so far mainly been driven by approaches dealing with traditional images. In this paper we introduce a novel approach that refines the image retrieval process using properties unique to satellite images. Our approach uses a Query by Polygon (QBP) paradigm for the content of interest instead of the more conventional rectangular query-by-image approach. First, we extract features from the satellite images using multiple tiling sizes. The system then uses these multilevel features within a multilevel retrieval system that refines the retrieval process. Our multilevel refinement approach has been experimentally validated against the conventional one, yielding enhanced precision and recall rates.
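The multiple-tiling-size feature extraction can be sketched as below, with per-tile mean intensity as a purely illustrative stand-in for the paper's actual features; each tiling level yields one feature vector, and coarser levels support coarser retrieval passes.

```python
import numpy as np

def multilevel_tile_features(img, levels=(1, 2, 4)):
    """Compute one feature per tile at several tiling sizes.
    The mean-intensity statistic and the level set are illustrative."""
    feats = {}
    h, w = img.shape
    for n in levels:
        tiles = []
        for i in range(n):
            for j in range(n):
                tile = img[i * h // n:(i + 1) * h // n,
                           j * w // n:(j + 1) * w // n]
                tiles.append(tile.mean())
        feats[n] = np.array(tiles)  # n*n features at this level
    return feats

f = multilevel_tile_features(np.ones((8, 8)))
```

A multilevel retrieval pass would match the level-1 vector first, then refine the candidate set using the finer 2x2 and 4x4 vectors.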

  2. A framework for efficient spatial web object retrieval

    DEFF Research Database (Denmark)

Wu, Dingming; Cong, Gao; Jensen, Christian S.

    2012-01-01

into account both location proximity and text relevancy. This paper proposes a new indexing framework for top-k spatial text retrieval. The framework leverages the inverted file for text retrieval and the R-tree for spatial proximity querying. Several indexing approaches are explored within this framework. ... The framework encompasses algorithms that utilize the proposed indexes for computing location-aware as well as region-aware top-k text retrieval queries, thus taking into account both text relevancy and spatial proximity to prune the search space. Results of empirical studies with an implementation
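The location-aware top-k idea, combining text relevancy with spatial proximity in one score, can be sketched naively as below; the scoring function and weights are illustrative assumptions, and the paper's contribution is precisely the combined inverted-file/R-tree indexes that prune this exhaustive scan.

```python
import heapq
import math

def topk(query_loc, query_terms, objects, k=2, alpha=0.5):
    """Naive top-k spatial-text ranking:
    score = alpha * text overlap + (1 - alpha) * spatial proximity.
    Every object is scored; real indexes would prune most of them."""
    def score(obj):
        loc, terms = obj
        text = len(query_terms & terms) / len(query_terms)   # term overlap in [0, 1]
        dist = math.hypot(loc[0] - query_loc[0], loc[1] - query_loc[1])
        prox = 1.0 / (1.0 + dist)                            # proximity in (0, 1]
        return alpha * text + (1 - alpha) * prox
    return heapq.nlargest(k, range(len(objects)), key=lambda i: score(objects[i]))

objs = [((0, 0), {"cafe", "wifi"}),   # right term, right place
        ((5, 5), {"cafe"}),           # right term, far away
        ((0, 1), {"museum"})]         # wrong term, nearby
best = topk((0, 0), {"cafe"}, objs, k=2)
```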

  3. Millennial Undergraduate Research Strategies in Web and Library Information Retrieval Systems

    Science.gov (United States)

    Porter, Brandi

    2011-01-01

    This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…

  4. Image Retrieval and Re-Ranking Techniques - A Survey

    OpenAIRE

    Mayuri D. Joshi; Revati M. Deshmukh; Kalashree N.Hemke; Ashwini Bhake; Rakhi Wajgi

    2014-01-01

There is a huge amount of research work focusing on the searching, retrieval, and re-ranking of images in image databases. The diverse and scattered work in this domain needs to be collected and organized for easy and quick reference. In this context, this paper gives a brief overview of various image retrieval and re-ranking techniques. Starting with an introduction to existing systems, the paper proceeds through the core architecture of image harvesti...

  5. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular for searching for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database, highlighting the visual information it has in common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.

  6. A Picture is Worth a Thousand Keywords: Exploring Mobile Image-Based Web Searching

    OpenAIRE

    Konrad Tollmar; Ted Möller; Björn Nilsved

    2008-01-01

    Using images of objects as queries is a new approach to search for information on the Web. Image-based information retrieval goes beyond only matching images, as information in other modalities also can be extracted from data collections using an image search. We have developed a new system that uses images to search for web-based information. This paper has a particular focus on exploring users' experience of general mobile image-based web searches to find what issues and phenomena it contai...

  7. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    Science.gov (United States)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily-understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts; a suite of applications programs and an executive which serves as the interfaces between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user friendly environment. 
The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image

  8. Evaluation of Multi Layers Web-based GIS Approach in Retrieving Tourist Related Information

    OpenAIRE

    Rosilawati Zainol; Zainab Abu Bakar

    2014-01-01

    Geo-based information is gaining importance among tourists. However, retrieving this information on the web depends heavily on the methods of dissemination. Therefore, this study evaluates the methods used in disseminating tourist-related geo-based information on the web using partial match queries, firstly in the default system, which is a single-layer approach, and secondly using multi-layer web-based Geographic Information System (GIS) approaches. Shah Alam tourist related data are...

  9. Content-based image retrieval in homomorphic encryption domain.

    Science.gov (United States)

    Bellafqira, Reda; Coatrieux, Gouenou; Bouslimi, Dalel; Quellec, Gwenole

    2015-08-01

    In this paper, we propose a secure implementation of a content-based image retrieval (CBIR) method that makes it possible for diagnosis aid systems to work in an externalized environment and with outsourced data, as in cloud computing. It works with homomorphically encrypted images, from which it extracts wavelet-based image features that are then used for image comparison. By doing so, our system allows a physician to retrieve the images most similar to a query image in an outsourced database while preserving data confidentiality. Our secure CBIR is the first that proposes to work with global image features extracted from encrypted images and does not induce extra communications between the client and the server. Experimental results show it achieves retrieval performance as good as if images were processed unencrypted.
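The abstract does not give implementation details; as a hedged illustration of the underlying idea, the sketch below uses a textbook Paillier cryptosystem (additively homomorphic) with toy-sized primes to show how a server can combine encrypted values without ever decrypting them. The primes and feature values are made up for the demo; a real system would use 2048-bit moduli and the paper's wavelet features.

```python
# Toy Paillier cryptosystem: demonstrates the additive homomorphism
# E(a) * E(b) mod n^2 decrypts to a + b, the property that lets a
# server accumulate encrypted feature statistics it cannot read.
# Toy-sized primes for illustration only -- NOT secure.
import math
import random

p, q = 61, 53                       # demo primes (insecure size)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)        # Carmichael lambda for n = p*q
g = n + 1                           # standard simple generator choice
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # precomputed decryption factor

def encrypt(m: int) -> int:
    """Encrypt m in [0, n) with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Two quantized feature values (made up for the demo).
a, b = 137, 2045
c_sum = (encrypt(a) * encrypt(b)) % n2   # homomorphic addition on ciphertexts
print(decrypt(c_sum))                    # -> 2182, i.e. a + b
```

Multiplying ciphertexts adds plaintexts, which is what allows distance accumulation to run server-side on encrypted data.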

  10. Sparse color interest points for image retrieval and object categorization

    NARCIS (Netherlands)

    Stöttinger, J.; Hanbury, A.; Sebe, N.; Gevers, T.

    2012-01-01

    Interest point detection is an important research area in the field of image processing and computer vision. In particular, image retrieval and object categorization heavily rely on interest point detection from which local image descriptors are computed for image matching. In general, interest

  11. Improved image retrieval based on fuzzy colour feature vector

    Science.gov (United States)

    Ben-Ahmeida, Ahlam M.; Ben Sasi, Ahmed Y.

    2013-03-01

    One image indexing technique is content-based image retrieval (CBIR), an efficient way of retrieving images from an image database automatically, based on visual contents such as colour, texture, and shape. This paper discusses a CBIR method using colour feature extraction and similarity checking: the query image and all images in the database are divided into pieces, the features of each part are extracted separately, and the corresponding portions are compared in order to increase retrieval accuracy. The proposed approach is based on the use of fuzzy sets to overcome the curse of dimensionality. The contribution of the colour of each pixel is associated with all the bins in the histogram using fuzzy-set membership functions. As a result, the Fuzzy Colour Histogram (FCH) outperformed the Conventional Colour Histogram (CCH) in image retrieval, giving faster results because images were represented as signatures that took less memory, depending on the number of divisions. The results also showed that FCH is less sensitive and more robust to brightness changes than the CCH, with better retrieval recall values.
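The abstract does not specify the membership functions; a minimal sketch, assuming evenly spaced bin centers with triangular memberships (a common choice, not necessarily the paper's), shows how each pixel contributes fractionally to neighbouring bins rather than to one crisp bin:

```python
# Fuzzy histogram sketch: each value spreads its unit mass over the
# nearest bin centers via triangular membership functions, so a small
# brightness shift changes the histogram smoothly -- the robustness
# advantage claimed over a crisp histogram.

def fuzzy_histogram(values, n_bins=8, lo=0.0, hi=255.0):
    centers = [lo + (hi - lo) * k / (n_bins - 1) for k in range(n_bins)]
    width = centers[1] - centers[0]
    hist = [0.0] * n_bins
    for v in values:
        for k, c in enumerate(centers):
            m = max(0.0, 1.0 - abs(v - c) / width)  # triangular membership
            hist[k] += m
    return hist

pixels = [0, 30, 36, 37, 120, 200, 255]   # made-up gray values
h = fuzzy_histogram(pixels)
print([round(x, 3) for x in h])
```

With width equal to the bin spacing, the memberships of each in-range pixel sum to 1, so the fuzzy histogram still totals the pixel count while distributing it softly.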

  12. [Content-based automatic retinal image recognition and retrieval system].

    Science.gov (United States)

    Zhang, Jiumei; Du, Jianjun; Cheng, Xia; Cao, Hongliang

    2013-04-01

    This paper is aimed to fulfill a prototype system used to classify and retrieve retinal image automatically. With the content-based image retrieval (CBIR) technology, a method to represent the retinal characteristics mixing the fundus image color (gray) histogram with bright, dark region features and other local comprehensive information was proposed. The method uses kernel principal component analysis (KPCA) to further extract nonlinear features and dimensionality reduced. It also puts forward a measurement method using support vector machine (SVM) on KPCA weighted distance in similarity measure aspect. Testing 300 samples with this prototype system randomly, we obtained the total image number of wrong retrieved 32, and the retrieval rate 89.33%. It showed that the identification rate of the system for retinal image was high.

  13. Density-based retrieval from high-similarity image databases

    DEFF Research Database (Denmark)

    Hansen, Michael Edberg; Carstensen, Jens Michael

    2004-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce...... a method for HSID retrieval using a similarity measure based on a linear combination of Jeffreys-Matusita distances between distributions of local (pixelwise) features estimated from a set of automatically and consistently defined image regions. The weight coefficients are estimated based on optimal...... retrieval performance. Experimental results on the difficult task of visually identifying clones of fungal colonies grown in a petri dish and categorization of pelts show a high retrieval accuracy of the method when combined with standardized sample preparation and image acquisition....
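The abstract builds its similarity measure from Jeffreys-Matusita (JM) distances between feature distributions. As a hedged sketch using the common remote-sensing definition for univariate Gaussian feature distributions (the paper's actual features and learned weights are not reproduced here):

```python
# Jeffreys-Matusita (JM) distance between two univariate Gaussian
# feature distributions, via the common definition JM = 2(1 - e^-B),
# where B is the Bhattacharyya distance.  JM saturates at 2 for
# well-separated classes, which makes linear combinations of JM terms
# (as in the abstract) better behaved than raw Bhattacharyya values.
import math

def jeffreys_matusita(mu1, var1, mu2, var2):
    v = (var1 + var2) / 2.0
    b = (mu1 - mu2) ** 2 / (8.0 * v) + 0.5 * math.log(v / math.sqrt(var1 * var2))
    return 2.0 * (1.0 - math.exp(-b))

print(jeffreys_matusita(0.0, 1.0, 0.0, 1.0))   # identical -> 0.0
print(jeffreys_matusita(0.0, 1.0, 10.0, 1.0))  # well separated -> close to 2
```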

  14. Pareto-depth for multiple-query image retrieval.

    Science.gov (United States)

    Hsiao, Ko-Jen; Calder, Jeff; Hero, Alfred O

    2015-02-01

    Most content-based image retrieval systems consider either one single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state-of-the-art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts.
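The core construct here is the Pareto front of per-query dissimilarities. A minimal sketch of the first front (the paper also walks deeper fronts and computes the dissimilarities with manifold ranking; the values below are made up):

```python
# First Pareto front for two-query retrieval: a database item is kept
# only if no other item is at least as close to BOTH queries and
# strictly closer to one.  Items on the front trade off relevance to
# query 1 against relevance to query 2.

def pareto_front(points):
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(qk <= pk for qk, pk in zip(q, p)) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# (dissimilarity to query 1, dissimilarity to query 2) for five images
dists = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1), (0.6, 0.6), (0.8, 0.8)]
print(pareto_front(dists))   # -> [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
```

Note the middle point (0.5, 0.5) survives: it is a compromise between the two semantics, which is exactly the kind of result single-query fusion tends to miss.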

  15. Probabilistic Information Integration and Retrieval in the Semantic Web

    Science.gov (United States)

    Predoiu, Livia

    The Semantic Web (SW) has been envisioned to enable software tools and Web Services to process information provided on the Web automatically. For this purpose, languages for representing the semantics of data by means of ontologies have been proposed, such as RDF(S) and OWL. While the semantics of RDF(S) requires a non-standard model theory that goes beyond first-order logic, OWL is intended to model subsets of first-order logic. OWL consists of three variants that are layered on each other. The less expressive variants OWL-Light and OWL-DL correspond to the Description Logics SHIF(D) and SHOIN(D) [1], respectively, and thus to subsets of first-order logic [2].

  16. Retrieving top-k prestige-based relevant spatial web objects

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.

    2010-01-01

    The location-aware keyword query returns ranked objects that are near a query location and that have textual descriptions that match query keywords. This query occurs inherently in many types of mobile and traditional web services and applications, e.g., Yellow Pages and Maps services. Previous...... of prestige-based relevance to capture both the textual relevance of an object to a query and the effects of nearby objects. Based on this, a new type of query, the Location-aware top-k Prestige-based Text retrieval (LkPT) query, is proposed that retrieves the top-k spatial web objects ranked according...... to both prestige-based relevance and location proximity. We propose two algorithms that compute LkPT queries. Empirical studies with real-world spatial data demonstrate that LkPT queries are more effective in retrieving web objects than a previous approach that does not consider the effects of nearby...
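The precise prestige-based relevance is defined in the paper; as a loose sketch under made-up data, a score of this flavour can combine an object's own textual relevance, relevance spilled over from nearby objects, and proximity to the query location:

```python
# Hedged sketch of prestige-flavoured spatial-keyword ranking:
# prestige(o) = own text relevance + nearby objects' relevance decayed
# by distance; the final score trades prestige against distance to the
# query point.  The real LkPT definition in the paper differs in detail.
import math

objects = [  # (name, x, y, text_relevance) -- made-up spatial web objects
    ("cafe_a", 0.0, 0.0, 0.9),
    ("cafe_b", 0.1, 0.0, 0.2),   # low relevance, but right next to cafe_a
    ("cafe_c", 5.0, 5.0, 0.8),   # relevant but isolated and far away
]

def dist(x1, y1, x2, y2):
    return math.hypot(x1 - x2, y1 - y2)

def prestige(obj, radius=0.5):
    name, x, y, rel = obj
    spill = sum(r * (1 - dist(x, y, ox, oy) / radius)
                for n, ox, oy, r in objects
                if n != name and dist(x, y, ox, oy) < radius)
    return rel + spill

def lkpt(query_xy, k=2, alpha=0.5):
    qx, qy = query_xy
    scored = [(alpha * prestige(o) - (1 - alpha) * dist(qx, qy, o[1], o[2]), o[0])
              for o in objects]
    return [name for _, name in sorted(scored, reverse=True)[:k]]

print(lkpt((0.0, 0.0)))   # -> ['cafe_a', 'cafe_b']
```

cafe_b outranks the more relevant cafe_c because it sits in a relevant cluster near the query, which is the effect of nearby objects the abstract describes.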

  17. 4D reconstruction of the past: the image retrieval and 3D model construction pipeline

    Science.gov (United States)

    Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro

    2014-08-01

    One of the main characteristics of the Internet era we are living in, is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories as well as strategies on how to filter the results, on two levels: a) based on their built-in metadata including geo-location information and b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet Repositories.
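The first filtering level works on built-in geolocation metadata. A minimal sketch of such a filter, assuming photos carry EXIF-style latitude/longitude and using the haversine great-circle distance (all coordinates below are made up):

```python
# Sketch of the metadata-level filter: keep only photos whose geo
# coordinates fall within a radius of the monument, using the
# haversine great-circle distance on a spherical Earth.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

monument = (34.9190, 33.6230)   # made-up site coordinates
photos = {"near.jpg": (34.9200, 33.6250), "far.jpg": (35.9000, 34.6000)}

kept = [name for name, (lat, lon) in photos.items()
        if haversine_km(*monument, lat, lon) < 5.0]
print(kept)   # -> ['near.jpg']
```

Photos that survive this coarse geo filter would then go through the second, image-processing/clustering level before feeding the Structure from Motion stage.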

  18. A Robust Color Object Analysis Approach to Efficient Image Retrieval

    Directory of Open Access Journals (Sweden)

    Ruofei Zhang

    2004-06-01

    Full Text Available We describe a novel indexing and retrieval methodology integrating color, texture, and shape information for content-based image retrieval in image databases. This methodology, which we call CLEAR, applies unsupervised image segmentation to partition an image into a set of objects. Fuzzy color histogram, fuzzy texture, and fuzzy shape properties of each object are then calculated to form its signature. The fuzzification procedures effectively resolve the recognition uncertainty stemming from color quantization and human perception of colors. At the same time, the fuzzy scheme incorporates segmentation-related uncertainties into the retrieval algorithm. An adaptive and effective measure of the overall similarity between images is developed by integrating the properties of all the objects in every image. To further improve retrieval efficiency, a secondary clustering technique is developed and employed, which significantly reduces query processing time without compromising retrieval precision. A prototype system of CLEAR that we developed demonstrated promising retrieval performance and robustness to color variations and segmentation-related uncertainties on a test database containing 10 000 general-purpose color images, as compared with its peer systems in the literature.

  19. A novel architecture for information retrieval system based on semantic web

    Science.gov (United States)

    Zhang, Hui

    2011-12-01

    Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats suitable for presentation, but machines cannot understand their meaning. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, providing new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when such a retrieval system does not have enough knowledge, it returns a large number of meaningless results to users because of the huge amount of information. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.

  20. Comparing the Scale of Web Subject Directories Precision in Technical-Engineering Information Retrieval

    Directory of Open Access Journals (Sweden)

    Mehrdokht Wazirpour Keshmiri

    2012-07-01

    Full Text Available The main purpose of this research was to compare the precision of web subject directories in information retrieval for technical-engineering science. Information gathering was documentary and webometric. Keywords in technical-engineering science were chosen from twenty different subjects in IEEE (Institute of Electrical and Electronics Engineers) and engineering magazines hosted on the ScienceDirect site. These keywords were used with five high-utilization web subject directories: Yahoo, Google, Infomine, Intute, and Dmoz. Because the first results returned by search tools are usually the most closely connected to the search keywords, the first ten results were evaluated in every search. The assessments consisted of the precision, the error rate, and the ratio of items retrieved in technical-engineering categories to all retrieved items. The criteria used for determining precision, following widely used standards, consisted of the presence of the keywords in the title, their appearance in parts of the retrieved web pages, keyword adjacency, the URL of the page, the page description, and subject categories. Information analysis used the Kruskal-Wallis test and Fisher's L.S.D. Results revealed a meaningful difference in the precision of the web subject directories in information retrieval for technical-engineering science, confirming the hypothesis. Ranked by precision, the web subject directories were: Google, Yahoo, Intute, Dmoz, and Infomine. The error rate observed in the first results was another criterion used for comparison: Yahoo had the lowest error rate and Infomine the highest. This research also compared, across the directories, the ratio of items retrieved in technical-engineering categories to all retrieved items, and the results revealed a meaningful difference between them.

  1. Supervised graph hashing for histopathology image retrieval and classification.

    Science.gov (United States)

    Shi, Xiaoshuang; Xing, Fuyong; Xu, KaiDi; Xie, Yuanpu; Su, Hai; Yang, Lin

    2017-12-01

    In pathology image analysis, morphological characteristics of cells are critical to grade many diseases. With the development of cell detection and segmentation techniques, it is possible to extract cell-level information for further analysis in pathology images. However, it is challenging to conduct efficient analysis of cell-level information on a large-scale image dataset because each image usually contains hundreds or thousands of cells. In this paper, we propose a novel image retrieval based framework for large-scale pathology image analysis. For each image, we encode each cell into binary codes to generate image representation using a novel graph based hashing model and then conduct image retrieval by applying a group-to-group matching method to similarity measurement. In order to improve both computational efficiency and memory requirement, we further introduce matrix factorization into the hashing model for scalable image retrieval. The proposed framework is extensively validated with thousands of lung cancer images, and it achieves 97.98% classification accuracy and 97.50% retrieval precision with all cells of each query image used. Copyright © 2017 Elsevier B.V. All rights reserved.
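The hashing model itself is the paper's contribution; as a hedged sketch of the retrieval side, the fragment below assumes each cell has already been encoded to a short binary code (the codes here are made up) and compares images group-to-group by averaging each query cell's minimum Hamming distance to the other image's cells:

```python
# Per-cell binary codes + group-to-group matching sketch.  Each cell
# is an 8-bit hash stored as an int; image distance is the mean, over
# query cells, of the minimum Hamming distance to the other image's
# cells.  The paper learns the codes with a supervised graph model;
# the codes below are invented for illustration.

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def group_distance(cells_a, cells_b):
    return sum(min(hamming(a, b) for b in cells_b) for a in cells_a) / len(cells_a)

query   = [0b10110010, 0b01001101]   # cell codes of the query image
image_1 = [0b10110010, 0b01001111]   # nearly identical cell population
image_2 = [0b01001011, 0b11100001]   # unrelated cell population

d1 = group_distance(query, image_1)
d2 = group_distance(query, image_2)
print(d1, d2)   # image_1 is the better match: d1 < d2
```

Because Hamming distances on packed integers are just XOR-and-popcount, this comparison stays cheap even with thousands of cells per image, which is the scalability point the abstract makes.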

  2. A Learning State-Space Model for Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lee Greg C

    2007-01-01

    Full Text Available This paper proposes an approach based on a state-space model for learning the user concepts in image retrieval. We first design a scheme of region-based image representation based on concept units, which are integrated with different types of feature spaces and with different region scales of image segmentation. The design of the concept units aims at describing similar characteristics at a certain perspective among relevant images. We present the details of our proposed approach based on a state-space model for interactive image retrieval, including likelihood and transition models, and we also describe some experiments that show the efficacy of our proposed model. This work demonstrates the feasibility of using a state-space model to estimate the user intuition in image retrieval.

  3. Teleconsultations using content-based retrieval of parametric images.

    Science.gov (United States)

    Ruminski, J

    2004-01-01

    The problem of medical teleconsultations with an intelligent computer system rather than with a human expert is analyzed. A system for content-based retrieval of images is described and presented as a use case of a passive teleconsultation. Selected features crucial for retrieval quality are introduced, including: synthesis of parametric images, detection and extraction of regions of interest, definition of content-based features, generation of descriptors, query algebra, system architecture, and performance. Additionally, an electronic business pattern is proposed to generalize teleconsultation services such as content-based retrieval systems.

  4. The Nuclear Science References (NSR) Database and Web Retrieval System

    CERN Document Server

    Pritychenko, B; Kellett, M A; Singh, B; Totans, J

    2011-01-01

    The Nuclear Science References (NSR) database, and associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center http://www.nndc.bnl.gov/nsr and the International Atomic Energy Agency http://www-nds.iaea.org/nsr.

  5. The Nuclear Science References (NSR) database and Web Retrieval System

    Science.gov (United States)

    Pritychenko, B.; Běták, E.; Kellett, M. A.; Singh, B.; Totans, J.

    2011-06-01

    The Nuclear Science References (NSR) database together with its associated Web interface is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center http://www.nndc.bnl.gov/nsr and the International Atomic Energy Agency http://www-nds.iaea.org/nsr.

  6. An Intelligent Web Digital Image Metadata Service Platform for Social Curation Commerce Environment

    Directory of Open Access Journals (Sweden)

    Seong-Yong Hong

    2015-01-01

    Full Text Available Information management includes multimedia data management, knowledge management, collaboration, and agents, all of which are supporting technologies for XML. XML technologies have an impact on multimedia databases as well as collaborative technologies and knowledge management. That is, e-commerce documents are encoded in XML and are gaining much popularity for business-to-business or business-to-consumer transactions. Recently, the internet sites, such as e-commerce sites and shopping mall sites, deal with a lot of image and multimedia information. This paper proposes an intelligent web digital image information retrieval platform, which adopts XML technology for social curation commerce environment. To support object-based content retrieval on product catalog images containing multiple objects, we describe multilevel metadata structures representing the local features, global features, and semantics of image data. To enable semantic-based and content-based retrieval on such image data, we design an XML-Schema for the proposed metadata. We also describe how to automatically transform the retrieval results into the forms suitable for the various user environments, such as web browser or mobile device, using XSLT. The proposed scheme can be utilized to enable efficient e-catalog metadata sharing between systems, and it will contribute to the improvement of the retrieval correctness and the user’s satisfaction on semantic-based web digital image information retrieval.

  7. A novel multi-manifold classification model via path-based clustering for image retrieval

    Science.gov (United States)

    Zhu, Rong; Yuan, Zhijun; Xuan, Junying

    2011-12-01

    Nowadays, with digital cameras and mass storage devices becoming increasingly affordable, thousands of pictures are taken each day and images emerge on the Internet at an astonishing rate. Image retrieval is the process of searching for the valuable information a user demands within huge image collections. However, it is hard to find satisfying results due to the well-known "semantic gap". Image classification plays an essential role in the retrieval process, but traditional methods encounter problems when dealing with high-dimensional and large-scale image sets in applications. Here, we propose a novel multi-manifold classification model for image retrieval. Firstly, we simplify the classification of images from high-dimensional space to low-dimensional manifolds, largely reducing the complexity of the classification process. Secondly, considering that traditional distance measures often fail to find the correct visual semantics of manifolds, especially when dealing with images having complex data distributions, we also define two new distance measures based on path-based clustering, further applied to the construction of a multi-class image manifold. One experiment was conducted on 2890 Web images. The comparison results between three methods show that the proposed method achieves the highest classification accuracy.
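One common formalisation of a path-based distance, sketched below on made-up 2-D points, is the "minimax" distance: the smallest, over all paths between two points, of the largest single hop on the path. Whether this matches the paper's exact measure is an assumption, but it illustrates why such distances respect manifold structure where plain Euclidean distance does not:

```python
# Path-based ("minimax") distance sketch: d(i, j) minimises, over all
# paths from i to j, the largest Euclidean hop on the path.  Points
# strung along a manifold stay mutually close even when their direct
# Euclidean distance is large.  Computed with a Floyd-Warshall-style
# update where "+" is replaced by "max".
import math

points = [(0, 0), (1, 0), (2, 0), (3, 0), (0, 5)]  # 4-point chain + outlier

n = len(points)
d = [[math.dist(p, q) for q in points] for p in points]
for k in range(n):
    for i in range(n):
        for j in range(n):
            d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))

print(round(d[0][3], 3))   # chain endpoints: 1.0 (largest hop), not 3.0
print(round(d[0][4], 3))   # outlier stays far: 5.0
```

The chain's endpoints become as close as their largest hop, while the off-manifold point keeps its full distance, which is the clustering behaviour the abstract appeals to.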

  8. PAMS photo image retrieval prototype system requirements specification

    Energy Technology Data Exchange (ETDEWEB)

    Conner, M.L.

    1996-04-30

    This project is part of the Photo Audiovisual Management System (PAMS). The project was initially identified in 1989 and has since been worked on under various names such as Image Retrieval and Viewing System, Photo Image Retrieval Subsystem, and Image Processing and Compression System. This document builds upon the information collected and the analysis performed in the earlier phases of this project. The PAMS Photo Imaging subsystem will provide the means of capturing low-resolution digital images from Photography's negative files and associating the digital images with a record in the PAMS photo database. The digital images and key photo identification information will be accessible to HAN users to assist in locating and identifying specific photographs. After identifying desired photographs, users may request photo prints or high-resolution digital images directly from Photography. The digital images captured by this project are for identification purposes only and are not intended to be of sufficient quality for subsequent use.

  9. Sigma: Web Retrieval Interface for Nuclear Reaction Data

    Energy Technology Data Exchange (ETDEWEB)

    Pritychenko,B.; Sonzogni, A.A.

    2008-06-24

    The authors present Sigma, a Web-rich application which provides user-friendly access in processing and plotting of the evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The main interface includes browsing using a periodic table and a directory tree, basic and advanced search capabilities, interactive plots of cross sections, angular distributions and spectra, comparisons between evaluated and experimental data, computations between different cross section sets. Interactive energy-angle, neutron cross section uncertainties plots and visualization of covariance matrices are under development. Sigma is publicly available at the National Nuclear Data Center website at www.nndc.bnl.gov/sigma.

  10. Folksonomies indexing and retrieval in web 2.0

    CERN Document Server

    Peters, Isabella

    2009-01-01

    In Web 2.0 users not only make heavy use of Collaborative Information Services in order to create, publish and share digital information resources - what is more, they index and represent these resources via their own keywords, so-called tags. The sum of this user-generated metadata of a Collaborative Information Service is also called a Folksonomy. In contrast to professionally created and highly structured metadata, e.g. subject headings, thesauri, classification systems or ontologies, which are applied in libraries, corporate information architectures or commercial databases and which were deve

  11. Web Information Seeking and Retrieval in Digital Library Contexts: Towards an Intelligent Agent Solution.

    Science.gov (United States)

    Detlor, Brian; Arsenault, Clement

    2002-01-01

    Discusses the role of intelligent agents in facilitating the seeking and retrieval of information in Web-based library environments. Highlights include an overview of agents; current applications in library domains; an agent-based model for libraries; the design of interface agents; and implications for library policy and digital collections.…

  12. Information Retrieval Strategies of Millennial Undergraduate Students in Web and Library Database Searches

    Science.gov (United States)

    Porter, Brandi

    2009-01-01

    Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web based and library-based online information retrieval systems. The content, ease of use, and required search…

  13. Histology image retrieval in optimised multi-feature spaces.

    Science.gov (United States)

    Zhang, Qianni; Izquierdo, Ebroul

    2013-01-01

    Content based histology image retrieval systems have shown great potential in supporting decision making in clinical activities, teaching, and biological research. In content based image retrieval, feature combination plays a key role. It aims at enhancing the descriptive power of visual features corresponding to semantically meaningful queries. It is particularly valuable in histology image analysis where intelligent mechanisms are needed for interpreting varying tissue composition and architecture into histological concepts. This paper presents an approach to automatically combine heterogeneous visual features for histology image retrieval. The aim is to obtain the most representative fusion model for a particular keyword that is associated to multiple query images. The core of this approach is a multi-objective learning method, which aims to understand an optimal visual-semantic matching function by jointly considering the different preferences of the group of query images. The task is posed as an optimisation problem, and a multi-objective optimisation strategy is employed in order to handle potential contradictions in the query images associated to the same keyword. Experiments were performed on two different collections of histology images. The results show that it is possible to improve a system for content based histology image retrieval by using an appropriately defined multi-feature fusion model, which takes careful consideration of the structure and distribution of visual features.

  14. Toward a taxonomy of textures for image retrieval

    Science.gov (United States)

    Payne, Janet S.; Stonham, T. John

    2006-02-01

    Image retrieval remains a difficult task, in spite of the many research efforts applied over the past decade or more, from IBM's QBIC onwards. Colour, texture and shape have all been used for content-based image retrieval (CBIR); texture is particularly effective, alone or with colour. Many researchers have expressed the hope that textures can be organised and classified in the way that colour can; however, it seems likely that such an ambition is unrealisable. While the goal of content-based retrieval is to retrieve "more images like this one," there is the difficulty of judging what is meant by similarity for images. It seems appropriate to search on what the images actually look like to potential users of such systems. No single computational method for textural classification matches human perceptual similarity judgements. However, since different methods are effective for different kinds of textures, a way of identifying or grouping such classes should lead to more effective retrievals. In this research, working with the Brodatz texture images, participants were asked to select up to four other textures which they considered similar to each of the Brodatz textures in turn. A principal components analysis was performed upon the correlations between their rankings, which was then used to derive a 'mental map' of the composite similarity ranking for each texture. These similarity measures can be considered as a matrix of distances in similarity space; hierarchical cluster analysis produces a perceptually appropriate dendrogram with eight distinct clusters.
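The dendrogram in the study comes from hierarchical clustering of the perceptual similarity matrix. A minimal sketch of single-linkage agglomerative clustering over a distance matrix (the actual study used participants' similarity rankings and principal components; the distances below are made up):

```python
# Minimal single-linkage agglomerative clustering over a distance
# matrix: repeatedly merge the two clusters whose closest members are
# nearest, until the requested number of clusters remains.  This is
# the kind of procedure that produces a texture dendrogram.

def single_linkage(dist, n_clusters):
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)   # merge the closest pair
    return [sorted(c) for c in clusters]

# 5 made-up textures: 0,1,2 mutually similar; 3,4 similar; groups far apart
D = [[0, 1, 2, 9, 9],
     [1, 0, 1, 9, 9],
     [2, 1, 0, 9, 9],
     [9, 9, 9, 0, 1],
     [9, 9, 9, 1, 0]]
print(single_linkage(D, 2))   # -> [[0, 1, 2], [3, 4]]
```

Cutting the merge sequence at different heights yields the distinct perceptual clusters (eight, in the study's dendrogram).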

  15. Efficient Compressed Domain Target Image Search and Retrieval

    OpenAIRE

    Bracamonte, Javier; Ansorge, Michael; Pellandini, Fausto; Farine, Pierre-André

    2005-01-01

    In this paper we introduce a low complexity and accurate technique for target image search and retrieval. This method, which operates directly in the compressed JPEG domain, addresses two of the CBIR challenges stated by The Benchathlon Network regarding the search of a specific image: finding out if an exact same image exists in a database, and identifying this occurrence even when the database image has been compressed with a different coding bit-rate. The proposed technique can be applie...

  16. Automated diagnosis of retinopathy by content-based image retrieval.

    Science.gov (United States)

    Chaum, Edward; Karnowski, Thomas P; Govindasamy, V Priya; Abdelrahman, Mohamed; Tobin, Kenneth W

    2008-01-01

    To describe a novel computer-based image analysis method that is being developed to assist and automate the diagnosis of retinal disease. Content-based image retrieval is the process of retrieving related images from large database collections using their pictorial content. The content feature list becomes the index for storage, search, and retrieval of related images from a library based upon specific visual characteristics. Low-level analyses use feature description models and higher-level analyses use perceptual organization and spatial relationships, including clinical metadata, to extract semantic information. We defined, extracted, and tested a large number of region- and lesion-based features from a dataset of 395 retinal images. Using a statistical hold-one-out method, independent queries for each image were submitted to the system and a diagnostic prediction was formulated. The diagnostic sensitivity for all stratified levels of age-related macular degeneration ranged from 75% to 100%. Similarly, the sensitivity of detection and accuracy for proliferative diabetic retinopathy ranged from 75% to 91.7% and for nonproliferative diabetic retinopathy, ranged from 75% to 94.7%. The overall purity of the diagnosis (specificity) for all disease states in the dataset was 91.3%. The probabilistic nature of content-based image retrieval permits us to make statistically relevant predictions regarding the presence, severity, and manifestations of common retinal diseases from digital images in an automated and deterministic manner.
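    The hold-one-out evaluation described here can be sketched with a k-nearest-neighbour stand-in for the retrieval engine (synthetic two-class features; the actual system formulated its predictions from region- and lesion-based retinal features):

```python
import numpy as np

def loo_diagnosis(features, labels, k=3):
    """Hold-one-out: predict each image's diagnosis as the majority
    label among its k nearest neighbours, excluding the image itself."""
    preds = []
    for q in range(len(features)):
        d = np.linalg.norm(features - features[q], axis=1)
        d[q] = np.inf                        # the query never matches itself
        nn = np.argsort(d)[:k]
        vals, counts = np.unique(labels[nn], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 5)),   # "healthy" cluster
               rng.normal(2.0, 0.3, (20, 5))])  # "disease" cluster
y = np.array([0] * 20 + [1] * 20)
pred = loo_diagnosis(X, y)
print("sensitivity:", float(np.mean(pred[y == 1] == 1)))
```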

  17. Image Retrieval by Color Semantics with Incomplete Knowledge.

    Science.gov (United States)

    Corridoni, Jacopo M.; Del Bimbo, Alberto; Vicario, Enrico

    1998-01-01

    Presents a system which supports image retrieval by high-level chromatic contents, the sensations that color accordances generate on the observer. Surveys Itten's theory of color semantics and discusses image description and query specification. Presents examples of visual querying. (AEF)

  18. Enhancing Image Retrieval System Using Content Based Search ...

    African Journals Online (AJOL)

    The purpose of this work is to design and implement software that enhances the retrieval of images using the image content base as the criterion. As the size of multimedia databases and other repositories continues to grow, the difficulty of finding multimedia information increases; it becomes practically impossible to depend ...

  19. Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.

    Science.gov (United States)

    Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu

    2017-07-01

    In the field of pathology, whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSI pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for a breast histopathological image. Specifically, the method presents a local statistical feature of nuclei for morphology and distribution of nuclei, and employs the Gabor feature to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.
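    The locality-sensitive hashing used here to speed up search can be illustrated with the classic random-hyperplane scheme (a generic LSH sketch, not the paper's exact configuration):

```python
import numpy as np

def lsh_codes(X, n_bits, seed=0):
    """Random-hyperplane LSH: each bit records which side of a random
    hyperplane a vector falls on, so nearby vectors share most bits."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

X = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0],       # identical to the first vector
              [-1.0, -2.0, -3.0]])   # opposite direction
C = lsh_codes(X, 16)
print(hamming(C[0], C[1]), hamming(C[0], C[2]))
```

    Identical vectors get identical codes, while dissimilar vectors disagree on most bits; candidate images can then be bucketed by code prefix instead of compared exhaustively.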

  20. Measuring and Predicting Tag Importance for Image Retrieval.

    Science.gov (United States)

    Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay

    2017-12-01

    Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between visual and textual modalities during MIR training. This will further lead to degraded retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model to jointly exploit visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual feature and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.

  1. Efficient image retrieval with multiple distance measures

    Science.gov (United States)

    Berman, Andrew P.; Shapiro, Linda G.

    1997-01-01

    There is a growing need for the ability to query image databases based on image content rather than strict keyword search. Most current image database systems that perform query by content require a distance computation for each image in the database. Distance computations can be time consuming, limiting the usability of such systems. There is thus a need for indexing systems and algorithms that can eliminate candidate images without performing distance calculations. As user needs may change from session to session, there is also a need for run-time creation of distance measures. In this paper, we introduce FIDS, or 'Flexible Image Database System'. FIDS allows the user to query the database based on user-defined polynomial combinations of predefined distance measures. Using an indexing scheme and algorithms based on the triangle inequality, FIDS can return matches to the query image without directly comparing the query image to much of the database. FIDS is currently being tested on a database of eighteen hundred images.
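    The triangle-inequality indexing behind FIDS rests on the bound d(q, x) >= |d(q, k) - d(x, k)| for any key image k: if the bound already exceeds the search threshold, x can be eliminated without ever computing d(q, x). A minimal sketch (toy 1-D "images"; the function names are ours, not the FIDS API):

```python
import numpy as np

def prune_candidates(q, keys, db_to_keys, dist, threshold):
    """Triangle-inequality pruning: a database item whose lower bound
    |d(q,k) - d(x,k)| exceeds the threshold for some key k cannot be
    within `threshold` of the query, so it is dropped without a full
    distance computation."""
    q_to_keys = np.array([dist(q, k) for k in keys])
    lower = np.max(np.abs(db_to_keys - q_to_keys), axis=1)
    return np.nonzero(lower <= threshold)[0]   # survivors need a full check

dist = lambda a, b: abs(a - b)
db = [0., 1., 5., 9.]
keys = [0., 9.]
# Precomputed offline: distance of every database item to every key.
db_to_keys = np.array([[dist(x, k) for k in keys] for x in db])
print(prune_candidates(0.5, keys, db_to_keys, dist, threshold=1.0))  # [0 1]
```

    Only items 0 and 1 survive; items 2 and 3 are eliminated by the bound alone, which is where the saving over exhaustive comparison comes from.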

  2. Feature Selection for Image Retrieval based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Preeti Kushwaha

    2016-12-01

    Full Text Available This paper describes the development and implementation of feature selection for content-based image retrieval. We are working on a CBIR system with a new efficient technique. In this system, we use multi-feature extraction covering colour, texture and shape. Three techniques are used for feature extraction: colour moments, the gray level co-occurrence matrix, and the edge histogram descriptor. To reduce the curse of dimensionality and find the best optimal features from the feature set, feature selection based on a genetic algorithm is applied. These features are divided into similar image classes using clustering for fast retrieval and improved execution time. Clustering is done by the k-means algorithm. The experimental results show that feature selection using GA reduces retrieval time and also increases retrieval precision, thus giving better and faster results than a normal image retrieval system. The results also show the precision and recall of the proposed approach compared to the previous approach for each image class. The CBIR system is more efficient and performs better using feature selection based on a genetic algorithm.
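    The GA-based selection step can be sketched with a minimal genetic algorithm over feature bitmasks (toy fitness function; in the paper the fitness would be derived from retrieval precision, and the operators here are generic, not the authors' exact ones):

```python
import random

def ga_select(n_features, fitness, pop_size=20, generations=30, seed=1):
    """Minimal genetic algorithm over feature bitmasks: tournament
    selection, one-point crossover, bit-flip mutation, with elitism."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop_size)]
    best = max(population, key=fitness)
    for _ in range(generations):
        def tournament():
            return max(rng.sample(population, 3), key=fitness)
        children = []
        for _ in range(pop_size):
            a, b = tournament(), tournament()
            cut = rng.randrange(1, n_features)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                    # bit-flip mutation
                child[rng.randrange(n_features)] ^= 1
            children.append(child)
        population = children
        best = max(population + [best], key=fitness)  # keep best-so-far
    return best

# Toy fitness: features 0-2 are informative, the rest only add cost.
weights = [3, 3, 3, -1, -1, -1, -1, -1]
fitness = lambda mask: sum(w * m for w, m in zip(weights, mask))
print(ga_select(8, fitness))
```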

  3. Dictionary Pruning with Visual Word Significance for Medical Image Retrieval

    Science.gov (United States)

    Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G.; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei

    2016-01-01

    Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency. PMID:27688597

  4. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

    Full Text Available A content-based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used in color-based retrieval for histopathological images are the color co-occurrence matrix (CCM) and the histogram with meta features. For texture-based retrieval, GLCM (gray level co-occurrence matrix) and the local binary pattern (LBP) were used. For shape-based retrieval, Canny edge detection and Otsu's method with multivariable thresholding were used. Texture- and shape-based retrieval were implemented using MRI (magnetic resonance) images. The most remarkable characteristic of the article is its content-based approach for each medical imaging modality. Our efforts focused on the initial visual search. In our experiment, the histogram with meta features in color-based retrieval for histopathological images shows a precision of 60% and recall of 30%, whereas GLCM in texture-based retrieval for MRI images shows a precision of 70% and recall of 20%. Shape-based retrieval for MRI images shows a precision of 50% and recall of 25%. The retrieval results show that this simple approach is successful.
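    The precision and recall figures quoted in this abstract follow the standard retrieval definitions, which can be computed directly:

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of retrieved images that are relevant;
    recall = fraction of relevant images that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# Hypothetical result list: 2 of 5 retrieved images are among 6 relevant ones.
p, r = precision_recall(retrieved=[1, 2, 3, 4, 5], relevant=[1, 2, 6, 7, 8, 9])
print(p, r)  # 0.4 0.3333...
```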

  5. SOFTWARE FOR REGIONS OF INTEREST RETRIEVAL ON MEDICAL 3D IMAGES

    Directory of Open Access Journals (Sweden)

    G. G. Stromov

    2014-01-01

    Full Text Available Background. Implementation of software for retrieval of regions of interest (ROIs) in 3D medical images is described in this article. It has been tested against a large volume of model MRIs. Material and methods. We tested the software against normal and pathological (severe multiple sclerosis) model MRIs from the BrainWeb resource. The technological stack is based on open-source cross-platform solutions. We implemented the storage system on MariaDB (an open-source fork of MySQL) with PL/SQL extensions. Python 2.7 scripting was used for automation of extract-transform-load operations. The computational core is written in Java 7 with the Spring framework 3. MongoDB was used as a cache in the cluster of workstations. Maven 3 was chosen as the dependency manager and build system; the project is hosted on GitHub. Results. As testing on SSMU's LAN has shown, the software quite efficiently retrieves ROIs that match the morphological substratum in pathological MRIs. Conclusion. Automation of the diagnostic process using medical imaging reduces the subjective component in decision making and increases the availability of high-tech medicine. The software shown in this article is a complete solution for ROI retrieval and segmentation on model medical images in fully automated mode. We would like to thank Robert Vincent for great help with consulting on usage of the BrainWeb resource.

  6. Applying Semantic Web technologies to improve the retrieval, credibility and use of health-related web resources.

    Science.gov (United States)

    Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela

    2011-06-01

    The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.

  7. Efficient Retrieval of the Top-k Most Relevant Spatial Web Objects

    DEFF Research Database (Denmark)

    Cong, Gao; Jensen, Christian Søndergaard; Wu, Dingming

    2009-01-01

    The conventional Internet is acquiring a geo-spatial dimension. Web documents are being geo-tagged, and geo-referenced objects such as points of interest are being associated with descriptive text documents. The resulting fusion of geo-location and documents enables a new kind of top-k query...... that takes into account both location proximity and text relevancy. To our knowledge, only naive techniques exist that are capable of computing a general web information retrieval query while also taking location into account. This paper proposes a new indexing framework for location-aware top-k text...

  8. Recent advances in intelligent image search and video retrieval

    CERN Document Server

    2017-01-01

    This book initially reviews the major feature representation and extraction methods and effective learning and recognition approaches, which have broad applications in the context of intelligent image search and video retrieval. It subsequently presents novel methods, such as improved soft assignment coding, Inheritable Color Space (InCS) and the Generalized InCS framework, the sparse kernel manifold learner method, the efficient Support Vector Machine (eSVM), and the Scale-Invariant Feature Transform (SIFT) features in multiple color spaces. Lastly, the book presents clothing analysis for subject identification and retrieval, and performance evaluation methods of video analytics for traffic monitoring. Digital images and videos are proliferating at an amazing speed in the fields of science, engineering and technology, media and entertainment. With the huge accumulation of such data, keyword searches and manual annotation schemes may no longer be able to meet the practical demand for retrieving relevant conte...

  9. Content-based document image retrieval in complex document collections

    Science.gov (United States)

    Agam, G.; Argamon, S.; Frieder, O.; Grossman, D.; Lewis, D.

    2007-01-01

    We address the problem of content-based image retrieval in the context of complex document images. Complex documents typically start out on paper and are then electronically scanned. These documents have rich internal structure and might only be available in image form. Additionally, they may have been produced by a combination of printing technologies (or by handwriting); and include diagrams, graphics, tables and other non-textual elements. Large collections of such complex documents are commonly found in legal and security investigations. The indexing and analysis of large document collections is currently limited to textual features based OCR data and ignore the structural context of the document as well as important non-textual elements such as signatures, logos, stamps, tables, diagrams, and images. Handwritten comments are also normally ignored due to the inherent complexity of offline handwriting recognition. We address important research issues concerning content-based document image retrieval and describe a prototype for integrated retrieval and aggregation of diverse information contained in scanned paper documents we are developing. Such complex document information processing combines several forms of image processing together with textual/linguistic processing to enable effective analysis of complex document collections, a necessity for a wide range of applications. Our prototype automatically generates rich metadata about a complex document and then applies query tools to integrate the metadata with text search. To ensure a thorough evaluation of the effectiveness of our prototype, we are developing a test collection containing millions of document images.

  10. Harvesting image databases from the Web.

    Science.gov (United States)

    Schroff, Florian; Criminisi, Antonio; Zisserman, Andrew

    2011-04-01

    The objective of this work is to automatically generate a large number of images for a specified object class. A multimodal approach employing text, metadata, and visual features is used to gather many high-quality images from the Web. Candidate images are obtained by a text-based Web search querying on the object identifier (e.g., the word penguin). The Webpages and the images they contain are downloaded. The task is then to remove irrelevant images and rerank the remainder. First, the images are reranked based on the text surrounding the image and metadata features. A number of methods are compared for this reranking. Second, the top-ranked images are used as (noisy) training data and an SVM visual classifier is learned to improve the ranking further. We investigate the sensitivity of the cross-validation procedure to this noisy training data. The principal novelty of the overall method is in combining text/metadata and visual features in order to achieve a completely automatic ranking of the images. Examples are given for a selection of animals, vehicles, and other classes, totaling 18 classes. The results are assessed by precision/recall curves on ground-truth annotated data and by comparison to previous approaches, including those of Berg and Forsyth and Fergus et al.

  11. Query Interpretation – an Application of Semiotics in Image Retrieval

    NARCIS (Netherlands)

    Boer, M.H.T. de; Brandt, P.; Sappelli, M.; Daniele, L.M.; Schutte, K.; Kraaij, W.

    2015-01-01

    One of the challenges in the field of content-based image retrieval is to bridge the semantic gap that exists between the information extracted from visual data using classifiers, and the interpretation of this data made by the end users. The semantic gap is a cascade of 1) the transformation of

  12. AMARSI: Aerosol modeling and retrieval from multi-spectral imagers

    NARCIS (Netherlands)

    Leeuw, G. de; Curier, R.L.; Staroverova, A.; Kokhanovsky, A.; Hoyningen-Huene, W. van; Rozanov, V.V.; Burrows, J.P.; Hesselmans, G.; Gale, L.; Bouvet, M.

    2008-01-01

    The AMARSI project aims at the development and validation of aerosol retrieval algorithms over ocean. One algorithm will be developed for application with data from the Multi Spectral Imager (MSI) on EarthCARE. A second algorithm will be developed using the combined information from AATSR and MERIS,

  13. Content-Based Image Retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Moens, Marie-Francine; van den Broek, Egon; Vuurpijl, L.G.; de Brusser, Rik; Kisters, P.M.F.; Hiemstra, Djoerd; Kraaij, Wessel; von Schmid, J.C.M.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged as non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on 11

  14. Grid-Independent Compressive Imaging and Fourier Phase Retrieval

    Science.gov (United States)

    Liao, Wenjing

    2013-01-01

    This dissertation is composed of two parts. In the first part techniques of band exclusion(BE) and local optimization(LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…

  15. Image Retrieval Based on Wavelet Features

    Science.gov (United States)

    Murtagh, F.

    2006-04-01

    A dominant (additive, stationary) Gaussian noise component in image data will ensure that wavelet coefficients are of Gaussian distribution, and in such a case Shannon entropy quantifies the wavelet transformed data well. But we find that both Gaussian and long tailed distributions may well hold in practice for wavelet coefficients. We investigate entropy-related features based on different wavelet transforms and the newly developed curvelet transform. Using a materials grading case study, we find that second, third, fourth order moments allow 100% successful test set discrimination.
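    The moment features of wavelet coefficients can be sketched with a one-level 2-D Haar transform (a numpy-only stand-in for the wavelet and curvelet transforms used in the study):

```python
import numpy as np

def haar_detail(img):
    """One-level 2-D Haar transform; returns the three detail subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 2.0   # detail subband 1
    hl = (a + b - c - d) / 2.0   # detail subband 2
    hh = (a - b - c + d) / 2.0   # detail subband 3
    return lh, hl, hh

def moment_features(img):
    """Second, third, and fourth central moments of the wavelet
    coefficients, computed per subband and concatenated."""
    feats = []
    for sub in haar_detail(img.astype(float)):
        x = sub.ravel() - sub.mean()
        feats += [np.mean(x**2), np.mean(x**3), np.mean(x**4)]
    return np.array(feats)

flat = np.ones((8, 8))
print(moment_features(flat))  # all zeros for a constant image
```

    A constant image yields zero detail coefficients, while noisy or strongly textured images produce larger moments, which is what makes these features discriminative for grading.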

  16. IMAGE-BASED AIRBORNE LiDAR POINT CLOUD ENCODING FOR 3D BUILDING MODEL RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Y.-C. Chen

    2016-06-01

    Full Text Available With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts in an airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of data encoding is that the models in the database and the input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometry surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors can be extracted using spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show

  17. Indexing natural images for retrieval based on Kansei factors

    Science.gov (United States)

    Black, John A., Jr.; Kahol, Kanav; Tripathi, Priyamvada; Kuchi, Prem; Panchanathan, Sethuraman

    2004-06-01

    Current image indexing methods are based on measures of visual content. However, this approach provides only a partial solution to the image retrieval problem. For example, an artist might want to retrieve an image (for use in an advertising campaign) that evokes a particular "feeling" in the viewer. One technique for measuring evoked feelings, which originated in Japan, indexes images based on the inner impression (i.e. the kansei) experienced by a person while viewing an image or object: impressions such as busy, elegant, romantic, or lavish. The aspects of the image that evoke this inner impression in the viewer are called kansei factors. The challenge in kansei research is to enumerate those factors, with the ultimate goal of indexing images with the "inner impression" that viewers experience. Thus, the focus is on the viewer, rather than on the image, and similarity measures derived from kansei indexing represent similarities in inner experience, rather than visual similarity. This paper presents the results of research that indexes images based on a set of kansei impressions, and then looks for correlations between that indexing and traditional content-based indexing. The goal is to allow the indexing of images based on the inner impressions they evoke, using visual content.
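    Looking for correlations between kansei-based and content-based indexings amounts to comparing two rankings of the same image set; a tie-free Spearman rank correlation is a minimal sketch of such a comparison (assumes no tied scores):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation between two sets of per-image scores,
    e.g. a kansei-based index and a content-based one (no ties assumed)."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean(); ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Perfectly agreeing rankings give +1, reversed rankings give -1.
print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```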

  18. Image based book cover recognition and retrieval

    Science.gov (United States)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work we develop a graphical user interface in MATLAB for users to check information related to books in real time. We take photos of the book cover using the GUI; the MSER algorithm then automatically detects features from the input image, after which non-text features are filtered out based on morphological differences between text and non-text regions. We implemented a text character alignment algorithm which improves the accuracy of the original text detection. We also examine the built-in MATLAB OCR recognition algorithm and a commonly used open-source OCR; to obtain better detection results, a post-detection algorithm and natural language processing are applied to perform word correction and suppress false detections. Finally, the detection result is linked to the internet to perform online matching. More than 86% accuracy can be obtained by this algorithm.

  19. Click prediction for web image reranking using multimodal sparse coding.

    Science.gov (United States)

    Yu, Jun; Rui, Yong; Tao, Dacheng

    2014-05-01

    Image reranking is effective for improving the performance of a text-based image search. However, existing reranking algorithms are limited for two main reasons: 1) the textual meta-data associated with images is often mismatched with their actual visual content and 2) the extracted visual features do not accurately describe the semantic similarities between images. Recently, user click information has been used in image reranking, because clicks have been shown to more accurately describe the relevance of retrieved images to search queries. However, a critical problem for click-based methods is the lack of click data, since only a small number of web images have actually been clicked on by users. Therefore, we aim to solve this problem by predicting image clicks. We propose a multimodal hypergraph learning-based sparse coding method for image click prediction, and apply the obtained click data to the reranking of images. We adopt a hypergraph to build a group of manifolds, which explore the complementarity of different features through a group of weights. Unlike a graph that has an edge between two vertices, a hyperedge in a hypergraph connects a set of vertices, and helps preserve the local smoothness of the constructed sparse codes. An alternating optimization procedure is then performed, and the weights of different modalities and the sparse codes are simultaneously obtained. Finally, a voting strategy is used to describe the predicted click as a binary event (click or no click), from the images' corresponding sparse codes. Thorough empirical studies on a large-scale database including nearly 330 K images demonstrate the effectiveness of our approach for click prediction when compared with several other methods. Additional image reranking experiments on real-world data show the use of click prediction is beneficial to improving the performance of prominent graph-based image reranking algorithms.

  1. Color-Based Image Retrieval from High-Similarity Image Databases

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Carstensen, Jens Michael

    2003-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce...... a method for HSID retrieval using a similarity measure based on a linear combination of Jeffreys-Matusita (JM) distances between distributions of color (and color derivatives) estimated from a set of automatically extracted image regions. The weight coefficients are estimated based on optimal retrieval...

  2. Web Based Distributed Coastal Image Analysis System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops Web based distributed image analysis system processing the Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  3. Color-Based Image Retrieval Using Perceptually Modified Hausdorff Distance

    Directory of Open Access Journals (Sweden)

    Park BoGun

    2008-01-01

    Full Text Available In most content-based image retrieval systems, color information is extensively used for its simplicity and generality. Due to its compactness in characterizing global information, a uniform quantization of colors, or a histogram, has been the most commonly used color descriptor. However, a cluster-based representation, or a signature, has been proven to be more compact and theoretically sound than a histogram for increasing discriminatory power and reducing the gap between human perception and computer-aided retrieval systems. Despite these advantages, only a few papers have broached dissimilarity measures based on the cluster-based nonuniform quantization of colors. In this paper, we extract a perceptual representation of an original color image, a statistical signature obtained by modifying the general color signature, which consists of a set of points with statistical volume. We also present a novel dissimilarity measure for statistical signatures called the Perceptually Modified Hausdorff Distance (PMHD), which is based on the Hausdorff distance. As a result, the proposed retrieval system views an image as a statistical signature, and uses the PMHD as the metric between statistical signatures. The precision versus recall results show that the proposed dissimilarity measure generally outperforms all other dissimilarity measures on an unmodified commercial image database.
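    The PMHD is a modification of the classical Hausdorff distance between point sets; the unmodified distance it starts from can be computed directly (toy 2-D point sets, not color signatures):

```python
import numpy as np

def directed_hausdorff(A, B):
    """max over a in A of the distance from a to its nearest point in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = np.array([[0., 0.], [1., 0.]])
B = np.array([[0., 0.], [1., 1.]])
print(hausdorff(A, B))  # 1.0
```

    The PMHD then replaces the plain point-to-point distances with perceptually weighted ones over signature clusters; the min/max structure above is what it inherits.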

  4. Heterogeneous Graph Propagation for Large-Scale Web Image Search.

    Science.gov (United States)

    Xie, Lingxi; Tian, Qi; Zhou, Wengang; Zhang, Bo

    2015-11-01

    State-of-the-art web image search frameworks are often based on the bag-of-visual-words (BoVW) model and the inverted index structure. Despite their simplicity, efficiency, and scalability, they often suffer from low precision and/or recall, due to the limited stability of local features and the considerable information loss at the quantization stage. To refine the quality of retrieved images, various postprocessing methods have been adopted after the initial search process. In this paper, we investigate the online querying process from a graph-based perspective. We introduce a heterogeneous graph model containing both image and feature nodes explicitly, and propose an efficient reranking approach consisting of two successive modules, i.e., incremental query expansion and image-feature voting, to improve recall and precision, respectively. Compared with conventional reranking algorithms, our method does not require the geometric information of visual words and therefore has low time and memory consumption. Moreover, our method is independent of the initial search process, and can cooperate with many BoVW-based image search pipelines or be adopted after other postprocessing algorithms. We evaluate our approach on large-scale image search tasks and verify its competitive search performance.

  5. Deeply learnt hashing forests for content based image retrieval in prostate MR images

    Science.gov (United States)

    Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin

    2016-03-01

    The deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deeply learnt convolutional neural networks, used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity-preserving feature descriptor called Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validations on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and does not depend on external image standardization such as image normalization and registration. This image retrieval method is generalizable and is well suited for retrieval in heterogeneous databases, other imaging modalities, and other anatomies.

  6. Bat-Inspired Algorithm Based Query Expansion for Medical Web Information Retrieval.

    Science.gov (United States)

    Khennak, Ilyes; Drias, Habiba

    2017-02-01

    With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people of several backgrounds now use Web search engines to acquire medical information, including information about a specific disease, a medical treatment, or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulty forming appropriate queries to articulate their inquiries, which renders their search queries imprecise owing to the use of unclear keywords. The use of these ambiguous and vague queries to describe patients' needs has resulted in a failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is query expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the online medical information database, show that the proposed approach is more effective and efficient compared to the baseline.
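
    A heavily simplified discrete variant of the idea can be sketched as follows. The full bat algorithm tracks frequency, velocity, loudness, and pulse rate per bat; here those are collapsed into a single bit-flip probability, and the relevance score is a toy stand-in for the retrieval-effectiveness objective the paper would use, so every name and number below is illustrative.

```python
import random

def bat_expand(candidates, score, n_bats=8, n_iter=100, flip_p=0.2, seed=1):
    """Search for a good subset of candidate expansion terms.

    Each 'bat' is a binary inclusion mask over the candidates; bats move toward
    the best mask found so far, with random bit flips standing in for the
    loudness/pulse-rate randomization of the full bat algorithm."""
    rng = random.Random(seed)
    n = len(candidates)
    bats = [[rng.random() < 0.5 for _ in range(n)] for _ in range(n_bats)]
    best = max(bats, key=score)
    for _ in range(n_iter):
        for i, mask in enumerate(bats):
            trial = [(not b) if rng.random() < flip_p else b for b in best]
            if score(trial) > score(mask):
                bats[i] = trial
        best = max(bats + [best], key=score)
    return [t for t, keep in zip(candidates, best) if keep]

# Toy objective: each term has a fixed relevance gain, long queries are penalized.
terms = ["myocardial", "infarction", "cardiac", "lunch", "holiday"]
gain = {"myocardial": 3.0, "infarction": 3.0, "cardiac": 1.5,
        "lunch": -2.0, "holiday": -2.0}
score = lambda mask: sum(gain[t] for t, k in zip(terms, mask) if k) - 0.5 * sum(mask)
```

    Note that the expanded-query length falls out of the search rather than being fixed in advance, which mirrors the abstract's claim that the length is determined empirically.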

  7. Active reranking for web image search.

    Science.gov (United States)

    Tian, Xinmei; Tao, Dacheng; Hua, Xian-Sheng; Wu, Xiuqing

    2010-03-01

    Image search reranking methods usually fail to capture the user's intention when the query term is ambiguous. Therefore, reranking with user interactions, or active reranking, is highly demanded to effectively improve the search performance. The essential problem in active reranking is how to target the user's intention. To achieve this goal, this paper presents a structural-information-based sample selection strategy to reduce the user's labeling effort. Furthermore, to localize the user's intention in the visual feature space, a novel local-global discriminative dimension reduction algorithm is proposed. In this algorithm, a submanifold is learned by transferring the local geometry and the discriminative information from the labeled images to the whole (global) image database. Experiments on both synthetic datasets and a real Web image search dataset demonstrate the effectiveness of the proposed active reranking scheme, including both the structural-information-based active sample selection strategy and the local-global discriminative dimension reduction algorithm.

  8. Using deep learning for content-based medical image retrieval

    Science.gov (United States)

    Sun, Qinpei; Yang, Yuanyuan; Sun, Jianyong; Yang, Zhiming; Zhang, Jianguo

    2017-03-01

    Content-based medical image retrieval (CBMIR) has been a highly active research area in recent years. The retrieval performance of a CBMIR system crucially depends on the feature representation, which has been extensively studied by researchers for decades. Although a variety of techniques have been proposed, feature representation remains one of the most challenging problems in current CBMIR research, mainly due to the well-known "semantic gap" between the low-level image pixels captured by machines and the high-level semantic concepts perceived by humans [1]. Recent years have witnessed important advances in machine learning. One breakthrough technique is known as "deep learning". Unlike conventional machine learning methods that often use "shallow" architectures, deep learning mimics the human brain, which is organized in a deep architecture and processes information through multiple stages of transformation and representation. This means that we do not need to spend enormous effort extracting features manually. In this presentation, we propose a novel framework which uses deep learning to retrieve medical images, improving the accuracy and speed of CBMIR in an integrated RIS/PACS.

  9. Atmospheric water vapor retrieval from Landsat 8 thermal infrared images

    Science.gov (United States)

    Ren, Huazhong; Du, Chen; Liu, Rongyuan; Qin, Qiming; Yan, Guangjian; Li, Zhao-Liang; Meng, Jinjie

    2015-03-01

    Atmospheric water vapor (wv) is required for the accurate retrieval of land surface temperature from remote sensing data and for other applications. This work aims to estimate wv from Landsat 8 Thermal InfraRed Sensor (TIRS) images using a new modified split-window covariance-variance ratio (MSWCVR) method based on the brightness temperatures of two thermal infrared bands. Results show that the MSWCVR method can theoretically retrieve wv with an accuracy better than 0.3 g/cm2 for a dry atmosphere. The retrieved wv was compared with AERONET (AErosol RObotic NETwork) ground-measured data and MODIS (Moderate Resolution Imaging Spectroradiometer) products. The results show that the wv retrieved from the TIRS data is highly correlated with the wv of AERONET and MODIS but is generally larger. This difference is probably attributable to the uncertainty of radiometric calibration and to stray light from outside the field of view of the TIRS instrument in the current images. Consequently, the data quality and radiometric calibration of the TIRS data should be improved in the future.
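
    The core of split-window covariance-variance ratio methods is that, over a window of N neighboring pixels, the ratio of the band-to-band covariance to the variance of one band approximates the transmittance ratio of the two bands, which in turn maps to column water vapor. A minimal sketch, with the caveat that the quadratic coefficients below are hypothetical placeholders, not the paper's Landsat 8 values:

```python
def swcvr_ratio(t10, t11):
    """Covariance-variance ratio of band-10 vs band-11 brightness temperatures
    over a window of N neighboring pixels; approximates the transmittance ratio."""
    n = len(t10)
    m10 = sum(t10) / n
    m11 = sum(t11) / n
    cov = sum((a - m10) * (b - m11) for a, b in zip(t10, t11))
    var = sum((a - m10) ** 2 for a in t10)
    return cov / var

def water_vapor(ratio, c0, c1, c2):
    """Quadratic mapping from the ratio to wv in g/cm2 (c0..c2 are
    sensor-specific calibration coefficients; any values used here are
    hypothetical)."""
    return c0 + c1 * ratio + c2 * ratio ** 2
```

    If band 11 is an affine function of band 10 across the window, the ratio recovers the slope exactly, which is why the method needs some temperature variability within each window.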

  10. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm

    Directory of Open Access Journals (Sweden)

    Mengzhao Yang

    2017-07-01

    Full Text Available The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving them for ocean disaster analysis, such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusing it with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better data-storage performance than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear with an increasing number of RS images, which proves that image retrieval using our method is efficient.
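
    A single-machine, flat-kernel mean-shift sketch in plain Python, for readers unfamiliar with the algorithm at the heart of this record. The Hadoop/MapReduce distribution is omitted, and the canopy fusion is only noted: a canopy pre-pass with a cheap distance would restrict each neighborhood search to one coarse partition instead of the whole dataset.

```python
def _dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean_shift(points, bandwidth, n_iter=30):
    """Shift a copy of every point to the mean of its neighbors until modes emerge,
    then merge modes that converged to (almost) the same location."""
    h2 = bandwidth ** 2
    modes = [list(p) for p in points]
    for _ in range(n_iter):
        for i, m in enumerate(modes):
            neigh = [p for p in points if _dist2(m, p) <= h2]
            modes[i] = [sum(c) / len(neigh) for c in zip(*neigh)]
    clusters, labels = [], []
    for m in modes:
        for j, c in enumerate(clusters):
            if _dist2(m, c) <= h2:
                labels.append(j)
                break
        else:
            clusters.append(m)
            labels.append(len(clusters) - 1)
    return clusters, labels
```

    Unlike K-means, the number of clusters is not fixed in advance; it emerges from the bandwidth, which is one reason mean-shift suits heterogeneous image collections.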

  11. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm

    Science.gov (United States)

    Song, Wei; Mei, Haibin

    2017-01-01

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving them for ocean disaster analysis, such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusing it with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better data-storage performance than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear with an increasing number of RS images, which proves that image retrieval using our method is efficient. PMID:28737699

  12. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm.

    Science.gov (United States)

    Yang, Mengzhao; Song, Wei; Mei, Haibin

    2017-07-23

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving them for ocean disaster analysis, such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusing it with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better data-storage performance than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear with an increasing number of RS images, which proves that image retrieval using our method is efficient.

  13. Region-Based Color Image Indexing and Retrieval

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper a region-based color image indexing and retrieval algorithm is presented. As a basis for the indexing, a novel K-Means segmentation algorithm is used, modified so as to take into account the coherence of the regions. A new color distance is also defined for this algorithm. Based...... on the extracted regions, characteristic features are estimated using color, texture and shape information. An important and unique aspect of the algorithm is that, in the context of similarity-based querying, the user is allowed to view the internal representation of the submitted image and the query results...
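
    The record above is truncated, so the paper's coherence modification and its custom color distance are not available; the sketch below is plain Lloyd's K-means over per-pixel feature vectors. Appending (normalized) spatial coordinates to each pixel's color, as the example feature layout suggests, is one simple, generic way to encourage spatially coherent regions.

```python
import random

def kmeans(features, k, n_iter=15, seed=0):
    """Lloyd's K-means over feature vectors (e.g. [R, G, B, x, y] per pixel)."""
    rng = random.Random(seed)
    centers = [list(f) for f in rng.sample(features, k)]
    labels = [0] * len(features)
    for _ in range(n_iter):
        # assignment step: nearest center by squared Euclidean distance
        for i, f in enumerate(features):
            labels[i] = min(range(k),
                            key=lambda j: sum((a - b) ** 2 for a, b in zip(f, centers[j])))
        # update step: recompute each center as the mean of its members
        for j in range(k):
            members = [f for f, l in zip(features, labels) if l == j]
            if members:
                centers[j] = [sum(c) / len(members) for c in zip(*members)]
    return centers, labels
```

    The resulting label map partitions the image into regions, from which color, texture, and shape features can then be computed per region for indexing.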

  14. The application of similar image retrieval in electronic commerce.

    Science.gov (United States)

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    The traditional online shopping platform (OSP), which searches product information by keywords, faces three problems: an indirect search mode, a large search space, and inaccurate search results. To solve these problems, we investigate the application of similar image retrieval in electronic commerce. Aiming to improve customers' experience and provide merchants with accurate advertising, we design a reasonable and extensible electronic commerce application system, which includes three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, so that consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can be used to provide accurate internet marketing for enterprises. Experiments show the effectiveness of the constructed system.

  15. The Application of Similar Image Retrieval in Electronic Commerce

    Science.gov (United States)

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    The traditional online shopping platform (OSP), which searches product information by keywords, faces three problems: an indirect search mode, a large search space, and inaccurate search results. To solve these problems, we investigate the application of similar image retrieval in electronic commerce. Aiming to improve customers' experience and provide merchants with accurate advertising, we design a reasonable and extensible electronic commerce application system, which includes three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, so that consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can be used to provide accurate internet marketing for enterprises. Experiments show the effectiveness of the constructed system. PMID:24883411

  16. The Application of Similar Image Retrieval in Electronic Commerce

    Directory of Open Access Journals (Sweden)

    YuPing Hu

    2014-01-01

    Full Text Available The traditional online shopping platform (OSP), which searches product information by keywords, faces three problems: an indirect search mode, a large search space, and inaccurate search results. To solve these problems, we investigate the application of similar image retrieval in electronic commerce. Aiming to improve customers' experience and provide merchants with accurate advertising, we design a reasonable and extensible electronic commerce application system, which includes three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, so that consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can be used to provide accurate internet marketing for enterprises. Experiments show the effectiveness of the constructed system.

  17. Toward Content Based Image Retrieval with Deep Convolutional Neural Networks.

    Science.gov (United States)

    Sklan, Judah E S; Plassard, Andrew J; Fabbri, Daniel; Landman, Bennett A

    2015-03-19

    Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and, eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep convolutional neural networks (dCNNs), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing the dimensionality of an input scaled to 128×128 to an output encoded layer of 4×384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques.

  18. Brief communication: 3-D reconstruction of a collapsed rock pillar from Web-retrieved images and terrestrial lidar data - the 2005 event of the west face of the Drus (Mont Blanc massif)

    Science.gov (United States)

    Guerin, Antoine; Abellán, Antonio; Matasci, Battista; Jaboyedoff, Michel; Derron, Marc-Henri; Ravanel, Ludovic

    2017-07-01

    In June 2005, a series of major rockfall events completely wiped out the Bonatti Pillar located in the legendary Drus west face (Mont Blanc massif, France). Terrestrial lidar scans of the west face were acquired after this event, but no pre-event point cloud is available. Thus, in order to reconstruct the volume and the shape of the collapsed blocks, a 3-D model has been built using photogrammetry (structure-from-motion (SfM) algorithms) based on 30 pictures collected on the Web. All these pictures were taken between September 2003 and May 2005. We then reconstructed the shape and volume of the fallen compartment by comparing the SfM model with terrestrial lidar data acquired in October 2005 and November 2011. The volume is calculated to be 292 680 m3 (±5.6 %). This result is close to the value previously assessed by Ravanel and Deline (2008) for this same rock avalanche (265 000 ± 10 000 m3). The difference between these two estimates can be explained by the rounded shape of the volume determined by photogrammetry, which may lead to a volume overestimation. However, it cannot be excluded that the volume calculated by Ravanel and Deline (2008) is slightly underestimated, since the thickness of the blocks was assessed manually from historical photographs.

  19. Spatio-temporal multi-modality ontology for indexing and retrieving satellite images

    OpenAIRE

    MESSOUDI, Wassim; FARAH, Imed Riadh; SAHEB ETTABAA, Karim; Ben Ghezala, Henda; Solaiman, Basel

    2009-01-01

    International audience; This paper presents a spatio-temporal multi-modality ontology for indexing and retrieving satellite images at a high level, to improve retrieval quality and bring semantics into the retrieval process. Our approach is based on three modules: (1) region and feature extraction, (2) ontological indexing, and (3) semantic image retrieval. The first module extracts regions from the satellite image using the fuzzy c-means (FCM) segmentation algorithm...

  20. Exploring access to scientific literature using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Antani, Sameer; Long, Rodney

    2007-03-01

    The number of articles published in the scientific medical literature is continuously increasing, and Web access to the journals is becoming common. Databases such as the SPIE Digital Library and IEEE Xplore, indices such as PubMed, and search engines such as Google provide the user with sophisticated full-text search capabilities. However, information in the images and graphs within these articles is entirely disregarded. In this paper, we quantify the potential impact of using content-based image retrieval (CBIR) to access this non-text data. Based on the Journal Citation Reports (JCR), the journal Radiology was selected for this study. In 2005, 734 articles were published electronically in this journal, including 2,587 figures, which yields a rate of 3.52 figures per article. Furthermore, 56.4% of these figures are composed of several individual panels, i.e. the figure combines different images and/or graphs. According to the Image Cross-Language Evaluation Forum (ImageCLEF), the error rate of automatic identification of medical images is about 15%. Therefore, it is expected that, by applying ImageCLEF-like techniques, 95.5% of articles could already be retrieved by means of CBIR. The challenge for CBIR in scientific literature, however, is the use of local texture properties to analyze individual image panels in composite illustrations. Using local features for content-based image representation, 8.81 images per article are available, and the predicted correctness rate may increase to 98.3%. From this study, we conclude that CBIR may have a high impact in medical literature research and suggest that additional research in this area is warranted.
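
    The abstract's figure counts can be cross-checked with a few lines of arithmetic. The panels-per-composite-figure value below is implied by combining the quoted numbers rather than stated in the paper, so treat it as a derived estimate.

```python
# Recomputing the figure-rate arithmetic quoted in the abstract.
articles = 734
figures = 2587
figures_per_article = figures / articles          # quoted as 3.52

multi_panel_share = 0.564                          # fraction of composite figures
images_per_article = 8.81                          # individual panels per article
total_panels = images_per_article * articles
single_figures = figures * (1 - multi_panel_share)
multi_figures = figures * multi_panel_share
# Implied average panel count of a composite figure (derived, not quoted):
panels_per_composite = (total_panels - single_figures) / multi_figures
```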

  1. World Wide Web platform-independent access to biomedical text/image databases

    Science.gov (United States)

    Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.

    1998-07-01

    The biomedical digital library of the future is expected to provide access to stores of biomedical database information containing text and images. Developing efficient methods for accessing such databases is a research effort at the Lister Hill National Center for Biomedical Communications of the National Library of Medicine. In this paper we examine issues in providing access to databases across the Web and describe a tool we have developed: the Web-based Medical Information Retrieval System (WebMIRS). We address a number of critical issues, including preservation of data integrity, efficient database design, access to documentation, quality of query and results interfaces, capability to export results to other software, and exploitation of multimedia data. WebMIRS is implemented as a Java applet that allows database access to text and to associated image data, without requiring any user software beyond a standard Web browser. The applet implementation allows WebMIRS to run on any hardware platform (such as PCs, the Macintosh, or Unix machines) which supports a Java-enabled Web browser, such as Netscape or Internet Explorer. WebMIRS is being tested on text/x-ray image databases created from the National Health and Nutrition Examination Surveys (NHANES) data collected by the National Center for Health Statistics.

  2. Keynote Talk: Mining the Web 2.0 for Improved Image Search

    Science.gov (United States)

    Baeza-Yates, Ricardo

    There are several semantic sources that can be found on the Web that are either explicit, e.g. Wikipedia, or implicit, e.g. derived from Web usage data. Most of them are related to user-generated content (UGC), or what is today called the Web 2.0. In this talk we show how to use these sources of evidence in Flickr, such as tags, visual annotations, or clicks, which represent the wisdom of the crowds behind UGC, to improve image search. These results are the work of the multimedia retrieval team at Yahoo! Research Barcelona and are already being used in Yahoo! image search. This work is part of a larger effort to produce a virtuous data feedback circuit based on the right combination of many different technologies to leverage the Web itself.

  3. Rotation invariant deep binary hashing for fast image retrieval

    Science.gov (United States)

    Dai, Lai; Liu, Jianming; Jiang, Aiwen

    2017-07-01

    In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised rotation-invariant compact discriminative binary descriptors, obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer that represents the latent concepts dominating the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
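
    The loss idea, stripped of the CNN that actually produces the features, can be sketched in a framework-free way: penalize disagreement between the relaxed binary codes of an image and its rotated copy. The tanh relaxation of the sign function is a common stand-in borrowed from the hashing literature, not necessarily the paper's exact formulation.

```python
import math

def relaxed_code(features):
    """tanh-relaxed binary code; sign(f) would give the final hard bits."""
    return [math.tanh(f) for f in features]

def rotation_loss(f_ref, f_rot):
    """Squared difference between the relaxed codes of a reference image
    and its rotated copy; zero when the codes agree."""
    return sum((a - b) ** 2
               for a, b in zip(relaxed_code(f_ref), relaxed_code(f_rot)))

def hard_bits(features):
    """Final binary descriptor used at retrieval time."""
    return [1 if f >= 0 else 0 for f in features]
```

    Training against this term pushes the network to emit the same code for an image regardless of orientation, which is what makes the final hard bits rotation invariant.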

  4. Content-based histopathological image retrieval for whole slide image database using binary codes

    Science.gov (United States)

    Zheng, Yushan; Jiang, Zhiguo; Ma, Yibing; Zhang, Haopeng; Xie, Fengying; Shi, Huaqiang; Zhao, Yu

    2017-03-01

    Content-based image retrieval (CBIR) has been widely researched for medical images. For histopathological images, two issues need to be carefully considered. The first is that a digital slide is stored as a spatially continuous image with a size of more than 10K x 10K pixels. The second is that the size of the query image varies over a large range according to different diagnostic conditions. Retrieving the eligible regions for a query image from a database that consists of whole slide images (WSIs) is challenging. In this paper, we propose a CBIR framework for a WSI database and size-scalable query images. Each WSI in the database is encoded and stored as a matrix of binary codes. At retrieval time, the query image is first encoded into a set of binary codes and analyzed to pre-choose a set of candidate regions from the database using a hashing method. Then a multi-binary-code similarity measure based on Hamming distance is designed to rank the proposal regions. Finally, the top relevant regions and their locations in the WSIs, along with the diagnostic information, are returned to assist pathologists in diagnosis. The effectiveness of the proposed framework is evaluated on a finely annotated WSI database of epithelial breast tumors. The experimental results show that the proposed framework is both effective and efficient for content-based whole slide image retrieval.
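
    The retrieval core, ranking candidate regions by Hamming distance between sets of binary codes, can be sketched as follows. The aggregation used here (mean over query codes of the minimum distance to the region's codes) is one plausible reading of a multi-binary-code measure, not the paper's exact definition, and the region names are illustrative.

```python
def hamming(a, b):
    """Hamming distance between two binary codes stored as Python integers."""
    return bin(a ^ b).count("1")

def region_distance(query_codes, region_codes):
    """Mean, over the query's codes, of the minimum Hamming distance
    to any of the region's codes."""
    return sum(min(hamming(q, c) for c in region_codes)
               for q in query_codes) / len(query_codes)

def rank_regions(query_codes, regions, top_k=3):
    """regions: list of (region_id, [codes]); returns the top_k closest region ids."""
    ranked = sorted(regions, key=lambda r: region_distance(query_codes, r[1]))
    return [rid for rid, _ in ranked[:top_k]]
```

    Storing codes as integers keeps the distance a single XOR plus a popcount, which is what makes Hamming ranking fast enough for gigapixel slide databases.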

  5. Relevance Feedback in Content Based Image Retrieval: A Review

    Directory of Open Access Journals (Sweden)

    Manesh B. Kokare

    2011-01-01

    Full Text Available This paper provides an overview of the technical achievements in the research area of relevance feedback (RF) in content-based image retrieval (CBIR). Relevance feedback is a powerful technique for effectively improving the performance of CBIR systems. Reducing the semantic gap between low-level features and high-level concepts remains an open research area. The paper covers the current state of the art of research on relevance feedback in CBIR; various relevance feedback techniques and issues in relevance feedback are discussed in detail.

  6. Yale Image Finder (YIF): a new search engine for retrieving biomedical images

    Science.gov (United States)

    Xu, Songhua; McCusker, James; Krauthammer, Michael

    2008-01-01

    Summary: Yale Image Finder (YIF) is a publicly accessible search engine featuring a new way of retrieving biomedical images and associated papers based on the text carried inside the images. Image queries can also be issued against the image caption, as well as words in the associated paper's abstract and title. A typical search scenario using YIF is as follows: a user provides a few search keywords, and the most relevant images are returned and presented as thumbnails. Users can click on an image of interest to retrieve the high-resolution version. In addition, the search engine provides two types of related images: those that appear in the same paper, and those from other papers with similar image content. Retrieved images link back to their source papers, allowing users to find related papers starting from an image of interest. Currently, YIF has indexed over 140 000 images from over 34 000 open access biomedical journal papers. Availability: http://krauthammerlab.med.yale.edu/imagefinder/ Contact: michael.krauthammer@yale.edu PMID:18614584

  7. Dogslife: A web-based longitudinal study of Labrador Retriever health in the UK

    Directory of Open Access Journals (Sweden)

    Clements Dylan N

    2013-01-01

    Full Text Available Abstract Background Dogslife is the first large-scale internet-based longitudinal study of canine health. The study has been designed to examine how environmental and genetic factors influence the health and development of a birth cohort of UK-based pedigree Labrador Retrievers. Results In the first 12 months of the study 1,407 Kennel Club (KC) registered eligible dogs were recruited, at a mean age of 119 days (SD 69 days, range 3 days – 504 days). Recruitment rates varied depending upon the study team’s ability to contact owners. Where owners authorised the provision of contact details 8.4% of dogs were recruited compared to 1.3% where no direct contact was possible. The proportion of dogs recruited was higher for owners who transferred the registration of their puppy from the breeder to themselves with the KC, and for owners who were sent an e-mail or postcard requesting participation in the project. Compliance with monthly updates was highly variable. For the 280 dogs that were aged 400 days or more on the 30th June 2011, we estimated that between 39% and 45% of owners were still actively involved in the project. Initial evaluation suggests that the cohort is representative of the general population of KC registered Labrador Retrievers eligible to enrol with the project. Clinical signs of illness were reported in 44.3% of Labrador Retrievers registered with Dogslife (median age of first illness 138 days), although only 44.1% of these resulted in a veterinary presentation (median age 316 days). Conclusions The web-based platform has enabled the recruitment of a representative population of KC registered Labrador Retrievers, providing the first large-scale longitudinal population-based study of dog health. The use of multiple different methods (e-mail, post and telephone) of contact with dog owners was essential to maximise recruitment and retention of the cohort.

  8. Dogslife: a web-based longitudinal study of Labrador Retriever health in the UK.

    Science.gov (United States)

    Clements, Dylan N; Handel, Ian G; Rose, Erica; Querry, Damon; Pugh, Carys A; Ollier, William Er; Morgan, Kenton L; Kennedy, Lorna J; Sampson, Jeffery; Summers, Kim M; de Bronsvoort, B Mark C

    2013-01-18

    Dogslife is the first large-scale internet-based longitudinal study of canine health. The study has been designed to examine how environmental and genetic factors influence the health and development of a birth cohort of UK-based pedigree Labrador Retrievers. In the first 12 months of the study 1,407 Kennel Club (KC) registered eligible dogs were recruited, at a mean age of 119 days of age (SD 69 days, range 3 days - 504 days). Recruitment rates varied depending upon the study team's ability to contact owners. Where owners authorised the provision of contact details 8.4% of dogs were recruited compared to 1.3% where no direct contact was possible. The proportion of dogs recruited was higher for owners who transferred the registration of their puppy from the breeder to themselves with the KC, and for owners who were sent an e-mail or postcard requesting participation in the project. Compliance with monthly updates was highly variable. For the 280 dogs that were aged 400 days or more on the 30th June 2011, we estimated between 39% and 45% of owners were still actively involved in the project. Initial evaluation suggests that the cohort is representative of the general population of the KC registered Labrador Retrievers eligible to enrol with the project. Clinical signs of illnesses were reported in 44.3% of Labrador Retrievers registered with Dogslife (median age of first illness 138 days), although only 44.1% of these resulted in a veterinary presentation (median age 316 days). The web-based platform has enabled the recruitment of a representative population of KC registered Labrador Retrievers, providing the first large-scale longitudinal population-based study of dog health. The use of multiple different methods (e-mail, post and telephone) of contact with dog owners was essential to maximise recruitment and retention of the cohort.

  9. OntoTrader: an ontological Web trading agent approach for environmental information retrieval.

    Science.gov (United States)

    Iribarne, Luis; Padilla, Nicolás; Ayala, Rosa; Asensio, José A; Criado, Javier

    2014-01-01

    Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. Design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent, and the behavioral framework for the OntoTrader agent in SOLERES, an Environmental Management Information System (EMIS). This framework implements a "Query-Searching/Recovering-Response" information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models and describes the formalization, an experimental testing environment in three scenarios, and a tool which allows our proposal to be evaluated and validated.

  10. OntoTrader: An Ontological Web Trading Agent Approach for Environmental Information Retrieval

    Directory of Open Access Journals (Sweden)

    Luis Iribarne

    2014-01-01

    Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. Design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent, and the behavioral framework for the OntoTrader agent in SOLERES, an Environmental Management Information System (EMIS). This framework implements a “Query-Searching/Recovering-Response” information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models and describes the formalization, an experimental testing environment in three scenarios, and a tool which allows our proposal to be evaluated and validated.

  11. Model-based magnetization retrieval from holographic phase images

    Energy Technology Data Exchange (ETDEWEB)

    Röder, Falk, E-mail: f.roeder@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Vogel, Karin [Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Wolf, Daniel [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Hellwig, Olav [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); AG Magnetische Funktionsmaterialien, Institut für Physik, Technische Universität Chemnitz, D-09126 Chemnitz (Germany); HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wee, Sung Hun [HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wicht, Sebastian; Rellinghaus, Bernd [IFW Dresden, Institute for Metallic Materials, P.O. Box 270116, D-01171 Dresden (Germany)

    2017-05-15

    The phase shift of the electron wave is a useful measure for the projected magnetic flux density of magnetic objects at the nanometer scale. More important for materials science, however, is the knowledge about the magnetization in a magnetic nano-structure. As demonstrated here, a dominating presence of stray fields prohibits a direct interpretation of the phase in terms of magnetization modulus and direction. We therefore present a model-based approach for retrieving the magnetization by considering the projected shape of the nano-structure and assuming a homogeneous magnetization therein. We apply this method to FePt nano-islands epitaxially grown on a SrTiO{sub 3} substrate, which indicates an inclination of their magnetization direction relative to the structural easy magnetic [001] axis. By means of this real-world example, we discuss prospects and limits of this approach. - Highlights: • Retrieval of the magnetization from holographic phase images. • Magnetostatic model constructed for a magnetic nano-structure. • Decomposition into homogeneously magnetized components. • Discretization of each component by elementary cuboids. • Analytic solution for the phase of a magnetized cuboid considered. • Fitting a set of magnetization vectors to experimental phase images.

  12. Retrieval of Remote Sensing Images with Pattern Spectra Descriptors

    Directory of Open Access Journals (Sweden)

    Petra Bosilj

    2016-12-01

    The rapidly increasing volume of visual Earth Observation data calls for effective content-based image retrieval solutions, specifically tailored for their high spatial resolution and heterogeneous content. In this paper, we address this issue with a novel local implementation of the well-known morphological descriptors called pattern spectra. They are computationally efficient histogram-like structures describing the global distribution of arbitrarily defined attributes of connected image components. Besides employing pattern spectra for the first time in this context, our main contribution lies in their dense calculation, at a local scale, thus enabling their combination with sophisticated visual vocabulary strategies. The Merced Landuse/Landcover dataset has been used for comparing the proposed strategy against alternative global and local content description methods, where the introduced approach is shown to yield promising performance.
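
    The pattern spectrum itself is a simple construct: for a growing family of structuring elements, one records how much image "volume" each successively larger opening removes. Below is a minimal NumPy sketch of this global granulometry with square structuring elements — an illustrative toy, not the authors' dense local implementation; the image and sizes are hypothetical.

```python
import numpy as np

def erode(img, k):
    # Grayscale erosion: minimum over a (2k+1) x (2k+1) window (edge padding).
    p = np.pad(img, k, mode="edge")
    return np.array([[p[i:i + 2 * k + 1, j:j + 2 * k + 1].min()
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

def dilate(img, k):
    # Grayscale dilation: maximum over a (2k+1) x (2k+1) window (edge padding).
    p = np.pad(img, k, mode="edge")
    return np.array([[p[i:i + 2 * k + 1, j:j + 2 * k + 1].max()
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

def opening(img, k):
    # Morphological opening: erosion followed by dilation.
    return dilate(erode(img, k), k)

def pattern_spectrum(img, max_k=3):
    # PS[k] = image volume removed when the opening size grows from k to k+1.
    vols = np.array([img.sum()] + [opening(img, k).sum()
                                   for k in range(1, max_k + 1)], dtype=float)
    return vols[:-1] - vols[1:]

# Toy image: a 3x3 block of intensity 1 and a 4x4 block of intensity 2.
img = np.zeros((12, 12))
img[2:5, 2:5] = 1.0
img[7:11, 6:10] = 2.0
ps = pattern_spectrum(img, max_k=3)
```

    Both blocks survive a 3x3 opening but vanish under a 5x5 one, so the whole image volume (9·1 + 16·2 = 41) lands in the second histogram bin — the spectrum is, as the abstract says, a size/attribute distribution of the image's connected structures.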

  13. Efficiency image data retrieval based on asynchronous capability aware spatial search service middleware

    Science.gov (United States)

    Chen, Nengcheng; Chen, Zeqiang; Gong, Jianya

    2007-11-01

    Recent advances in open geospatial web services, such as the Web Coverage Service and corresponding web-ready data processing services, have led to the generation of large amounts of OGC-enabled links on the Internet. Recently a few search engines that are specialised with respect to geographic space have appeared. However, users do not always get the effective OGC WCS link information they expect when searching the Web. How to quickly find the correct spatially aware web service in a heterogeneous distributed environment has become a "bottleneck" of geospatial web-based applications. In order to improve the retrieval efficiency of OGC Web Coverage Service (WCS) on the WWW, a new methodology for retrieving WCS based on clustering capability aware spatial search service middleware is put forward in this paper.

  14. Enhancing Sketch-Based Image Retrieval by Re-Ranking and Relevance Feedback.

    Science.gov (United States)

    Xueming Qian; Xianglong Tan; Yuting Zhang; Richang Hong; Meng Wang

    2016-01-01

    Sketch-based image retrieval often needs to optimize the tradeoff between efficiency and precision. Index structures are typically applied to large-scale databases to realize efficient retrieval. However, the performance can be affected by quantization errors. Moreover, the ambiguity of user-provided examples may also degrade the performance when compared with traditional image retrieval methods. Designing sketch-based image retrieval systems that preserve the index structure is challenging. In this paper, we propose an effective sketch-based image retrieval approach with re-ranking and relevance feedback schemes. Our approach makes full use of the semantics in query sketches and the top-ranked images of the initial results. We also apply relevance feedback to find more relevant images for the input query sketch. The integration of the two schemes results in mutual benefits and improves the performance of sketch-based image retrieval.

  15. Web-Scale Discovery Services Retrieve Relevant Results in Health Sciences Topics Including MEDLINE Content

    Directory of Open Access Journals (Sweden)

    Elizabeth Margaret Stovold

    2017-06-01

    A Review of: Hanneke, R., & O’Brien, K. K. (2016). Comparison of three web-scale discovery services for health sciences research. Journal of the Medical Library Association, 104(2), 109-117. http://dx.doi.org/10.3163/1536-5050.104.2.004 Abstract Objective – To compare the results of health sciences search queries in three web-scale discovery (WSD) services for relevance, duplicate detection, and retrieval of MEDLINE content. Design – Comparative evaluation and bibliometric study. Setting – Six university libraries in the United States of America. Subjects – Three commercial WSD services: Primo, Summon, and EBSCO Discovery Service (EDS). Methods – The authors collected data at six universities, including their own. They tested each of the three WSDs at two data collection sites. However, since one of the sites was using a legacy version of Summon that was due to be upgraded, data collected for Summon at this site were considered obsolete and excluded from the analysis. The authors generated three questions for each of six major health disciplines, then designed simple keyword searches to mimic typical student search behaviours. They captured the first 20 results from each query run at each test site, to represent the first “page” of results, giving a total of 2,086 search results. These were independently assessed for relevance to the topic. Authors resolved disagreements by discussion, and calculated a kappa inter-observer score. They retained duplicate records within the results so that the duplicate detection by the WSDs could be compared. They assessed MEDLINE coverage by the WSDs in several ways. Using precise strategies to generate a relevant set of articles, they conducted one search from each of the six disciplines in PubMed so that they could compare retrieval of MEDLINE content. These results were cross-checked against the first 20 results from the corresponding query in the WSDs. To aid investigation of overall
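
    The kappa inter-observer score the authors calculated is Cohen's kappa: the agreement between two raters' relevance judgments, corrected for the agreement expected by chance. A minimal stdlib sketch with hypothetical judgments:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement).
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical relevance judgments for 8 results (1 = relevant, 0 = not).
a = [1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 0, 0, 1]
kappa = cohens_kappa(a, b)
```

    Here the raters agree on 6 of 8 results (0.75 observed) against 0.53 expected by chance, giving kappa ≈ 0.47 — "moderate" agreement on the usual interpretive scales.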

  16. Featured Image: The Cosmic Velocity Web

    Science.gov (United States)

    Kohler, Susanna

    2017-09-01

    You may have heard of the cosmic web, a network of filaments, clusters and voids that describes the three-dimensional distribution of matter in our universe. But have you ever considered the idea of a cosmic velocity web? In a new study led by Daniel Pomarède (IRFU CEA-Saclay, France), a team of scientists has built a detailed 3D view of the flows in our universe, showing in particular motions along filaments and in collapsing knots. In the image above, surfaces of knots (red) are embedded within surfaces of filaments (grey). The rainbow lines show the flow motion, revealing acceleration (redder tones) toward knots and retardation (bluer tones) beyond them. You can learn more about Pomarède and collaborators’ work and see their unusual and intriguing visualizations in the video they produced, below. Check out the original paper for more information. Citation: Daniel Pomarède et al 2017 ApJ 845 55. doi:10.3847/1538-4357/aa7f78

  17. Semantic information retrieval for geoscience resources : results and analysis of an online questionnaire of current web search experiences

    OpenAIRE

    Nkisi-Orji, I.

    2016-01-01

    An online questionnaire “Semantic web searches for geoscience resources” was completed by 35 staff of British Geological Survey (BGS) between 28th July 2015 and 28th August 2015. The questionnaire was designed to better understand current web search habits, preferences, and the reception of semantic search features in order to inform PhD research into the use of domain ontologies for semantic information retrieval. The key findings were that relevance ranking is important in fo...

  18. Quantitative cell imaging using single beam phase retrieval method

    Science.gov (United States)

    Anand, Arun; Chhaniwal, Vani; Javidi, Bahram

    2011-06-01

    Quantitative three-dimensional imaging of cells can provide important information about their morphology as well as their dynamics, which will be useful in studying their behavior under various conditions. There are several microscopic techniques to image unstained, semi-transparent specimens by converting the phase information into intensity information. But most quantitative phase contrast imaging techniques are realized either by using interference of the object wavefront with a known reference beam or by using phase shifting interferometry. A two-beam interferometric method is challenging to implement, especially with low-coherence sources, and it also requires a fine adjustment of beams to achieve high contrast fringes. In this letter, the development of a single-beam phase retrieval microscopy technique for quantitative phase contrast imaging of cells using multiple intensity samplings of a volume speckle field in the axial direction is described. Single-beam illumination with multiple intensity samplings provides fast convergence and a unique solution of the object wavefront. Three-dimensional thickness profiles of different cells such as red blood cells and onion skin cells were reconstructed using this technique with an axial resolution of the order of several nanometers.
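
    Recovering phase from multiple axial intensity samplings can be sketched as an iterative projection scheme: propagate a trial field between the measurement planes (here with the angular spectrum method) and replace its amplitude with the measured one at each plane, keeping the evolving phase. This is a generic multi-plane sketch under invented parameters, not the authors' exact algorithm.

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    # Propagate a complex field over a distance dz (angular spectrum method).
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)  # drop evanescent waves
    H = np.exp(2j * np.pi * np.sqrt(arg) * dz)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def retrieve_phase(intensities, dz, wavelength, dx, iters=20):
    # Cycle through the planes, enforcing each measured amplitude in turn.
    amps = [np.sqrt(I) for I in intensities]
    field = amps[0].astype(complex)                 # start with zero phase
    for _ in range(iters):
        for i in range(1, len(amps)):               # forward sweep
            field = angular_spectrum(field, dz, wavelength, dx)
            field = amps[i] * np.exp(1j * np.angle(field))
        for i in range(len(amps) - 2, -1, -1):      # backward sweep
            field = angular_spectrum(field, -dz, wavelength, dx)
            field = amps[i] * np.exp(1j * np.angle(field))
    return field

# Hypothetical demo: a unit-amplitude object with a smooth phase bump,
# sampled at three axial planes spaced dz apart.
wl, dx, dz = 0.5e-6, 1e-6, 10e-6
y, x = np.mgrid[0:32, 0:32]
u0 = np.exp(1j * 1.5 * np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 40.0))
u1 = angular_spectrum(u0, dz, wl, dx)
u2 = angular_spectrum(u1, dz, wl, dx)
intensities = [np.abs(u0) ** 2, np.abs(u1) ** 2, np.abs(u2) ** 2]
recovered = retrieve_phase(intensities, dz, wl, dx)
```

    The repeated amplitude substitution is what gives the fast convergence the abstract mentions: each plane's intensity constrains the field, and only the phase is left free to adjust.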

  19. Image retrieval method based on metric learning for convolutional neural network

    Science.gov (United States)

    Wang, Jieyuan; Qian, Ying; Ye, Qingqing; Wang, Biao

    2017-09-01

    At present, research on content-based image retrieval (CBIR) focuses on learning effective features for the representation of images and on similarity measures. Retrieval accuracy and efficiency are crucial to a CBIR system. With the rise of deep learning, convolutional networks have been applied in the domain of image retrieval and have achieved remarkable results, but the visual features extracted by a convolutional neural network are high-dimensional, which makes retrieval slow and less effective. This paper applies metric learning to the visual features extracted from the convolutional neural network, decreasing feature redundancy and improving retrieval performance. The work in this paper is also a necessary step toward a further implementation of feature hashing for approximate-nearest-neighbor (ANN) retrieval.
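
    As a rough illustration of the idea, a linear transform learned from the data can decorrelate and compress high-dimensional descriptors before distance-based retrieval. The sketch below uses PCA whitening as a simple stand-in for the paper's learned metric; the feature matrix is synthetic.

```python
import numpy as np

def fit_whitening(feats, out_dim):
    # Learn a linear projection that decorrelates features and drops
    # low-variance (redundant) directions -- a stand-in for a learned metric.
    mean = feats.mean(axis=0)
    X = feats - mean
    cov = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:out_dim]        # keep top-variance directions
    W = vecs[:, order] / np.sqrt(vals[order] + 1e-8)
    return mean, W

def project(feats, mean, W):
    return (feats - mean) @ W

def retrieve(query, db, k=3):
    # Plain Euclidean k-NN in the projected, lower-dimensional space.
    d = np.linalg.norm(db - query, axis=1)
    return np.argsort(d)[:k]

# Synthetic stand-in for CNN features: 100 images, 64 dimensions -> 8.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 64))
mean, W = fit_whitening(feats, out_dim=8)
P = project(feats, mean, W)
top = retrieve(P[5], P, k=3)
```

    After projection the features are decorrelated with unit variance, so Euclidean distance in the compact space acts like a Mahalanobis distance in the original one — the same effect a learned metric is after, at much lower retrieval cost.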

  20. Rapid Retrieval of Lung Nodule CT Images Based on Hashing and Pruning Methods

    Directory of Open Access Journals (Sweden)

    Ling Pan

    2016-01-01

    The similarity-based retrieval of lung nodule computed tomography (CT) images is an important task in the computer-aided diagnosis of lung lesions. It can provide similar clinical cases for physicians and help them make reliable clinical diagnostic decisions. However, when handling large-scale lung images with a general-purpose computer, traditional image retrieval methods may not be efficient. In this paper, a new retrieval framework based on a hashing method for lung nodule CT images is proposed. This method can translate high-dimensional image features into a compact hash code, so the retrieval time and required memory space can be reduced greatly. Moreover, a pruning algorithm is presented to further improve the retrieval speed, and a pruning-based decision rule is presented to improve the retrieval precision. Finally, the proposed retrieval method is validated on 2,450 lung nodule CT images selected from the public Lung Image Database Consortium (LIDC) database. The experimental results show that the proposed pruning algorithm effectively reduces the retrieval time of lung nodule CT images and improves the retrieval precision. In addition, the retrieval framework is evaluated by differentiating benign and malignant nodules, and the classification accuracy can reach 86.62%, outperforming other commonly used classification methods.

  1. Rapid Retrieval of Lung Nodule CT Images Based on Hashing and Pruning Methods.

    Science.gov (United States)

    Pan, Ling; Qiang, Yan; Yuan, Jie; Wu, Lidong

    2016-01-01

    The similarity-based retrieval of lung nodule computed tomography (CT) images is an important task in the computer-aided diagnosis of lung lesions. It can provide similar clinical cases for physicians and help them make reliable clinical diagnostic decisions. However, when handling large-scale lung images with a general-purpose computer, traditional image retrieval methods may not be efficient. In this paper, a new retrieval framework based on a hashing method for lung nodule CT images is proposed. This method can translate high-dimensional image features into a compact hash code, so the retrieval time and required memory space can be reduced greatly. Moreover, a pruning algorithm is presented to further improve the retrieval speed, and a pruning-based decision rule is presented to improve the retrieval precision. Finally, the proposed retrieval method is validated on 2,450 lung nodule CT images selected from the public Lung Image Database Consortium (LIDC) database. The experimental results show that the proposed pruning algorithm effectively reduces the retrieval time of lung nodule CT images and improves the retrieval precision. In addition, the retrieval framework is evaluated by differentiating benign and malignant nodules, and the classification accuracy can reach 86.62%, outperforming other commonly used classification methods.
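
    The two-stage scheme the abstract describes — compact hash codes for speed, plus a pruning rule before exact matching — can be sketched generically with random-hyperplane hashing (LSH). This is an illustrative stand-in, not the paper's specific hash function; the features are synthetic.

```python
import numpy as np

def hash_codes(feats, R):
    # Compact binary codes: sign of random hyperplane projections (LSH).
    return (feats @ R > 0).astype(np.uint8)

def retrieve(query_feat, query_code, db_feats, db_codes, radius=8, k=5):
    # Pruning step: keep only candidates within a Hamming radius of the query.
    ham = (db_codes != query_code).sum(axis=1)
    cand = np.flatnonzero(ham <= radius)
    if cand.size == 0:                       # nothing survives: fall back
        cand = np.arange(len(db_feats))
    # Exact Euclidean re-ranking on the (small) surviving candidate set.
    d = np.linalg.norm(db_feats[cand] - query_feat, axis=1)
    return cand[np.argsort(d)[:k]]

# Synthetic database of 200 feature vectors hashed to 16-bit codes.
rng = np.random.default_rng(7)
db_feats = rng.standard_normal((200, 32))
R = rng.standard_normal((32, 16))
db_codes = hash_codes(db_feats, R)
top = retrieve(db_feats[42], db_codes[42], db_feats, db_codes)
```

    Hamming distances on short codes are cheap (bit operations over a few bytes per image), so the expensive Euclidean comparison only runs on the pruned candidate set — the same speed/precision split as the paper's hashing-plus-pruning framework.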

  2. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    Science.gov (United States)

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve the vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method with the best objective functions. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural and structural features are obtained from the segmented images of normal and DR affected retinas and are analyzed. CBIR systems in medical image retrieval applications are used to assist physicians in clinical decision support and in research. A CBIR system is developed using the HSA based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean distance measure. Similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall. The CBIR systems developed using HSA based Otsu MLT and conventional Otsu MLT methods are compared. The precision and recall are found to be 96% and 58%, respectively, for the CBIR system using HSA based Otsu MLT segmentation. This automated CBIR system could be recommended for use in computer assisted diagnosis for diabetic retinopathy screening.
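
    The similarity-matching and evaluation steps here are straightforward: rank database images by Euclidean distance between feature vectors, then score the top results against a relevance set. A stdlib-only sketch with hypothetical feature vectors:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_database(query, database):
    # Indices of database images, nearest feature vector first.
    return sorted(range(len(database)), key=lambda i: euclidean(query, database[i]))

def precision_recall(retrieved, relevant):
    # Precision: fraction of retrieved images that are relevant.
    # Recall: fraction of relevant images that were retrieved.
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Hypothetical 2-D feature vectors for five database images.
database = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.2, 0.1], [6.0, 5.0]]
query = [0.0, 0.0]
order = rank_database(query, database)           # nearest-first ranking
precision, recall = precision_recall(order[:3], relevant={0, 1, 2, 3})
```

    With a 3-image answer set and 4 relevant images, every retrieved image can be relevant (precision 1.0) while recall is capped at 0.75 — which is why the two measures are reported together, as in the abstract's 96%/58% figures.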

  3. Latent Semantic Analysis as a Method of Content-Based Image Retrieval in Medical Applications

    Science.gov (United States)

    Makovoz, Gennadiy

    2010-01-01

    The research investigated whether a Latent Semantic Analysis (LSA)-based approach to image retrieval can map pixel intensity into a smaller concept space with good accuracy and reasonable computational cost. From a large set of M computed tomography (CT) images, a retrieval query found all images for a particular patient based on semantic…

  4. Design Guidelines for a Content-Based Image Retrieval Color-Selection Interface

    NARCIS (Netherlands)

    Eggen, Berry; van den Broek, Egon; van der Veer, Gerrit C.; Kisters, Peter M.F.; Willems, Rob; Vuurpijl, Louis G.

    2004-01-01

    In Content-Based Image Retrieval (CBIR) two query-methods exist: query-by-example and query-by-memory. The user either selects an example image or selects image features retrieved from memory (such as color, texture, spatial attributes, and shape) to define his query. Hitherto, research on CBIR

  5. Comparison of color representations for content-based image retrieval in dermatology

    NARCIS (Netherlands)

    Bosman, Hedde H.W.J.; Petkov, Nicolai; Jonkman, Marcel F.

    Background/purpose: We compare the effectiveness of 10 different color representations in a content-based image retrieval task for dermatology. Methods: As features, we use the average colors of healthy and lesion skin in an image. The extracted features are used to retrieve similar images from a

  6. Integrating Web Services into Map Image Applications

    National Research Council Canada - National Science Library

    Tu, Shengru

    2003-01-01

    Web services have been opening a wide avenue for software integration. In this paper, we have reported our experiments with three applications that are built by utilizing and providing web services for Geographic Information Systems (GIS...

  7. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey, which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  8. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-12-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey, which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  9. Content-based multimedia retrieval: indexing and diversification

    NARCIS (Netherlands)

    van Leuken, R.H.

    2009-01-01

    The demand for efficient systems that facilitate searching in multimedia databases and collections is vastly increasing. Application domains include criminology, musicology, trademark registration, medicine and image or video retrieval on the web. This thesis discusses content-based retrieval

  10. Retrieval of air quality information using image processing technique.

    Science.gov (United States)

    Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Saleh, N. M.

    2007-04-01

    This paper presents and describes an approach to retrieve the concentration of particulate matter of size less than 10 microns (PM10) from Landsat TM data over Penang Island. The objective of this study is to test the feasibility of using Landsat TM for PM10 mapping using our proposed algorithm. The algorithm was developed based on the aerosol characteristics in the atmosphere. PM10 measurements were collected using a DustTrak Aerosol Monitor 8520 simultaneously with the image acquisition. The station locations of the PM10 measurements were determined using a handheld GPS. The digital numbers corresponding to the ground-truth locations were extracted for each band and then converted into radiance and reflectance values. The surface reflectance was subtracted from the reflectance measured by the satellite [reflectance at the top of the atmosphere, ρ(TOA)] to obtain the atmospheric reflectance. The atmospheric reflectance was then related to the PM10 concentration using regression analysis. The surface reflectance values were created using the ATCOR2 image correction software in the PCI Geomatica 9.1.8 image processing software. The proposed algorithm produced high accuracy and showed a good agreement (R = 0.8406) between the measured and estimated PM10. This study indicates that it is feasible to use Landsat TM data for mapping PM10 using the proposed algorithm.
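
    The final step of the algorithm — relating atmospheric reflectance to PM10 by regression — amounts to an ordinary least-squares fit of PM10 against per-band reflectance. A sketch on synthetic data (the band count, coefficients and noise level are invented for illustration):

```python
import numpy as np

# Synthetic example: atmospheric reflectance in three bands vs. measured PM10.
rng = np.random.default_rng(1)
n = 40
refl = rng.uniform(0.01, 0.15, size=(n, 3))             # atmospheric reflectance
true_coef = np.array([420.0, 180.0, -60.0])             # invented band weights
pm10 = refl @ true_coef + 25.0 + rng.normal(0, 1.0, n)  # ug/m^3 plus noise

# Fit PM10 = a1*r1 + a2*r2 + a3*r3 + b by ordinary least squares.
A = np.column_stack([refl, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, pm10, rcond=None)

# Agreement between measured and estimated PM10 (the paper's R statistic).
pred = A @ coef
r = np.corrcoef(pm10, pred)[0, 1]
```

    The correlation `r` between measured and predicted PM10 plays the same role as the R = 0.8406 agreement reported in the abstract; here the synthetic data are nearly noise-free, so `r` comes out much higher.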

  11. Domainwise Web Page Optimization Based On Clustered Query Sessions Using Hybrid Of Trust And ACO For Effective Information Retrieval

    Directory of Open Access Journals (Sweden)

    Dr. Suruchi Chawla

    2015-08-01

    In this paper a hybrid of Ant Colony Optimization (ACO) and trust has been used for domain-wise web page optimization in clustered query sessions for effective information retrieval. The trust of a web page identifies its degree of relevance in satisfying a specific information need of the user. The trusted web pages, when optimized using pheromone updates in ACO, identify the trusted colonies of web pages which are relevant to the user's information need in a given domain. Hence in this paper the hybrid of trust and ACO has been used on clustered query sessions to identify a larger number of relevant documents in a given domain in order to better satisfy the information need of the user. An experiment was conducted on a data set of web query sessions to test the effectiveness of the proposed approach in three selected domains (Academics, Entertainment, and Sports), and the results confirm the improvement in the precision of search results.
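
    The core mechanism — pheromone evaporation plus a trust-weighted deposit on pages that satisfied an information need — can be sketched in a few lines. The update rule and all values below are hypothetical, not the paper's exact formulation:

```python
# Hypothetical trust-weighted pheromone update for web pages in one domain.
EVAPORATION = 0.1  # rho: fraction of pheromone that evaporates each round

def update_pheromone(pheromone, trust, clicked):
    # tau <- (1 - rho) * tau, plus a trust-weighted deposit on clicked pages.
    new = {}
    for page, tau in pheromone.items():
        new[page] = (1 - EVAPORATION) * tau
        if page in clicked:
            new[page] += trust.get(page, 0.0)
    return new

pher = {"p1": 1.0, "p2": 1.0, "p3": 1.0}
trust = {"p1": 0.9, "p2": 0.4, "p3": 0.1}   # degree of relevance per page
pher = update_pheromone(pher, trust, clicked={"p1", "p2"})
ranking = sorted(pher, key=pher.get, reverse=True)
```

    Over repeated query sessions, pages that are both clicked and trusted accumulate pheromone while the rest evaporate away, so the surviving "colonies" of pages are the ones most relevant to the domain — the intuition behind the hybrid.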

  12. Has Retrieval Technology in Vertical Site Search Systems Improved over the Years? A Holistic Evaluation for Real Web Systems

    Directory of Open Access Journals (Sweden)

    Mandl, Thomas

    2015-12-01

    Evaluation of retrieval systems is mostly limited to laboratory settings and rarely considers changes of performance over time. This article presents an evaluation of retrieval systems for internal Web site search systems between the years 2006 and 2011. A holistic evaluation methodology for real Web sites was developed which includes tests for functionality, search quality, and user interaction. Among other sites, one set of 20 Web site search systems was evaluated three times in different years and no substantial improvement could be shown. It is surprising that the communication between site and user still leads to very poor results in many cases. Overall, the quality of these search systems could be improved, and several areas for improvement are apparent from our evaluation. For comparison, Google’s site search function was also tested with the same tasks.

  13. Automatic Detection of Galaxy Type From Datasets of Galaxies Image Based on Image Retrieval Approach.

    Science.gov (United States)

    Abd El Aziz, Mohamed; Selim, I M; Xiong, Shengwu

    2017-06-30

    This paper presents a new approach for the automatic detection of galaxy morphology from datasets based on an image-retrieval approach. Several classification methods have been proposed to detect galaxy types within an image. However, in some situations the aim is not only to determine the type of galaxy within the queried image, but also to find the images most similar to it. Therefore, this paper proposes an image-retrieval method that detects the type of galaxy within an image and returns the most similar images. The proposed method consists of two stages. In the first stage, a set of features is extracted based on shape, color, and texture descriptors, and a binary sine cosine algorithm then selects the most relevant features. In the second stage, the similarity between the features of the queried galaxy image and the features of the other galaxy images is computed. Our experiments were performed using the EFIGI catalogue, which contains about 5000 galaxy images of different types (edge-on spiral, spiral, elliptical, and irregular). We demonstrate that our proposed approach performs better than the particle swarm optimization (PSO) and genetic algorithm (GA) methods.

  14. Similarity evaluation between query and retrieved masses using a content-based image retrieval (CBIR) CADx system for characterization of breast masses on ultrasound images: an observer study

    Science.gov (United States)

    Cho, Hyun-chong; Hadjiiski, Lubomir; Sahiner, Berkman; Chan, Heang-Ping; Helvie, Mark; Nees, Alexis V.; Paramagul, Chintana

    2011-03-01

    The purpose of this study is to evaluate the similarity between query and retrieved masses by a Content-Based Image Retrieval (CBIR) computer-aided diagnosis (CADx) system for characterization of breast masses on ultrasound (US) images, based on radiologists' visual similarity assessment. We are developing a CADx system to assist radiologists in characterizing masses on US images. The CADx system retrieves masses that are similar to a query mass from a reference library based on automatically extracted image features. An observer study was performed to compare the retrieval performance of four similarity measures: Euclidean distance (ED), cosine (Cos), linear discriminant analysis (LDA), and Bayesian neural network (BNN). For ED and Cos, a k-nearest neighbor (k-NN) algorithm was used for retrieval. For LDA and BNN, the features of a query mass were first combined into a malignancy score, and masses with similar scores were then retrieved. For each query mass, the three most similar masses retrieved with each method were presented to the radiologists in random order. Three MQSA radiologists rated the similarity between the query mass and the computer-retrieved masses using a nine-point similarity scale (1 = very dissimilar, 9 = very similar). The average similarity ratings over all radiologists for LDA, BNN, Cos, and ED were 4.71, 4.95, 5.18, and 5.32, respectively. The ED measure retrieved masses of significantly higher similarity (p < 0.008) than LDA and BNN. Although the BNN measure had the best classification performance (Az: 0.90+/-0.03) in the CBIR scheme, ED exhibited higher image retrieval performance than the others based on the radiologists' assessment.
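    The two distance-based measures compared in the study, Euclidean distance and cosine, can be sketched as a k-NN retrieval over a reference library. The feature vectors and mass names below are invented for illustration; the real system uses automatically extracted US image features.

```python
# Minimal k-NN retrieval with the two distance-based similarity
# measures from the study: rank a reference library against a query
# feature vector and return the k most similar entries.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query, library, k=3, measure="euclidean"):
    if measure == "euclidean":
        scored = sorted(library, key=lambda item: euclidean(query, item[1]))
    else:  # cosine: higher similarity ranks first
        scored = sorted(library, key=lambda item: -cosine_sim(query, item[1]))
    return [name for name, _ in scored[:k]]

library = [("m1", [0.2, 0.8]), ("m2", [0.9, 0.1]), ("m3", [0.25, 0.75])]
top2 = retrieve([0.3, 0.7], library, k=2)
```

    For LDA and BNN the library would instead be ranked by the absolute difference of scalar malignancy scores, which is why those measures can retrieve masses that look less similar despite classifying well.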

  15. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
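    The naive Bayes rule used for the discrete text classifier can be sketched as follows, assuming a bag-of-words model with Laplace smoothing. The training documents and class names are toy examples, not the paper's data.

```python
# Naive Bayes for discrete texts: the score of a class is its log
# prior plus the sum of per-word log likelihoods, with Laplace
# smoothing so unseen words do not zero out the product.
import math
from collections import Counter

def train(docs_by_class):
    vocab = {w for docs in docs_by_class.values() for d in docs for w in d}
    total_docs = sum(len(docs) for docs in docs_by_class.values())
    model = {}
    for cls, docs in docs_by_class.items():
        counts = Counter(w for d in docs for w in d)
        n = sum(counts.values())
        loglik = {w: math.log((counts[w] + 1) / (n + len(vocab)))
                  for w in vocab}
        model[cls] = (math.log(len(docs) / total_docs), loglik)
    return model

def classify(model, words):
    scores = {cls: prior + sum(ll.get(w, 0.0) for w in words)
              for cls, (prior, ll) in model.items()}
    return max(scores, key=scores.get)

model = train({"porn":   [["xxx", "adult"], ["adult", "cam"]],
               "benign": [["news", "sport"], ["sport", "score"]]})
```

    In the paper's fusion step, the probability produced by this kind of text classifier is combined with the image classifier's output via Bayes' theorem rather than used alone.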

  16. Method of content-based image retrieval for a spinal x-ray image database

    Science.gov (United States)

    Krainak, Daniel M.; Long, L. Rodney; Thoma, George R.

    2002-05-01

    The Lister Hill National Center for Biomedical Communications, a research and development division of the National Library of Medicine (NLM) maintains a digital archive of 17,000 cervical and lumbar spine images collected in the second National Health and Nutrition Examination Survey (NHANES II) conducted by the National Center for Health Statistics (NCHS). Classification of the images for the osteoarthritis research community has been a long-standing goal of researchers at the NLM, collaborators at NCHS, and the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), and capability to retrieve images based on geometric characteristics of the vertebral bodies is of interest to the vertebral morphometry community. Automated or computer-assisted classification and retrieval methods are highly desirable to offset the high cost of manual classification and manipulation by medical experts. We implemented a prototype system for a database of 118 spine x-rays and health survey text data related to these x-rays. The system supports conventional text retrieval, as well as retrieval based on shape similarity to a user-supplied vertebral image or sketch.

  17. Archiving and retrieval of sequential images from tomographic databases in PACS

    Science.gov (United States)

    Shyu, Chi-Ren; Cai, T. T.; Broderick, Lynn S.

    1998-12-01

    In the picture archiving and communication systems (PACS) used in modern hospitals, the current practice is to retrieve images based on keyword search, which returns a complete set of images from the same scan. Both diagnostically useful and negligible images in the image databases are retrieved and browsed by the physicians. In addition to the text-based search query method, queries based on image contents and image examples have been developed and integrated into existing PACS systems. Most content-based image retrieval (CBIR) systems for medical image databases are designed to retrieve images individually. However, in a database of tomographic images it is often diagnostically more useful to simultaneously retrieve multiple images that are closely related for various reasons, such as physiological contiguousness. For example, high-resolution computed tomography (HRCT) images are taken as a series of cross-sectional slices of the human body. Typically, several slices are relevant for making a diagnosis, requiring a PACS system that can retrieve a contiguous sequence of slices. In this paper, we present an extension to our physician-in-the-loop CBIR system which allows our algorithms to automatically determine the number of adjoining images to retain after certain key images are identified by the physician. Only the key images so identified, and the other adjoining images that cohere with them, are kept online for fast retrieval; the rest of the images can be discarded if so desired. This results in a large reduction in the amount of storage needed for fast retrieval.

  18. Learning effective color features for content based image retrieval in dermatology

    NARCIS (Netherlands)

    Bunte, Kerstin; Biehl, Michael; Jonkman, Marcel F.; Petkov, Nicolai

    We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn

  19. The Complete Local Spatial Central Derivative Binary Pattern for Ultrasound Kidney Images Retrieval

    National Research Council Canada - National Science Library

    Chelladurai Callins Christiyana; Vayanaperumal Rajamani

    2015-01-01

    ...) for ultrasound kidney images retrieval. In a local 3X3 square region of an image, the new pattern considers the relationships among the surrounding neighbors about their neighbors at different spatial distances whereas the standard Local...

  20. Efficient Image Blur in Web-Based Applications

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Scripting languages require the use of high-level library functions to implement efficient image processing; thus, real-time image blur in web-based applications is a challenging task unless specific library functions are available for this purpose. We present a pyramid blur algorithm, which can be implemented using a subimage copy function, and evaluate its performance with various web browsers in comparison to an infinite impulse response filter. While this pyramid algorithm was first proposed for GPU-based image processing, its applicability to web-based applications indicates that some GPU...
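    The pyramid idea behind such a blur can be sketched as repeated halving by averaging followed by expansion, which approximates a wide blur without a large convolution kernel. This is a plain-Python illustration of the principle, not the paper's subimage-copy implementation.

```python
# Pyramid blur sketch: downsample the image L times by averaging 2x2
# blocks (cheap, analogous to half-size subimage copies), then expand
# back with nearest-neighbour upsampling. Each level roughly doubles
# the effective blur radius at O(n) pixel cost.
def downsample(img):
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

def upsample(img):
    return [[img[y // 2][x // 2] for x in range(2 * len(img[0]))]
            for y in range(2 * len(img))]

def pyramid_blur(img, levels=1):
    small = img
    for _ in range(levels):
        small = downsample(small)
    for _ in range(levels):
        small = upsample(small)
    return small

img = [[0, 0, 8, 8], [0, 0, 8, 8], [8, 8, 0, 0], [8, 8, 0, 0]]
blurred = pyramid_blur(img, levels=1)
```

    In a browser, `downsample` would map to a half-size canvas `drawImage` copy, which is why only a subimage copy primitive is needed.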

  1. Face Image Retrieval of Efficient Sparse Code words and Multiple Attribute in Binning Image

    Directory of Open Access Journals (Sweden)

    Suchitra S

    2017-08-01

    Full Text Available ABSTRACT In photography, face recognition and face retrieval play an important role in many applications such as security, criminology, and image forensics. Advances in face recognition make it easier to match an individual's identity using attributes, and the latest developments in computer vision enable us to extract facial attributes from an input image and return similar images. In this paper, we propose a novel method combining a Local Octal Pattern (LOP) descriptor with sparse codewords to provide matching results similar to an input query image. To improve the accuracy of results for an input image with dynamic facial attributes, the LOP algorithm and sparse codewords are applied in both offline and online stages, and face images are binned using the sparse codes. Experimental results on the PubFig dataset show that the proposed LOP with sparse codewords provides matching results with an increased accuracy of 90%.

  2. Design and development of semantic web-based system for computer science domain-specific information retrieval

    Directory of Open Access Journals (Sweden)

    Ritika Bansal

    2016-09-01

    Full Text Available In a semantic web-based system, ontologies are used to retrieve results by the contextual meaning of an input query instead of by keyword matching. The research literature suggests a need for a tool that provides an easy interface for posing complex queries in natural language and retrieving domain-specific information from an ontology. This paper proposes the IRSCSD system (Information Retrieval System for the Computer Science Domain) as a solution. The system offers advanced querying and browsing of structured data, with search results automatically aggregated and rendered in a consistent user interface, thus reducing the manual effort of users. The main objective of this research is therefore the design and development of a semantic web-based system integrating an ontology for domain-specific retrieval support. The methodology is a piecemeal research process involving the following stages. The first stage designs the framework for the semantic web-based system. The second stage builds a prototype of the framework using the Protégé tool. The third stage converts natural language queries into the SPARQL query language using the Python-based Quepy framework. The fourth stage fires the converted SPARQL queries at the ontology through Apache's Jena API to fetch results. Lastly, the prototype was evaluated to ensure its efficiency and usability. This paper thus covers framework development for a semantic web-based system that supports efficient retrieval of domain-specific information, interpretation of natural language queries into a semantic web language, creation of a domain-specific ontology, and its mapping to related ontologies. It also provides approaches and metrics for ontology evaluation, applied to the prototype ontology to study performance in terms of access to the required domain-related information.

  3. A Picture is Worth a Thousand Keywords: Exploring Mobile Image-Based Web Searching

    Directory of Open Access Journals (Sweden)

    Konrad Tollmar

    2008-01-01

    Full Text Available Using images of objects as queries is a new approach to search for information on the Web. Image-based information retrieval goes beyond only matching images, as information in other modalities also can be extracted from data collections using an image search. We have developed a new system that uses images to search for web-based information. This paper has a particular focus on exploring users' experience of general mobile image-based web searches to find what issues and phenomena it contains. This was achieved in a multipart study by creating and letting respondents test prototypes of mobile image-based search systems and collect data using interviews, observations, video observations, and questionnaires. We observed that searching for information based only on visual similarity and without any assistance is sometimes difficult, especially on mobile devices with limited interaction bandwidth. Most of our subjects preferred a search tool that guides the users through the search result based on contextual information, compared to presenting the search result as a plain ranked list.

  4. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    Science.gov (United States)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperforms the state-of-the-art methods by about 13, 15, and 15%, respectively.
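    A robust Hausdorff distance of the kind used to suppress cluttered-background points can be sketched with a rank-based (partial) variant, where a quantile of the nearest-neighbour distances replaces the maximum; the exact formulation in the paper may differ.

```python
# Partial (rank-based) Hausdorff distance between two point sets:
# the classical Hausdorff distance takes the maximum of
# nearest-neighbour distances, so one cluttered-background point can
# dominate it. Taking the frac-quantile instead makes it robust to
# such outliers.
import math

def nearest_dists(A, B):
    """Sorted nearest-neighbour distances from each point of A to B."""
    return sorted(min(math.dist(a, b) for b in B) for a in A)

def partial_hausdorff(A, B, frac=0.8):
    dAB = nearest_dists(A, B)
    dBA = nearest_dists(B, A)
    k1 = max(0, math.ceil(frac * len(dAB)) - 1)
    k2 = max(0, math.ceil(frac * len(dBA)) - 1)
    return max(dAB[k1], dBA[k2])

A = [(0, 0), (1, 0), (0, 1), (10, 10)]   # (10, 10) plays a clutter point
B = [(0, 0), (1, 0), (0, 1)]
full = partial_hausdorff(A, B, frac=1.0)    # dominated by the outlier
robust = partial_hausdorff(A, B, frac=0.75)  # ignores it
```

    Applied to sets of local feature locations, this is how background visual words can be down-weighted before SVT matching.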

  5. An Adequate Approach to Image Retrieval Based on Local Level Feature Extraction

    Directory of Open Access Journals (Sweden)

    Sumaira Muhammad Hayat Khan

    2010-10-01

    Full Text Available Image retrieval based on text annotation has become obsolete and is no longer interesting to researchers because of its high time complexity and low precision. At the same time, the growth in the amount of digital imagery has generated a pressing need for accurate and efficient retrieval systems. This paper proposes a content-based image retrieval technique at the local level that incorporates all the rudimentary features. The image first undergoes segmentation, and each segment is then passed to the feature extraction process. The proposed technique is based on the image's content, which primarily includes texture, shape, and color. Besides these three basic features, Fourier descriptors (FD) and edge histogram descriptors are also calculated to enhance feature extraction by capturing information at the boundary. The performance of the proposed method is found to be quite adequate when compared with the results of one of the best local-level CBIR (Content-Based Image Retrieval) techniques.

  6. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, the number of digital images is increasing very quickly, and various techniques are being developed to retrieve the information contained in them. Traditional text-based image retrieval is inadequate: it is time-consuming because it requires manual image annotation, and annotations differ from person to person. An alternative is a Content-Based Image Retrieval (CBIR) system, which searches for images using their contents rather than text or keywords. Much exploration has been carried out in CBIR with various feature extraction techniques. Shape is a significant image feature because it reflects human perception; moreover, shape is simpler for a user to employ when defining an object in an image than features such as color or texture. Furthermore, no single descriptor gives fruitful results when applied alone, whereas combining a descriptor with an improved classifier exploits the strengths of both. We therefore attempt to establish an algorithm for accurate shape feature extraction in CBIR. The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm, and (c) to compare the proposed algorithm with state-of-the-art techniques.

  7. An Image Retrieval System using Impressional Words Based on Individual Preference

    Science.gov (United States)

    Nambo, Hidetaka; Okamine, Tadashi; Kimura, Haruhiko; Nakazawa, Minoru; Hattori, Shimmi

    When we examine goods through their pictures at online shops, it is difficult to give proper keywords for retrieving pictures of goods such as flowers and drawings. To browse such pictures efficiently, image retrieval systems using adjectives known as Kansei words, which represent human feelings and impressions, have been proposed. However, since current systems retrieve pictures without considering customers' preferences, their output can disagree with those preferences. To cope with this problem, we propose an image retrieval system using Kansei words that adapts to users' preferences. Experimental results show the efficiency of the proposed system compared with the performance of a conventional system.

  8. Phase retrieval in X-ray phase-contrast imaging suitable for tomography.

    Science.gov (United States)

    Burvall, Anna; Lundström, Ulf; Takman, Per A C; Larsson, Daniel H; Hertz, Hans M

    2011-05-23

    In-line phase-contrast X-ray imaging provides images where both absorption and refraction contribute. For quantitative analysis of these images, the phase needs to be retrieved numerically. There are many phase-retrieval methods available. Those suitable for phase-contrast tomography, i.e., non-iterative phase-retrieval methods that use only one image at each projection angle, all follow the same pattern though derived in different ways. We outline this pattern and use it to compare the methods to each other, considering only phase-retrieval performance and not the additional effects of tomographic reconstruction. We also outline derivations, approximations and assumptions, and show which methods are similar or identical and how they relate to each other. A simple scheme for choosing reconstruction method is presented, and numerical phase-retrieval performed for all methods.
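    The shared pattern of these non-iterative, single-image methods (filter the measured intensity in Fourier space, then transform back) can be sketched in one dimension with a Paganin-style low-pass filter. The naive O(N²) DFT and the filter constant `alpha` are illustrative choices, not any specific method from the survey.

```python
# Schematic 1D single-image phase retrieval: Fourier-transform the
# measured intensity, multiply by a low-pass filter of the form
# 1 / (1 + alpha * k^2), and transform back. Real implementations use
# 2D FFTs and recover phase/thickness via a logarithm.
import cmath, math

def dft(x, inverse=False):
    N = len(x)
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * math.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

def paganin_like_filter(intensity, alpha=1.0):
    N = len(intensity)
    F = dft([complex(v) for v in intensity])
    filtered = []
    for k in range(N):
        kk = k if k <= N // 2 else k - N   # signed frequency index
        filtered.append(F[k] / (1.0 + alpha * kk * kk))
    return [v.real for v in dft(filtered, inverse=True)]

signal = [1.0, 1.0, 1.0, 5.0, 1.0, 1.0, 1.0, 1.0]  # a sharp fringe
smoothed = paganin_like_filter(signal, alpha=0.5)
```

    The DC component (k = 0) passes unchanged, so the mean intensity is preserved while the sharp phase-contrast fringe is damped, which is the common behaviour the survey's comparison builds on.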

  9. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    Science.gov (United States)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, erosion, and dilation operations are implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques: the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data volume of the ciphertext but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.

  10. Retrieving clinically relevant diabetic retinopathy images using a multi-class multiple-instance framework

    Science.gov (United States)

    Chandakkar, Parag S.; Venkatesan, Ragav; Li, Baoxin

    2013-02-01

    Diabetic retinopathy (DR) is a vision-threatening complication of diabetes mellitus, a medical condition that is rising globally. Unfortunately, many patients are unaware of this complication because of the absence of symptoms. Regular screening for DR is necessary to detect the condition in time for treatment. Content-based image retrieval, using archived and diagnosed fundus (retinal) camera DR images, can improve the screening efficiency of DR. This content-based image retrieval study focuses on two DR clinical findings, microaneurysms and neovascularization, which are clinical signs of non-proliferative and proliferative diabetic retinopathy. The authors propose a multi-class multiple-instance image retrieval framework which deploys a modified color correlogram and statistics of steerable Gaussian filter responses to retrieve clinically relevant images from a database of DR fundus images.

  11. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images... information. These features along with additional information such as the URL location and the date of the indexing procedure are stored in a database. The user can access and search this indexed content through the Web with an advanced and user-friendly interface. The output of the system is a set of links...

  12. Large-Scale Partial-Duplicate Image Retrieval and Its Applications

    Science.gov (United States)

    2016-04-23

    Xinbo Gao, Qi Tian. Personalized Visual Vocabulary Adaption for Social Image Retrieval. ACM International Conference, 02-NOV-14, Orlando, Florida.
    ... for Weakly Annotated Image Recognition in Social Media. IEEE International Conference on Computer Vision and Pattern Recognition, 17-JUN-14, 2014.
    [C-38] Z. Niu, S. Zhang, X. Gao, and Q. Tian, "Personalized Visual Vocabulary Adaption for Social Image Retrieval," ACM Multimedia, Short Paper, 2014.

  13. Scipion web tools: Easy to use Cryo-EM image processing over the web.

    Science.gov (United States)

    Conesa Mingo, Pablo; Gutierrez, José; Quintana, Adrián; de la Rosa Trevín, José Miguel; Zaldívar-Peraza, Airén; Cuenca Alba, Jesús; Kazemi, Moshen; Vargas, Javier; Del Cano, Laura; Segura, Joan; S Sorzano, Carlos Oscar; Carazo, Jose María

    2017-10-03

    Macromolecular structure determination by electron microscopy under cryogenic conditions is revolutionizing the field of structural biology, attracting a large community of potential users. Still, the path from raw images to density maps is complex, and sophisticated image processing suites are required in this process, often demanding the installation and understanding of different software packages. Here we present Scipion Web Tools, a web-based set of tools/workflows derived from the Scipion image processing framework, specially tailored to non-expert users in need of very precise answers at several key stages of the structural elucidation process.

  14. Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.

    Science.gov (United States)

    Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil

    2018-01-25

    Due to recent developments in technology, the complexity of multimedia has increased significantly, and the retrieval of similar multimedia content remains an open research problem. Content-Based Image Retrieval (CBIR) provides a framework for image search in which low-level visual features are commonly used to retrieve images from an image database; the basic requirement of any image retrieval process is to rank images by close similarity in visual appearance. Color, shape, and texture are examples of low-level image features. Feature extraction techniques produce a feature vector, a powerful representation of an image that is useful for classification and recognition; because features define the behavior of an image, they also determine its storage cost, classification efficiency, and time consumption. The effectiveness of a CBIR approach is fundamentally based on feature extraction, and in image processing tasks such as object recognition and image retrieval the feature descriptor is among the most essential steps. The main idea of CBIR is that it can find images related to a query image in a dataset by using distance metrics. This paper discusses various types of features and feature extraction techniques, and explains in which scenario each technique performs better. The proposed image retrieval method is built on the YCbCr color space with a Canny edge histogram and the discrete wavelet transform; combining the edge histogram with the discrete wavelet transform increases the performance of the retrieval framework for content-based search. The performance of different wavelets is also compared to find the suitability of a specific wavelet function for image retrieval.
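    The general feature-building idea (a color histogram concatenated with coarse wavelet coefficients) can be sketched as follows. The one-level Haar transform, bin count, and tiny test image are illustrative stand-ins for the paper's YCbCr/Canny/DWT configuration.

```python
# Feature-vector sketch: quantize pixel values into a normalized
# histogram, then append one-level Haar averages and details per row.
# The concatenation serves as the retrieval feature vector, compared
# between images with a distance metric.
def haar_1d(row):
    """One-level Haar transform of an even-length row."""
    avg = [(row[i] + row[i + 1]) / 2.0 for i in range(0, len(row), 2)]
    det = [(row[i] - row[i + 1]) / 2.0 for i in range(0, len(row), 2)]
    return avg, det

def color_histogram(pixels, bins=4):
    """Normalized histogram of 8-bit values in `bins` equal-width bins."""
    hist = [0] * bins
    for v in pixels:
        hist[min(v * bins // 256, bins - 1)] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def feature_vector(gray_rows):
    flat = [v for row in gray_rows for v in row]
    feats = color_histogram(flat)
    for row in gray_rows:
        avg, det = haar_1d(row)
        feats.extend(avg + det)
    return feats

img = [[0, 255], [128, 128]]
fv = feature_vector(img)
```

    The histogram part captures global color content while the wavelet details capture edge-like structure, mirroring the paper's motivation for combining the two.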

  15. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Manyu Xiao

    2013-01-01

    Full Text Available Content-based image retrieval is nowadays one of the most promising solutions for managing image databases effectively. However, with large numbers of images, there still exists a great discrepancy between users' expectations (accuracy and efficiency) and the real performance of image retrieval. In this work, new optimization strategies are proposed for vocabulary tree building, retrieval, and matching. More precisely, a new clustering strategy combining classification with the conventional K-means method is first defined. Then a new matching technique is built to eliminate the errors caused by large-scale scale-invariant feature transform (SIFT) matching. Additionally, a new unit mechanism is proposed to reduce indexing time. Finally, the numerical results show that excellent performance is obtained in both accuracy and efficiency with the proposed improvements for image retrieval.

  16. Second order Statistical Texture Features from a New CSLBPGLCM for Ultrasound Kidney Images Retrieval

    Directory of Open Access Journals (Sweden)

    Chelladurai CALLINS CHRISTIYANA

    2013-12-01

    Full Text Available This work proposes a new method called the Center-Symmetric Local Binary Pattern Grey Level Co-occurrence Matrix (CSLBPGLCM) for extracting second-order statistical texture features from ultrasound kidney images. These features are then fed into an ultrasound kidney image retrieval system for medical applications. The new GLCM combines the benefits of CSLBP and the conventional GLCM: its main intention is to reduce the number of grey levels in an image, not by simply merging grey levels but by incorporating an additional statistical texture feature. The proposed approach is carefully evaluated in an ultrasound kidney image retrieval system and compared with the conventional GLCM. It is experimentally shown that the proposed method increases retrieval efficiency and accuracy while reducing the time complexity of the retrieval system by means of second-order statistical texture features.
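    The center-symmetric LBP code that CSLBPGLCM builds on can be sketched as follows: each of the four center-symmetric pixel pairs in a 3x3 neighbourhood contributes one bit, giving codes in 0..15 and hence far fewer grey levels for the co-occurrence matrix. The threshold `t` is an assumed parameter.

```python
# CSLBP code for one 3x3 patch: compare the four pairs of pixels that
# sit opposite each other across the center; a bit is set when the
# difference exceeds threshold t. The result is a 4-bit code (0..15),
# which is why a GLCM built over CSLBP codes is so much smaller than
# one built over raw 8-bit grey levels.
def cslbp_code(patch, t=0):
    # patch is a 3x3 list of lists; (y, x) pairs are opposite neighbours
    pairs = [((0, 0), (2, 2)), ((0, 1), (2, 1)),
             ((0, 2), (2, 0)), ((1, 2), (1, 0))]
    code = 0
    for bit, ((y1, x1), (y2, x2)) in enumerate(pairs):
        if patch[y1][x1] - patch[y2][x2] > t:
            code |= 1 << bit
    return code

patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
code = cslbp_code(patch)   # top row brighter than bottom row
```

    A GLCM over these codes is then a 16x16 co-occurrence matrix, from which the usual second-order statistics (contrast, energy, homogeneity, etc.) are computed.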

  17. Comparing features sets for content-based image retrieval in a medical-case database

    Science.gov (United States)

    Muller, Henning; Rosset, Antoine; Vallee, Jean-Paul; Geissbuhler, Antoine

    2004-04-01

    Content-based image retrieval systems (CBIRSs) have frequently been proposed for the use in medical image databases and PACS. Still, only few systems were developed and used in a real clinical environment. It rather seems that medical professionals define their needs and computer scientists develop systems based on data sets they receive with little or no interaction between the two groups. A first study on the diagnostic use of medical image retrieval also shows an improvement in diagnostics when using CBIRSs which underlines the potential importance of this technique. This article explains the use of an open source image retrieval system (GIFT - GNU Image Finding Tool) for the retrieval of medical images in the medical case database system CasImage that is used in daily, clinical routine in the university hospitals of Geneva. Although the base system of GIFT shows an unsatisfactory performance, already little changes in the feature space show to significantly improve the retrieval results. The performance of variations in feature space with respect to color (gray level) quantizations and changes in texture analysis (Gabor filters) is compared. Whereas stock photography relies mainly on colors for retrieval, medical images need a large number of gray levels for successful retrieval, especially when executing feedback queries. The results also show that a too fine granularity in the gray levels lowers the retrieval quality, especially with single-image queries. For the evaluation of the retrieval peformance, a subset of the entire case database of more than 40,000 images is taken with a total of 3752 images. Ground truth was generated by a user who defined the expected query result of a perfect system by selecting images relevant to a given query image. The results show that a smaller number of gray levels (32 - 64) leads to a better retrieval performance, especially when using relevance feedback. 
The use of more scales and directions for the Gabor filters in the
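
    The gray-level quantization experiments described above can be sketched with a simple numpy feature extractor; `quantize_gray` and `gray_histogram` are illustrative names, not part of GIFT, and the histogram here stands in for GIFT's richer feature set:

```python
import numpy as np

def quantize_gray(image, levels):
    """Quantize a gray-scale image (0-255) to a coarser number of gray levels."""
    bins = np.floor(image.astype(np.float64) * levels / 256.0).astype(int)
    return np.clip(bins, 0, levels - 1)

def gray_histogram(image, levels):
    """L1-normalized gray-level histogram used as a simple retrieval feature.

    Retrieval at 32-64 levels vs. 256 levels can then be compared by swapping
    the `levels` argument, mirroring the quantization study above."""
    q = quantize_gray(image, levels)
    hist = np.bincount(q.ravel(), minlength=levels).astype(np.float64)
    return hist / hist.sum()
```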

  18. Image quality degradation and retrieval errors introduced by registration and interpolation of multispectral digital images

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, B.G.; Borel, C.C.; Theiler, J.P.; Smith, B.W.

    1996-04-01

    Full utilization of multispectral data acquired by whiskbroom and pushbroom imagers requires that the individual channels be registered accurately. Poor registration introduces errors which can be significant, especially in high contrast areas such as boundaries between regions. We simulate the acquisition of multispectral imagery in order to estimate the errors that are introduced by co-registration of different channels and interpolation within the images. We compute the Modulation Transfer Function (MTF) and image quality degradation brought about by fractional pixel shifting and calculate errors in retrieved quantities (surface temperature and water vapor) that occur as a result of interpolation. We also present a method which might be used to estimate sensor platform motion for accurate registration of images acquired by a pushbroom scanner.
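
    The error introduced by fractional-pixel shifting and interpolation can be illustrated in one dimension; this is a hedged sketch using linear interpolation (`np.interp`), not the authors' simulation code, and the residual it exposes is the low-pass smoothing that co-registration interpolation inflicts on each channel:

```python
import numpy as np

def fractional_shift(row, dx):
    """Shift a 1-D scan line by a fractional number of pixels via linear
    interpolation (np.interp clamps at the boundaries)."""
    x = np.arange(row.size)
    return np.interp(x - dx, x, row)

def interpolation_rms(signal, dx):
    """RMS error left after shifting a signal by dx and re-registering it by
    -dx: ideally zero, in practice nonzero because interpolation smooths."""
    back = fractional_shift(fractional_shift(signal, dx), -dx)
    return float(np.sqrt(np.mean((back[2:-2] - signal[2:-2]) ** 2)))
```

Shifting by +0.5 then -0.5 pixel applies an effective [1, 2, 1]/4 smoothing kernel, which is exactly the kind of MTF degradation the abstract quantifies.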

  19. Automated semantic indexing of figure captions to improve radiology image retrieval.

    Science.gov (United States)

    Kahn, Charles E; Rubin, Daniel L

    2009-01-01

    We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Estimated precision was 0.897 (95% confidence interval, 0.857-0.937). Estimated recall was 0.930 (95% confidence interval, 0.838-1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval.
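
    The sampling-based precision and recall estimates with confidence intervals can be outlined with a normal-approximation interval for a proportion; `proportion_ci` is an illustrative helper, and the exact interval method the authors used is not specified here:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and normal-approximation 95% confidence interval for a
    proportion (e.g. precision over a sample of n judged concepts)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)
```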

  20. Automated Semantic Indexing of Figure Captions to Improve Radiology Image Retrieval

    Science.gov (United States)

    Kahn, Charles E.; Rubin, Daniel L.

    2009-01-01

    Objective We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. Design The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Measurements Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Results Estimated precision was 0.897 (95% confidence interval, 0.857–0.937). Estimated recall was 0.930 (95% confidence interval, 0.838–1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Conclusion Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval. PMID:19261938

  1. Improving performance of content based image retrieval system with color features

    Directory of Open Access Journals (Sweden)

    Aleš Hladnik

    2017-04-01

    Full Text Available Content based image retrieval (CBIR) encompasses a variety of techniques with a goal to solve the problem of searching for digital images in a large database by their visual content. Applications where the retrieval of similar images plays a crucial role include personal photo and art collections, medical imaging, multimedia publications and video surveillance. The main objective of our study was to try to improve the performance of a query-by-example image retrieval system based on texture features (Gabor wavelet and wavelet transform) by augmenting it with color information about the images, in particular the color histogram, color autocorrelogram and color moments. The Wang image database, comprising 1000 natural color images grouped into 10 categories of 100 images, was used for testing the individual algorithms. Each image in the database served as a query image and the retrieval performance was evaluated by means of precision and recall. The number of retrieved images ranged from 10 to 80. The best CBIR performance was obtained when implementing a combination of all 190 texture and color features. Only slightly worse were the average precision and recall for the texture- and color histogram-based system. This result was somewhat surprising, since color histogram features provide no color spatial information. We observed a 23% increase in average precision when comparing the system containing a combination of texture and all color features with the one consisting of exclusively texture descriptors, when using the Euclidean distance measure and 20 retrieved images. Addition of the color autocorrelogram features to the texture descriptors had virtually no effect on the performance, while only minor improvement was detected when adding the first two color moments (the mean and the standard deviation). Similar to what was found in previous studies with the same image database, average precision was very high in the case of dinosaurs and flowers and very low
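
    The color-histogram feature and the precision-at-K evaluation used above can be sketched as follows; the joint RGB histogram and the category-based precision measure are minimal illustrative versions, not the study's exact 190-dimensional feature set:

```python
import numpy as np

def color_histogram(image, bins=4):
    """Joint RGB histogram with `bins` levels per channel (bins**3 dims),
    L1-normalized, as a simple global color feature."""
    q = np.clip(np.floor(image.astype(np.float64) * bins / 256.0).astype(int),
                0, bins - 1)
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def precision_at_k(query_feat, db_feats, db_labels, query_label, k=20):
    """Fraction of the k nearest database images (Euclidean distance) that
    share the query image's category."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(db_labels[nearest] == query_label))
```

Averaging `precision_at_k` over every image used as a query reproduces the evaluation protocol described for the Wang database.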

  2. PLSA-based pathological image retrieval for breast cancer with color deconvolution

    Science.gov (United States)

    Ma, Yibing; Shi, Jun; Jiang, Zhiguo; Feng, Hao

    2013-10-01

    Digital pathological image retrieval plays an important role in computer-aided diagnosis for breast cancer. The retrieval results for an unknown pathological image, which are generally previous cases with diagnostic information, can provide doctors with assistance and reference. In this paper, we develop a novel pathological image retrieval method for breast cancer, based on stain components and the probabilistic latent semantic analysis (pLSA) model. Specifically, the method first utilizes color deconvolution to obtain representations of the different stain components for cell nuclei and cytoplasm; block Gabor features are then computed on the cell nuclei and used to construct the codebook. Furthermore, the connection between the words of the codebook and the latent topics among images is modeled by pLSA. Therefore, each image can be represented by its topics, and the high-level semantic concepts of the image can be described. Experiments on a pathological image database for breast cancer demonstrate the effectiveness of our method.
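
    The color-deconvolution step can be sketched in numpy. The stain matrix below uses the standard published Ruifrok-Johnston optical-density vectors for H&E staining; this is an assumption for illustration, since the abstract does not state which stain vectors were used:

```python
import numpy as np

# Stain OD vectors (rows): hematoxylin, eosin, residual (Ruifrok & Johnston).
STAIN_MATRIX = np.array([
    [0.65, 0.70, 0.29],
    [0.07, 0.99, 0.11],
    [0.27, 0.57, 0.78],
])

def color_deconvolution(rgb):
    """Separate an RGB pathology image (uint8) into per-stain concentration
    maps: convert to optical density, then solve OD = C @ M for C."""
    od = -np.log10((rgb.astype(np.float64) + 1.0) / 256.0)
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAIN_MATRIX)
    return conc.reshape(rgb.shape)
```

The hematoxylin channel of the result isolates cell nuclei, which is the component the block Gabor features would then be computed on.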

  3. Web Image Re-Ranking Using Query-Specific Semantic Signatures.

    Science.gov (United States)

    Wang, Xiaogang; Qiu, Shi; Liu, Ke; Tang, Xiaoou

    2014-04-01

    Image re-ranking, as an effective way to improve the results of web-based image search, has been adopted by current commercial search engines such as Bing and Google. Given a query keyword, a pool of images are first retrieved based on textual information. By asking the user to select a query image from the pool, the remaining images are re-ranked based on their visual similarities with the query image. A major challenge is that the similarities of visual features do not well correlate with images' semantic meanings which interpret users' search intention. Recently people proposed to match images in a semantic space which used attributes or reference classes closely related to the semantic meanings of images as basis. However, learning a universal visual semantic space to characterize highly diverse images from the web is difficult and inefficient. In this paper, we propose a novel image re-ranking framework, which automatically offline learns different semantic spaces for different query keywords. The visual features of images are projected into their related semantic spaces to get semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space specified by the query keyword. The proposed query-specific semantic signatures significantly improve both the accuracy and efficiency of image re-ranking. The original visual features of thousands of dimensions can be projected to the semantic signatures as short as 25 dimensions. Experimental results show that 25-40 percent relative improvement has been achieved on re-ranking precisions compared with the state-of-the-art methods.
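
    The semantic-signature idea can be sketched by projecting each image's visual feature onto a small set of reference-class centroids; this simplification (nearest-centroid similarities in place of the paper's learned per-keyword classifiers) is an illustrative assumption:

```python
import numpy as np

def semantic_signature(feat, centroids):
    """Project a high-dimensional visual feature onto reference-class
    centroids: the signature is the normalized similarity to each class,
    so its length equals the (small) number of reference classes."""
    d = np.linalg.norm(centroids - feat, axis=1)
    sim = np.exp(-d)
    return sim / sim.sum()

def rerank(query_feat, pool_feats, centroids):
    """Re-rank a pool of images by the distance between their semantic
    signatures and the query image's signature."""
    q = semantic_signature(query_feat, centroids)
    sigs = np.array([semantic_signature(f, centroids) for f in pool_feats])
    return np.argsort(np.linalg.norm(sigs - q, axis=1))
```

The key efficiency point survives the simplification: comparisons happen between short signatures (one dimension per reference class) rather than between thousand-dimensional visual features.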

  4. Hyperspectral remote sensing image retrieval system using spectral and texture features.

    Science.gov (United States)

    Zhang, Jing; Geng, Wenhao; Liang, Xi; Li, Jiafeng; Zhuo, Li; Zhou, Qianlan

    2017-06-01

    Although many content-based image retrieval systems have been developed, few studies have focused on hyperspectral remote sensing images. In this paper, a hyperspectral remote sensing image retrieval system based on spectral and texture features is proposed. The main contributions are fourfold: (1) considering the "mixed pixel" problem in hyperspectral images, endmembers are extracted as spectral features by an improved automatic pixel purity index algorithm, and texture features are then extracted with the gray level co-occurrence matrix; (2) a similarity measurement is designed for the hyperspectral remote sensing image retrieval system, in which the similarity of spectral features is measured with a mixed spectral information divergence and spectral angle match measurement, and the similarity of texture features is measured with the Euclidean distance; (3) considering the limited ability of the human visual system, the retrieval results are returned after synthesizing true color images based on the hyperspectral image characteristics; (4) the retrieval results are optimized by adjusting the feature weights of the similarity measurements according to the user's relevance feedback. The experimental results on NASA data sets show that our system achieves retrieval performance comparable or superior to existing hyperspectral analysis schemes.
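
    The two spectral similarity measures named in contribution (2) have standard definitions that can be written directly in numpy; this sketch shows the individual measures, not the paper's specific weighting of them:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Match (SAM): the angle between two spectra, in radians.
    Invariant to per-pixel illumination scaling of either spectrum."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def spectral_information_divergence(a, b, eps=1e-12):
    """Spectral Information Divergence (SID): symmetric KL divergence between
    the probability distributions induced by two nonnegative spectra."""
    p = a / a.sum() + eps
    q = b / b.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

One mixed SID-SAM measure known from the hyperspectral literature is SID(a, b) * tan(SAM(a, b)); whether this is the exact combination used in the paper is not stated in the abstract.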

  5. Using an Automatic Retrieval System in the Web To Assist Co-operative Learning.

    Science.gov (United States)

    Badue, Claudine; Vaz, Wesley; Albuquerque, Eduardo

    This paper presents an information agent and latent semantic-based indexing architecture to retrieve documents on the Internet. The system optimizes the search for documents in the Internet by automatically retrieving relevant links. The information used for the search can be obtained, for instance, from Internet browser caches and from grades of…

  6. Supporting Keyword Search for Image Retrieval with Integration of Probabilistic Annotation

    National Research Council Canada - National Science Library

    Tie Hua Zhou; Ling Wang; Keun Ho Ryu

    2015-01-01

    .... In order to help users formulate efficient and effective image retrieval, we present a novel integration of a probabilistic model based on keyword query architecture that models the probability...

  7. High resolution satellite image indexing and retrieval using SURF features and bag of visual words

    Science.gov (United States)

    Bouteldja, Samia; Kourgli, Assia

    2017-03-01

    In this paper, we evaluate the performance of SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as SIFT and SURF descriptors can deal with a large variation of scale, rotation and illumination of the images, providing, therefore, a better discriminative power and retrieval efficiency than global features, especially for HRSI which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve the retrieval accuracy, and we propose to learn a category-specific dictionary for each image category which results in a more discriminative image representation and boosts the image retrieval performance.
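
    The BoVW encoding step can be sketched as nearest-codeword quantization; this assumes the visual-word codebook has already been learned (e.g. by k-means over SURF descriptors), which the sketch takes as a given input:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantize local descriptors (n x d) against a visual-word codebook
    (k x d) and return the L1-normalized word-occurrence histogram, the
    fixed-length image representation used for retrieval."""
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = np.argmin(d, axis=1)
    hist = np.bincount(words, minlength=codebook.shape[0]).astype(np.float64)
    return hist / hist.sum()
```

Learning a separate codebook per land-use category, as the paper proposes, would amount to calling this with a category-specific `codebook`.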

  8. Web architecture for the remote browsing and analysis of distributed medical images and data.

    Science.gov (United States)

    Masseroli, M; Pinciroli, F

    2001-01-01

    To provide easy retrieval, integration and evaluation of multimodal medical images and data in a web browser environment, distributed application technologies and Java programming were used to develop a client-server architecture based on software agents. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. The client side is a Java applet running in a web browser and providing a friendly medical user interface to perform queries on patient and medical test data and to integrate and properly visualize the various query results. A set of tools based on the Java Advanced Imaging API enables processing and analysis of the retrieved bioimages, and quantification of their features in different regions of interest. The platform-independent Java technology makes the developed prototype easy to manage in a centralized form and to provide at each site where an intranet or internet connection is available. By giving healthcare providers effective tools for browsing, querying, visualizing and comprehensively evaluating medical images and records in all locations where they need them (e.g. emergency rooms, operating theaters, wards, or even outpatient clinics), the implemented prototype represents an important aid in providing more efficient diagnoses and medical treatments.

  9. Efficient Retrieval of Images for Search Engine by Visual Similarity and Re Ranking

    OpenAIRE

    Viswa S S

    2013-01-01

    Nowadays, web-scale image search engines (e.g. Google Image Search, Microsoft Live Image Search) rely almost purely on surrounding text features. Users type keywords in the hope of finding a certain type of image. The search engine returns thousands of images ranked by the text keywords extracted from the surrounding text. However, many of the returned images are noisy, disorganized, or irrelevant. Even Google and Microsoft make no use of visual information in image search. Using visual information...

  10. Optical double image security using random phase fractional Fourier domain encoding and phase-retrieval algorithm

    Science.gov (United States)

    Rajput, Sudheesh K.; Nishchal, Naveen K.

    2017-04-01

    We propose a novel security scheme based on the double random phase fractional domain encoding (DRPE) and modified Gerchberg-Saxton (G-S) phase retrieval algorithm for securing two images simultaneously. Any one of the images to be encrypted is converted into a phase-only image using modified G-S algorithm and this function is used as a key for encrypting another image. The original images are retrieved employing the concept of known-plaintext attack and following the DRPE decryption steps with all correct keys. The proposed scheme is also used for encryption of two color images with the help of convolution theorem and phase-truncated fractional Fourier transform. With some modification, the scheme is extended for simultaneous encryption of gray-scale and color images. As a proof-of-concept, simulation results have been presented for securing two gray-scale images, two color images, and simultaneous gray-scale and color images.
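
    The phase-retrieval component can be illustrated with the classic Gerchberg-Saxton loop between two FFT-related domains; note the paper uses a modified G-S algorithm in the fractional Fourier domain, whereas this sketch shows the standard ordinary-Fourier version for clarity:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=300, seed=0):
    """Classic Gerchberg-Saxton loop: find a phase such that a field with
    amplitude `source_amp` transforms (via FFT) into one whose amplitude
    matches `target_amp`. Amplitude constraints are enforced alternately
    in the two domains while the phase is kept free."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, source_amp.shape)
    for _ in range(iters):
        field = source_amp * np.exp(1j * phase)
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))   # impose far-field amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                          # keep only the phase
    return phase
```

The recovered phase-only function is what the scheme uses as the key for encrypting the second image.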

  11. Content based image retrieval using local binary pattern operator and data mining techniques.

    Science.gov (United States)

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from the images. These feature vectors globally describe the visual content present in an image, e.g., its texture, colour, shape, and the spatial relations between image regions. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then subsequently used to build an ultrasound image database, and a database with images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
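
    The basic 8-neighbour LBP operator that the variants above build on can be written compactly in numpy; this is the plain operator, not any of the specific variants the study compares:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel is encoded
    by thresholding its 8 neighbours against the centre value and packing
    the resulting bits into a code in [0, 255]."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(gray):
    """256-bin normalized LBP histogram used as a texture feature vector."""
    h = np.bincount(lbp_image(gray).ravel(), minlength=256).astype(np.float64)
    return h / h.sum()
```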

  12. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF.

    Science.gov (United States)

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration.

  13. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF.

    Directory of Open Access Journals (Sweden)

    Nouman Ali

    Full Text Available With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration.

  14. Diversification in an image retrieval system based on text and image processing

    Directory of Open Access Journals (Sweden)

    Adrian Iftene

    2014-11-01

    Full Text Available In this paper we present an image retrieval system created within the research project MUCKE (Multimedia and User Credibility Knowledge Extraction), a CHIST-ERA research project in which UAIC ("Alexandru Ioan Cuza" University of Iasi) is one of the partners, together with the Technical University of Vienna (Austria), the CEA-LIST Institute in Paris (France) and Bilkent University in Ankara (Turkey). Our discussion in this work focuses mainly on the components of the image retrieval system proposed in MUCKE, and we present the work done by the UAIC group. MUCKE incorporates modules for processing multimedia content in different modes and languages (English, French, German and Romanian), and UAIC is responsible for the text processing tasks (for Romanian and English). One of the problems addressed by our work is search results diversification. In order to solve this problem, we first process the user queries in both languages and secondly, we create clusters of similar images.

  15. Separability versus prototypicality in handwritten word-image retrieval

    NARCIS (Netherlands)

    van Oosten, Jean-Paul; Schomaker, Lambertus

    Hit lists are at the core of retrieval systems. The top ranks are important, especially if user feedback is used to train the system. Analysis of hit lists revealed counter-intuitive instances in the top ranks for good classifiers. In this study, we propose that two functions need to be optimised:

  16. Mobile medical visual information retrieval.

    Science.gov (United States)

    Depeursinge, Adrien; Duc, Samuel; Eggel, Ivan; Müller, Henning

    2012-01-01

    In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query via web services a medical information retrieval engine optimizing the amount of data to be transferred in wireless form. Visual and textual retrieval engines with state-of-the-art performance were integrated. Results obtained show a good usability of the software. Future use in clinical environments has the potential of increasing quality of patient care through bedside access to the medical literature in context.

  17. Sketch-Based Image Retrieval: Benchmark and Bag-of-Features Descriptors.

    Science.gov (United States)

    Eitz, M; Hildebrand, K; Boubekeur, T; Alexa, M

    2011-11-01

    We introduce a benchmark for evaluating the performance of large-scale sketch-based image retrieval systems. The necessary data are acquired in a controlled user study where subjects rate how well given sketch/image pairs match. We suggest how to use the data for evaluating the performance of sketch-based image retrieval systems. The benchmark data as well as the large image database are made publicly available for further studies of this type. Furthermore, we develop new descriptors based on the bag-of-features approach and use the benchmark to demonstrate that they significantly outperform other descriptors in the literature.

  18. Quantifying the margin sharpness of lesions on radiological images for content-based image retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Xu Jiajing; Napel, Sandy; Greenspan, Hayit; Beaulieu, Christopher F.; Agrawal, Neeraj; Rubin, Daniel [Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States); Department of Radiology, Stanford University, Stanford, California 94305 (United States); Department of Biomedical Engineering, Tel Aviv University, Tel Aviv 69978 (Israel); Department of Radiology, Stanford University, Stanford, California 94305 (United States); Department of Computer Science, Stanford University, Stanford, California 94305 (United States); Department of Radiology, Stanford University, Stanford, California 94305 (United States)

    2012-09-15

    . Equivalence across deformations was assessed using Schuirmann's paired two one-sided tests. Results: In simulated images, the concordance correlation between measured gradient and actual gradient was 0.994. The mean (standard deviation) NDCG scores for the retrieval of K images, K = 5, 10, and 15, were 84% (8%), 85% (7%), and 85% (7%) for CT images containing liver lesions, and 82% (7%), 84% (6%), and 85% (4%) for CT images containing lung nodules, respectively. The authors' proposed method outperformed the two existing margin characterization methods in average NDCG scores over all K, by 1.5% and 3% in datasets containing liver lesions, and 4.5% and 5% in datasets containing lung nodules. Equivalence testing showed that the authors' feature is more robust across all margin deformations (p < 0.05) than the two existing methods for margin sharpness characterization in both simulated and clinical datasets. Conclusions: The authors have described a new image feature to quantify the margin sharpness of lesions. It has strong correlation with known margin sharpness in simulated images and in clinical CT images containing liver lesions and lung nodules. This image feature has excellent performance for retrieving images with similar margin characteristics, suggesting potential utility, in conjunction with other lesion features, for content-based image retrieval applications.
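
    The NDCG metric reported above has a standard definition that can be computed directly; this sketch uses the common logarithmic-discount form, as a plausible reading of the evaluation rather than the authors' exact implementation:

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """Normalized Discounted Cumulative Gain for the top-k retrieved items.
    `relevances` are graded relevance scores in retrieved order; the ideal
    ordering (sorted descending) normalizes the score into [0, 1]."""
    rel = np.asarray(relevances, dtype=np.float64)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = np.sum(rel * discounts)
    ideal = np.sort(np.asarray(relevances, dtype=np.float64))[::-1][:k]
    idcg = np.sum(ideal * discounts[:ideal.size])
    return float(dcg / idcg) if idcg > 0 else 0.0
```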

  19. Pleasant/Unpleasant Filtering for Affective Image Retrieval Based on Cross-Correlation of EEG Features

    Directory of Open Access Journals (Sweden)

    Keranmu Xielifuguli

    2014-01-01

    Full Text Available People often make decisions based on sensitivity rather than rationality. In the field of biological information processing, methods are available for analyzing biological information directly based on the electroencephalogram (EEG) to determine the pleasant/unpleasant reactions of users. In this study, we propose a sensitivity filtering technique for discriminating preferences (pleasant/unpleasant) for images using a sensitivity image filtering system based on EEG. Using a set of images retrieved by similarity retrieval, we perform the sensitivity-based pleasant/unpleasant classification of images based on the affective features extracted from images with the maximum entropy method (MEM). In the present study, the affective features comprised cross-correlation features obtained from EEGs produced when an individual observed an image. However, it is difficult to measure the EEG when a subject visualizes an unknown image. Thus, we propose a solution where a linear regression method based on canonical correlation is used to estimate the cross-correlation features from image features. Experiments were conducted to evaluate the validity of sensitivity filtering compared with image similarity retrieval methods based on image features. We found that sensitivity filtering using color correlograms was suitable for the classification of preferred images, while sensitivity filtering using local binary patterns was suitable for the classification of unpleasant images. Moreover, sensitivity filtering using local binary patterns for unpleasant images had a 90% success rate. Thus, we conclude that the proposed method is efficient for filtering unpleasant images.

  20. High-performance web viewer for cardiac images

    Science.gov (United States)

    dos Santos, Marcelo; Furuie, Sergio S.

    2004-04-01

    With the advent of digital devices for medical diagnosis, the use of regular film in radiology has decreased. Thus, the management and handling of medical images in digital format has become an important and critical task. In cardiology, for example, the main difficulty is displaying dynamic images with the appropriate color palette and frame rate used in the acquisition process by Cath, Angio and Echo systems. Another difficulty is handling large images in the memory of any existing personal computer, including thin clients. In this work we present a web-based application that carries out these tasks with robustness and excellent performance, without burdening the server and network. This application provides near-diagnostic quality display of cardiac images stored as DICOM 3.0 files via a web browser and provides a set of resources that allows the viewing of still and dynamic images. It can access image files from local disks or a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image processing tools, linear and angular measurements, on-screen annotations, image printing, exporting of DICOM images to other image formats, and many others, all behind a pleasant, user-friendly interface presented inside a web browser by means of a Java application. This approach offers several advantages over most medical image viewers, such as ease of installation, integration with other systems by means of public and standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.

  1. Image Retrieval Based on Multiview Constrained Nonnegative Matrix Factorization and Gaussian Mixture Model Spectral Clustering Method

    Directory of Open Access Journals (Sweden)

    Qunyi Xie

    2016-01-01

    Full Text Available Content-based image retrieval has recently become an important research topic and has been widely used for managing images from repertories. In this article, we address an efficient technique, called MNGS, which integrates multiview constrained nonnegative matrix factorization (NMF) and Gaussian mixture model (GMM)-based spectral clustering for image retrieval. In the proposed methodology, the multiview NMF scheme provides competitive sparse representations of underlying images through decomposition of a similarity-preserving matrix that is formed by fusing multiple features from different visual aspects. In particular, the proposed method merges manifold constraints into the standard NMF objective function to impose an orthogonality constraint on the basis matrix and satisfy the structure preservation requirement of the coefficient matrix. To manipulate the clustering method on sparse representations, this paper has developed a GMM-based spectral clustering method in which the Gaussian components are regrouped in spectral space, which significantly improves the retrieval effectiveness. In this way, image retrieval of the whole database translates to a nearest-neighbour search in the cluster containing the query image. Simultaneously, this study investigates the proof of convergence of the objective function and the analysis of the computational complexity. Experimental results on three standard image datasets reveal the advantages that can be achieved with the proposed retrieval scheme.
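
    The NMF core of the scheme can be sketched with the plain Lee-Seung multiplicative updates; note this omits the multiview fusion, manifold constraints and orthogonality terms that the paper adds on top:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Plain NMF via multiplicative updates: factorize a nonnegative matrix
    V ~ W @ H with W, H >= 0, minimizing the Frobenius reconstruction error.
    The updates preserve nonnegativity because every factor is nonnegative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

In the retrieval setting, the rows of the coefficient matrix play the role of the sparse image representations that are subsequently clustered.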

  2. GFG-Based Compression and Retrieval of Document Images in Indian Scripts

    Science.gov (United States)

    Harit, Gaurav; Chaudhury, Santanu; Garg, Ritu

    Indexing and retrieval of Indian language documents is an important problem. We present an interactive access scheme for Indian language document collection using techniques for word-image-based search. The compression and retrieval paradigm we propose is applicable even for those Indian scripts for which reliable OCR technology is not available. Our technique for word spotting is based on exploiting the geometrical features of the word image. The word image features are represented in the form of a graph called geometric feature graph (GFG). The GFG is encoded as a string which serves as a compressed representation of the word image skeleton. We have also augmented the GFG-based word image spotting with latent semantic analysis for more effective retrieval. The query is specified as a set of word images and the documents that best match with the query representation in the latent semantic space are retrieved. The retrieval paradigm is further enhanced to the conceptual level with the use of document image content-domain knowledge specified in the form of an ontology.

  3. Video and image retrieval beyond the cognitive level: the needs and possibilities

    Science.gov (United States)

    Hanjalic, Alan

    2001-01-01

    The worldwide research efforts in the area of image and video retrieval have so far concentrated on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user searches for 'factual' or 'objective' content, such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a given topic, a movie dialog between actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address retrieval applications at the so-called affective level of content abstraction, where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments, and looking for 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or the 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  4. Robust Web Image Annotation via Exploring Multi-Facet and Structural Knowledge.

    Science.gov (United States)

    Hu, Mengqiu; Yang, Yang; Shen, Fumin; Zhang, Luming; Shen, Heng Tao; Li, Xuelong

    2017-10-01

    Driven by the rapid development of Internet and digital technologies, we have witnessed the explosive growth of Web images in recent years. Seeing that labels can reflect the semantic contents of the images, automatic image annotation, which can further facilitate image semantic indexing, retrieval, and other image management tasks, has become one of the most crucial research directions in multimedia. Most existing annotation methods heavily rely on well-labeled training data (expensive to collect) and/or a single view of visual features (insufficient representative power). In this paper, inspired by the promising advances in feature engineering (e.g., CNN features and scale-invariant feature transform features) and the inexhaustible image data (associated with noisy and incomplete labels) on the Web, we propose an effective and robust scheme, termed robust multi-view semi-supervised learning (RMSL), for the image annotation task. Specifically, we exploit both labeled and unlabeled images to uncover the intrinsic structural information of the data. Meanwhile, to comprehensively describe an individual datum, we take advantage of the correlated and complementary information derived from multiple facets of the image data (i.e., multiple views or features). We devise a robust pairwise constraint on the outcomes of different views to achieve annotation consistency. Furthermore, we integrate a robust classifier-learning component via the l2,p loss, which provides effective noise-identification power during the learning process. Finally, we devise an efficient iterative algorithm to solve the optimization problem in RMSL. We conduct comprehensive experiments on three different data sets, and the results illustrate that our proposed approach is promising for automatic image annotation.

  5. Retrieve polarization aberration from image degradation: a new measurement method in DUV lithography

    Science.gov (United States)

    Xiang, Zhongbo; Li, Yanqiu

    2017-10-01

    Detailed knowledge of the polarization aberration (PA) of the projection lens in higher-NA DUV lithographic imaging is necessary because of its impact on imaging degradation, and precise measurement of PA is conducive to computational lithography techniques such as RET and OPC. Current in situ methods that measure PA through the detection of aerial-image degradations require a linear approximation and assume a 3-beam/2-beam interference condition. The former approximation neglects the coupling effect of the PA coefficients, which significantly influences the accuracy of PA retrieval. The latter assumption restricts the feasible pitch of test masks in higher-NA systems, conflicts with the Kirchhoff diffraction model of the test mask used in the retrieval model, and introduces the 3D mask effect as a source of retrieval error. In this paper, a new in situ measurement method for PA is proposed. It establishes, through vector aerial imaging, an analytical quadratic relation between the PA coefficients and the degradations of aerial images of one-dimensional dense lines under coherent illumination, without relying on the 3-beam/2-beam interference assumption or linear approximation. In this case, the retrieval of PA from image degradation can be converted from a nonlinear system of m quadratic equations into a multi-objective quadratic optimization problem, which is finally solved by a nonlinear least-squares method. Preliminary simulation results are given to demonstrate the correctness and accuracy of the new PA retrieval model.

  6. A novel 3D shape descriptor for automatic retrieval of anatomical structures from medical images

    Science.gov (United States)

    Nunes, Fátima L. S.; Bergamasco, Leila C. C.; Delmondes, Pedro H.; Valverde, Miguel A. G.; Jackowski, Marcel P.

    2017-03-01

    Content-based image retrieval (CBIR) aims at retrieving from a database objects that are similar to an object provided by a query, taking into consideration a set of extracted features. While CBIR has been widely applied in the two-dimensional image domain, the retrieval of 3D objects from medical image datasets using CBIR remains to be explored. In this context, the development of descriptors that can capture information specific to organs or structures is desirable. In this work, we focus on the retrieval of two anatomical structures commonly imaged by Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) techniques, the left ventricle of the heart and blood vessels. Towards this aim, we developed the Area-Distance Local Descriptor (ADLD), a novel 3D local shape descriptor that employs mesh geometry information, namely facet area and distance from centroid to surface, to identify shape changes. Because ADLD only considers surface meshes extracted from volumetric medical images, it substantially diminishes the amount of data to be analyzed. A 90% precision rate was obtained when retrieving both convex (left ventricle) and non-convex structures (blood vessels), allowing for detection of abnormalities associated with changes in shape. Thus, ADLD has the potential to aid in the diagnosis of a wide range of vascular and cardiac diseases.
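    The two geometric quantities the abstract names, facet area and distance from centroid to surface, can be turned into a simple histogram descriptor as sketched below (a simplified stand-in for ADLD, assuming a triangle mesh; the binning choices and names are illustrative, not the authors'):

    ```python
    import numpy as np

    def mesh_descriptor(verts, faces, bins=8):
        """Histogram of per-facet areas and centroid-to-facet distances
        (a simplified stand-in for the ADLD shape descriptor)."""
        v = verts[faces]                        # (F, 3, 3) facet corner coords
        # facet area via the cross product of two edge vectors
        areas = 0.5 * np.linalg.norm(
            np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0]), axis=1)
        centroid = verts.mean(axis=0)
        dists = np.linalg.norm(v.mean(axis=1) - centroid, axis=1)
        ha, _ = np.histogram(areas, bins=bins, density=True)
        hd, _ = np.histogram(dists, bins=bins, density=True)
        return np.concatenate([ha, hd])

    # unit tetrahedron as a toy "anatomical surface"
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
    desc = mesh_descriptor(verts, faces)
    ```

    Two meshes can then be compared by any vector distance between their descriptors, as in standard CBIR.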

  7. Chinese Herbal Medicine Image Recognition and Retrieval by Convolutional Neural Network.

    Science.gov (United States)

    Sun, Xin; Qian, Huinan

    2016-01-01

    Chinese herbal medicine image recognition and retrieval have great potential for practical applications. Several previous studies have focused on recognition with hand-crafted image features, but they have two limitations. Firstly, most of these hand-crafted features are low-level image representations, which are easily affected by noise and background. Secondly, the medicine images in existing datasets are very clean, with no backgrounds, which makes them difficult to use in practical applications. Therefore, designing high-level image representations for recognition and retrieval in real-world medicine images poses a great challenge. Inspired by the recent progress of deep learning in computer vision, we expect that deep learning methods can provide robust medicine image representations. In this paper, we propose to use a Convolutional Neural Network (CNN) for Chinese herbal medicine image recognition and retrieval. For the recognition problem, we use the softmax loss to optimize the recognition network; for the retrieval problem, we then fine-tune the recognition network by adding a triplet loss to search for the most similar medicine images. To evaluate our method, we construct a public database of herbal medicine images with cluttered backgrounds, containing in total 5523 images in 95 popular Chinese medicine categories. Experimental results show that our method achieves an average recognition precision of 71% and an average retrieval precision of 53% over all 95 medicine categories, which is quite promising given that the real-world images contain multiple pieces of occluded herbs and cluttered backgrounds. Moreover, our proposed method achieves state-of-the-art performance, improving on previous studies by a large margin.
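    The retrieval fine-tuning step described above rests on a triplet loss, which can be sketched as follows (a minimal NumPy sketch; the margin value and the toy embeddings are assumptions, not taken from the paper):

    ```python
    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=0.2):
        """Hinge-style triplet loss: pull the anchor toward the positive
        embedding and push it away from the negative by at least `margin`."""
        d_pos = np.sum((anchor - positive) ** 2, axis=1)
        d_neg = np.sum((anchor - negative) ** 2, axis=1)
        return np.maximum(0.0, d_pos - d_neg + margin).mean()

    a = np.array([[1.0, 0.0]])
    p = np.array([[0.9, 0.1]])   # same herb category -> should be close
    n = np.array([[0.0, 1.0]])   # different category -> should be far
    loss_good = triplet_loss(a, p, n)   # ordering already satisfied -> 0
    loss_bad = triplet_loss(a, n, p)    # violated ordering -> positive loss
    ```

    Minimizing this loss over many (anchor, positive, negative) triplets shapes the embedding space so that nearest-neighbour search returns same-category images first.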

  8. Chinese Herbal Medicine Image Recognition and Retrieval by Convolutional Neural Network.

    Directory of Open Access Journals (Sweden)

    Xin Sun

    Full Text Available Chinese herbal medicine image recognition and retrieval have great potential for practical applications. Several previous studies have focused on recognition with hand-crafted image features, but they have two limitations. Firstly, most of these hand-crafted features are low-level image representations, which are easily affected by noise and background. Secondly, the medicine images in existing datasets are very clean, with no backgrounds, which makes them difficult to use in practical applications. Therefore, designing high-level image representations for recognition and retrieval in real-world medicine images poses a great challenge. Inspired by the recent progress of deep learning in computer vision, we expect that deep learning methods can provide robust medicine image representations. In this paper, we propose to use a Convolutional Neural Network (CNN) for Chinese herbal medicine image recognition and retrieval. For the recognition problem, we use the softmax loss to optimize the recognition network; for the retrieval problem, we then fine-tune the recognition network by adding a triplet loss to search for the most similar medicine images. To evaluate our method, we construct a public database of herbal medicine images with cluttered backgrounds, containing in total 5523 images in 95 popular Chinese medicine categories. Experimental results show that our method achieves an average recognition precision of 71% and an average retrieval precision of 53% over all 95 medicine categories, which is quite promising given that the real-world images contain multiple pieces of occluded herbs and cluttered backgrounds. Moreover, our proposed method achieves state-of-the-art performance, improving on previous studies by a large margin.

  9. A Preliminary Mapping of Web Queries Using Existing Image Query Schemes.

    Science.gov (United States)

    Jansen, Bernard J.

    End user searching on the Web has become the primary method of locating images for many people. This study investigates the nature of Web image queries by attempting to map them to known image classification schemes. In this study, approximately 100,000 image queries from a major Web search engine were collected in 1997, 1999, and 2001. A…

  10. Powerful Descriptor for Image Retrieval Based on Angle Edge and Histograms

    Directory of Open Access Journals (Sweden)

    Mussarat Yasmin

    2013-10-01

    Full Text Available Having gained the status of an active and important research area, content-based image retrieval has been approached in a number of different ways since its inception. In the proposed method, a new angle-orientation histogram, named the Angle Edge Histogram, is introduced. By applying the Pythagorean theorem to the image, very useful characteristics are obtained for image matching, search and retrieval. The proposed method has also been compared with existing methods, and the results show that it outperforms them in precision, recall and the balance of precision and recall. The proposed method achieves average precision and recall rates of 94% and 79%, respectively.
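    One generic reading of such an angle-edge histogram, with the Pythagorean theorem supplying the gradient magnitude, can be sketched as below (an illustrative reconstruction, not the authors' exact formulation; the magnitude threshold and bin count are assumptions):

    ```python
    import numpy as np

    def angle_edge_histogram(img, bins=8, mag_thresh=0.1):
        """Histogram of edge orientations, weighted by gradient magnitude.
        Magnitude comes from the Pythagorean theorem: sqrt(gx^2 + gy^2)."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.sqrt(gx ** 2 + gy ** 2)        # Pythagoras
        ang = np.arctan2(gy, gx)                # orientation in [-pi, pi]
        mask = mag > mag_thresh                 # keep real edges only
        hist, _ = np.histogram(ang[mask], bins=bins,
                               range=(-np.pi, np.pi), weights=mag[mask])
        total = hist.sum()
        return hist / total if total > 0 else hist

    # toy image: a vertical step edge -> all gradients point along +x (angle 0)
    img = np.zeros((16, 16))
    img[:, 8:] = 1.0
    h = angle_edge_histogram(img)
    ```

    Images are then matched by comparing their normalized histograms, e.g. with a histogram-intersection or Euclidean measure.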

  11. Genetic Algorithm-Based Relevance Feedback for Image Retrieval Using Local Similarity Patterns.

    Science.gov (United States)

    Stejic, Zoran; Takama, Yasufumi; Hirota, Kaoru

    2003-01-01

    Proposes local similarity pattern (LSP) as a new method for computing digital image similarity. Topics include optimizing similarity computation based on genetic algorithm; relevance feedback; and an evaluation of LSP on five databases that showed an increase in retrieval precision over other methods for computing image similarity. (Author/LRW)

  12. Context-based adaptive filtering of interest points in image retrieval

    DEFF Research Database (Denmark)

    Nguyen, Phuong Giang; Andersen, Hans Jørgen

    2009-01-01

    Interest points have been used as local features with success in many computer vision applications such as image/video retrieval and object recognition. However, a major issue when using this approach is a large number of interest points detected from each image and created a dense feature space....

  13. Thin client (web browser)-based collaboration for medical imaging and web-enabled data.

    Science.gov (United States)

    Le, Tuong Huu; Malhi, Nadeem

    2002-01-01

    Utilizing thin client software and open source server technology, a collaborative architecture was implemented allowing for sharing of Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images with real-time markup. Using the Web browser as a thin client integrated with standards-based components, such as DHTML (dynamic hypertext markup language), JavaScript, and Java, collaboration was achieved through a Web server/proxy server combination utilizing Java Servlets and Java Server Pages. A typical collaborative session involved the driver, who directed the navigation of the other collaborators, the passengers, and provided collaborative markups of medical and nonmedical images. The majority of processing was performed on the server side, allowing for the client to remain thin and more accessible.

  14. A Web-Enabled Research Database with Image Recognition

    Science.gov (United States)

    Dulbandzhyan, Ronda; Duncan, Raymond G.; Rimoin, David L.

    1999-01-01

    Dr. Rodney Roentgen is studying a series of films for a new patient with achondroplasia. The pelvis is typical of the disorder, but there are some unusual features in the hands. Dr. Roentgen would like to determine whether X-ray images with these features are on file for any other patient seen by the International Skeletal Dysplasia Registry. Using the traditional approach, Dr. Roentgen would have to manually inspect thousands of other X-ray films to try to find a match. With the use of stored database images, Knowledge-Based Retrieval data elements and the Oracle Visual Information Retrieval (VIR) cartridge, this searching process can be greatly streamlined.

  15. The Complete Local Spatial Central Derivative Binary Pattern for Ultrasound Kidney Images Retrieval

    Directory of Open Access Journals (Sweden)

    Chelladurai CALLINS CHRISTIYANA

    2015-12-01

    Full Text Available Content-Based Image Retrieval (CBIR) is an active research domain in medical applications. Feature extraction is the vital procedure in CBIR. This work proposes a new feature-extraction procedure, named the Complete Local Spatial Central Derivative Binary Pattern (CLSCDBP), for ultrasound kidney image retrieval. In a local 3×3 square region of an image, the new pattern considers the relationships among the surrounding neighbors and their own neighbors at different spatial distances, whereas the standard Local Binary Pattern reflects only the relationships between the center pixel and its surrounding neighbors. Although surrounding-neighbor relationships have been considered in the Local Mesh Peak Valley Edge Patterns (LMePVEP), the proposed feature differs by deriving the local pattern from an encoding of the central derivative of the surrounding neighbors of the center pixel. The neighbors of each surrounding pixel at different spatial distances are considered during central-derivative computation. The proposed local pattern is made complete by incorporating global mean statistics. The performance of the new feature is examined in an ultrasound kidney image retrieval system. The experimental results confirm that CLSCDBP achieves a considerable improvement over LMePVEP in retrieval efficiency for ultrasound kidney images.
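    For reference, the standard 3×3 Local Binary Pattern that CLSCDBP extends can be sketched as follows (only the baseline LBP encoding is shown; the central-derivative extension and the global-mean completion are not reproduced):

    ```python
    import numpy as np

    def lbp_3x3(img):
        """Standard local binary pattern over a 3x3 neighbourhood:
        each of the 8 neighbours is thresholded against the centre pixel."""
        img = img.astype(int)
        c = img[1:-1, 1:-1]                       # centre pixels
        # neighbour offsets in clockwise order starting at the top-left
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(c)
        for bit, (dy, dx) in enumerate(offs):
            nb = img[1 + dy: img.shape[0] - 1 + dy,
                     1 + dx: img.shape[1] - 1 + dx]
            code |= (nb >= c).astype(int) << bit  # set bit if neighbour >= centre
        return code

    img = np.array([[5, 5, 5],
                    [5, 4, 5],
                    [5, 5, 5]])
    # centre pixel 4: every neighbour >= 4, so all 8 bits are set -> 255
    code = lbp_3x3(img)
    ```

    A histogram of these codes over the image is the texture descriptor that retrieval then compares.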

  16. A Scene Text-Based Image Retrieval System

    Science.gov (United States)

    2012-12-01

    images. The majority of OCR engines are designed for scanned text and so depend on segmentation that correctly separates text from background...size is 8×8, cell size is 2×2 and 9 bins for the histogram. For each candidate word, a HOG feature is extracted and used by the SVM classifier to verify...images. One approach is to extract text appearing in images, which often gives an indication of a scene's semantic content. However, it can be

  17. Embedding Web-Based Statistical Translation Models in Cross-Language Information Retrieval

    NARCIS (Netherlands)

    Kraaij, W.; Nie, J.Y.; Simard, M.

    2003-01-01

    Although more and more language pairs are covered by machine translation (MT) services, there are still many pairs that lack translation resources. Cross-language information retrieval (CLIR) is an application that needs translation functionality of a relatively low level of sophistication, since

  18. Publication and Retrieval of Computational Chemical-Physical Data Via the Semantic Web. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Ostlund, Neil [Chemical Semantics, Inc., Gainesville, FL (United States)

    2017-07-20

    This research showed the feasibility of applying the concepts of the Semantic Web to computational chemistry. We have created the first web portal (www.chemsem.com) that allows data created in quantum chemistry calculations, and other such chemistry calculations, to be placed on the web in a way that makes the data accessible to scientists in a semantic form never before possible. The semantic web nature of the portal allows data to be searched, found, and used, as an advance over the usual approach of a relational database. The semantic data on our portal has the nature of a Giant Global Graph (GGG) that can be easily merged with related data and searched globally via the SPARQL Protocol and RDF Query Language (SPARQL), which makes global searches for data easier than with traditional methods. Our Semantic Web portal requires that the data be understood by a computer and hence defined by an ontology (vocabulary), which the computer uses to understand the data. We have created such an ontology for computational chemistry (purl.org/gc) that encapsulates a broad knowledge of the field; we refer to it as the Gainesville Core. While it is perhaps the first ontology for computational chemistry and is used by our portal, it is only the start of what must be a long, multi-partner effort to define computational chemistry. In conjunction with these efforts we have defined a new potential file standard for computational chemistry data, the Common Standard for eXchange (CSX). A CSX file is the precursor of data in the Resource Description Framework (RDF) form that the Semantic Web requires; our portal translates CSX files (as well as other computational chemistry data files) into RDF files that become part of the graph database that the Semantic Web employs. We propose the CSX file as a convenient way to encapsulate computational chemistry data.

  19. Automated and effective content-based image retrieval for digital mammography.

    Science.gov (United States)

    Singh, Vibhav Prakash; Srivastava, Subodh; Srivastava, Rajeev

    2018-01-01

    Nowadays, a huge number of mammograms are generated in hospitals for the diagnosis of breast cancer. Content-based image retrieval (CBIR) can contribute to more reliable diagnosis by classifying query mammograms and retrieving similar mammograms already annotated with diagnostic descriptions and treatment results. Since labels, artifacts, and pectoral muscles present in mammograms can bias the retrieval procedure, automated detection and exclusion of these noise patterns and/or non-breast regions is an essential pre-processing step. In this study, an efficient and automated CBIR system for mammograms was developed and tested. First, pre-processing steps including automatic label and artifact suppression, automatic pectoral muscle removal, and image enhancement using the adaptive median filter were applied. Next, pre-processed images were segmented using a co-occurrence-thresholds-based seeded region growing algorithm. Furthermore, a set of image features, including shape, histogram-based statistical, Gabor, wavelet, and Gray Level Co-occurrence Matrix (GLCM) features, was computed from the segmented region. To select the optimal features, a minimum redundancy maximum relevance (mRMR) feature selection method was then applied. Finally, similar images were retrieved using the Euclidean distance similarity measure. Comparative experiments conducted on the benchmark Mammographic Image Analysis Society (MIAS) database confirmed the effectiveness of the proposed work, which achieved an average precision of 72% and 61.30% for the normal and abnormal classes of mammograms, respectively.
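    The final step of such a pipeline, ranking database images by Euclidean distance over the selected feature vectors, can be sketched as follows (the toy feature vectors stand in for the shape/Gabor/wavelet/GLCM features; all names are illustrative):

    ```python
    import numpy as np

    def retrieve(query, db_feats, top_k=3):
        """Rank database images by Euclidean distance to the query feature
        vector and return the indices of the top_k closest matches."""
        d = np.linalg.norm(db_feats - query, axis=1)
        return np.argsort(d)[:top_k]

    # toy 2-D feature vectors for four "mammograms"
    db = np.array([[0.0, 0.0],
                   [1.0, 1.0],
                   [0.1, 0.0],
                   [5.0, 5.0]])
    q = np.array([0.02, 0.0])
    idx = retrieve(q, db, top_k=2)   # indices of the two nearest images
    ```

    In a full system the returned indices would map back to annotated mammograms, whose diagnostic descriptions are then shown alongside the query.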

  20. The iMars WebGIS - Spatio-Temporal Data Queries and Single Image Map Web Services

    Science.gov (United States)

    Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Muller, Jan-Peter; van Gasselt, Stephan; Sidiropoulos, Panagiotis; Lanz-Kroechert, Julia

    2017-04-01

    Server backend which in turn delivers the response back to the MapCache instance. Web frontend: We have implemented a web-GIS frontend based on various OpenLayers components. The basemap is a global color-hillshaded HRSC bundle-adjusted DTM mosaic with a resolution of 50 m per pixel. The new bundle-block-adjusted quadrangle mosaics of the MC-11 quadrangle, both image and DTM, are included with opacity-slider options. The layer user interface has been adapted from the ol3-layerswitcher and extended with foldable and switchable groups, layer sorting (by resolution, by time and alphabetically) and reordering (drag-and-drop). A collapsible time panel accommodates a time-slider interface where the user can filter the visible data by a range of Mars or Earth dates and/or by solar longitudes. The visualisation of time series of single images is controlled by a specific toolbar enabling a workflow of image selection (by point or bounding box), dynamic image loading and playback of single images in a video-player-like environment. During a stress-test campaign we demonstrated that the system is capable of serving up to 10 simultaneous users on its current lightweight development hardware. It is planned to relocate the software to more powerful hardware by the time of this conference. Conclusions/Outlook: The iMars webGIS is an expert tool for the detection and visualization of surface changes. We demonstrate a technique to dynamically retrieve and display single images based on the time-series structure of the data. Together with the multi-temporal database and its MapServer/MapCache backend, it provides a stable and high-performance environment for the dissemination of the various iMars products. Acknowledgements: This research has received funding from the EU's FP7 Programme under iMars 607379 and by the German Space Agency (DLR Bonn), grant 50 QM 1301 (HRSC on Mars Express).

  1. Biomedical article retrieval using multimodal features and image annotations in region-based CBIR

    Science.gov (United States)

    You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Rahman, Md Mahmudur; Govindaraju, Venu; Thoma, George R.

    2010-01-01

    Biomedical images are invaluable in establishing diagnosis, acquiring technical skills, and implementing best practices in many areas of medicine. At present, images needed for instructional purposes or in support of clinical decisions appear in specialized databases and in biomedical articles, and are often not easily accessible to retrieval tools. Our goal is to automatically annotate images extracted from scientific publications with respect to their usefulness for clinical decision support and instructional purposes, and to project the annotations onto images stored in databases by linking images through content-based image similarity. Authors often use text labels and pointers overlaid on figures and illustrations in the articles to highlight regions of interest (ROIs). These annotations are then referenced in the caption text or figure citations in the article text. In previous research we developed two methods (a heuristic method and a dynamic time warping-based method) for localizing and recognizing such pointers on biomedical images. In this work, we add robustness to our previous efforts by using a machine-learning-based approach to localizing and recognizing the pointers. Identifying these can assist in extracting relevant image content at regions within the image that are likely to be highly relevant to the discussion in the article text. Image regions can then be annotated using biomedical concepts from extracted snippets of text pertaining to images in scientific biomedical articles that are identified using the National Library of Medicine's Unified Medical Language System® (UMLS) Metathesaurus. The resulting regional annotations and extracted image content are then used as indices for biomedical article retrieval using multimodal features and region-based content-based image retrieval (CBIR) techniques. The hypothesis that such an approach would improve biomedical document retrieval is validated through experiments on an expert-marked biomedical article

  2. Application of the Progressive Wavelet Correlation to Content-Based Image Retrieving

    OpenAIRE

    Stojanovic, Igor; Kraljevski, Ivan; Chungurski, Slavco

    2010-01-01

    The following study presents a method for search and retrieval of images from massive image collections. The method consists of two phases. The first phase uses well-known methods of image search based on descriptors of the content of the query image. In the second phase, the progressive wavelet correlation method is applied to the small number of candidate images selected in the first phase. The final search result is the wanted image, if it is in the database. Experiments are pe...

  3. Three-dimensional imaging using phase retrieval with two focus planes

    Science.gov (United States)

    Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh; Meir, Rinat; Zalevsky, Zeev

    2016-03-01

    This work presents a technique for full 3D imaging of biological samples tagged with gold nanoparticles (GNPs) using only two images, rather than the many images per volume currently needed for 3D optical-sectioning microscopy. The proposed approach is based on the Gerchberg-Saxton (GS) phase retrieval algorithm. The reconstructed field is free-space propagated to all other focus planes in post-processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because phase retrieval is applied to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. In addition, since the method requires capturing only two images, it is suitable for 3D live-cell imaging. The proposed concept is presented and validated both on simulated data and experimentally.
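    The two-plane Gerchberg-Saxton iteration at the core of this approach can be sketched as follows (the classic object-plane/Fourier-plane variant, assuming the two measured amplitudes are related by a Fourier transform; the free-space propagation to further focus planes is not shown):

    ```python
    import numpy as np

    def gerchberg_saxton(amp_obj, amp_four, iters=100, seed=0):
        """Classic two-plane Gerchberg-Saxton: recover a phase consistent with
        measured amplitudes in the object plane and its Fourier plane."""
        rng = np.random.default_rng(seed)
        phase = rng.uniform(0, 2 * np.pi, amp_obj.shape)
        field = amp_obj * np.exp(1j * phase)
        for _ in range(iters):
            F = np.fft.fft2(field)
            F = amp_four * np.exp(1j * np.angle(F))         # enforce Fourier amplitude
            field = np.fft.ifft2(F)
            field = amp_obj * np.exp(1j * np.angle(field))  # enforce object amplitude
        return field

    # synthetic ground truth: flat amplitude, smooth quadratic phase
    n = 32
    y, x = np.mgrid[:n, :n]
    true = np.exp(1j * 0.01 * ((x - n / 2) ** 2 + (y - n / 2) ** 2))
    amp_obj = np.abs(true)
    amp_four = np.abs(np.fft.fft2(true))
    rec = gerchberg_saxton(amp_obj, amp_four)
    # residual mismatch against the measured Fourier amplitude
    err = np.abs(np.abs(np.fft.fft2(rec)) - amp_four).mean() / amp_four.mean()
    ```

    Once the complex field is recovered, it can be numerically propagated to any other focus plane, which is what allows the full z-stack to be synthesized from just two captured images.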

  4. Phase retrieval for non-ideal in-line phase contrast x-ray imaging

    Science.gov (United States)

    Guo, Baikuan; Gao, Feng; Zhao, Huijuan; Zhang, Limin; Li, Jiao; Zhou, Zhongxing

    2017-02-01

    Phase contrast x-ray imaging techniques have shown the ability to overcome the low sensitivity of conventional x-ray imaging. Among them, in-line phase contrast imaging, with its simplicity of arrangement, is deemed a promising technique for clinical application. To obtain quantitative information from in-line phase contrast images, numerous phase-retrieval techniques have been developed. The theories behind these phase-retrieval techniques are mostly formulated on the basis of an ideal detector and a noise-free environment. In practice, however, both detector resolution and system noise affect the performance of these phase-retrieval methods. To assess the impact of these factors, we include in numerical simulations the effects of Gaussian-shaped detectors varying in full width at half maximum (FWHM) and of system noise at different levels. The performance of the phase-retrieval methods under such conditions is evaluated by the root mean square error. The results demonstrate that an increase in detector FWHM or noise level degrades the phase retrieval, especially for objects of small size.

  5. An image retrieval framework for real-time endoscopic image retargeting.

    Science.gov (United States)

    Ye, Menglong; Johns, Edward; Walter, Benjamin; Meining, Alexander; Yang, Guang-Zhong

    2017-08-01

    Serial endoscopic examinations of a patient are important for early diagnosis of malignancies in the gastrointestinal tract. However, retargeting for optical biopsy is challenging due to extensive tissue variations between examinations, requiring the method to be tolerant to these changes whilst enabling real-time retargeting. This work presents an image retrieval framework for inter-examination retargeting. We propose both a novel image descriptor tolerant of long-term tissue changes and a novel descriptor-matching method that runs in real time. The descriptor is based on histograms generated from regional intensity comparisons over multiple scales, offering stability over long-term appearance changes at the higher levels whilst remaining discriminative at the lower levels. The matching method then learns a hashing function using random forests to compress the descriptor into a short binary string, allowing fast image comparison by a simple Hamming distance metric. A dataset of 13 in vivo gastrointestinal videos was collected from six patients, representing serial examinations of each patient and including videos captured at significant time intervals. Precision-recall for retargeting shows that our new descriptor outperforms a number of alternative descriptors, whilst our hashing method outperforms a number of alternative hashing approaches. We have proposed a novel framework for optical biopsy in serial endoscopic examinations. A new descriptor, combined with a novel hashing method, achieves state-of-the-art retargeting, with validation on in vivo videos from six patients. Real-time performance also allows for practical integration without disturbing the existing clinical workflow.
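    The fast comparison step, Hamming distance between binary codes, can be sketched as follows (the random-forest hashing itself is not reproduced; the codes and names are illustrative):

    ```python
    def hamming(a: int, b: int) -> int:
        """Hamming distance between two binary codes stored as ints:
        XOR exposes the differing bits, then the set bits are counted."""
        return bin(a ^ b).count("1")

    def nearest(query: int, codes: list) -> int:
        """Index of the stored code closest to the query in Hamming distance."""
        return min(range(len(codes)), key=lambda i: hamming(query, codes[i]))

    # toy 8-bit codes for three stored endoscopic frames
    codes = [0b10110010, 0b10110011, 0b01001100]
    q = 0b10110110
    best = nearest(q, codes)   # frame 0 differs from the query by only 1 bit
    ```

    Because XOR and popcount are single machine instructions on packed words, this comparison scales to large frame databases in real time.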

  6. A novel content-based medical image retrieval method based on query topic dependent image features (QTDIF)

    Science.gov (United States)

    Xiong, Wei; Qiu, Bo; Tian, Qi; Mueller, Henning; Xu, Changsheng

    2005-04-01

    Medical image retrieval is still mainly a research domain, with a large variety of applications and techniques. With the ImageCLEF 2004 benchmark, an evaluation framework was created that includes a database, query topics and ground-truth data. Eleven systems (with a total of more than 50 runs) compared their performance in various configurations. The results show that no single feature performs well on all query tasks. Key to successful retrieval is rather the selection of features and feature weights based on a specific set of input features, and thus on the query task. In this paper we propose a novel method based on query topic dependent image features (QTDIF) for content-based medical image retrieval. These feature sets are designed to capture both inter-category and intra-category statistical variations to achieve good retrieval performance in terms of recall and precision. We have used Gaussian Mixture Models (GMMs) and a blob representation to model medical images and construct the proposed QTDIF for CBIR. Finally, trained multi-class support vector machines (SVMs) are used for image-similarity ranking. The proposed methods have been tested on the Casimage database of around 9000 images, for the 26 image topics used in ImageCLEF 2004. The retrieval performance has been compared with the medGIFT system, which is based on the GNU Image Finding Tool (GIFT). The experimental results show that the proposed QTDIF-based CBIR provides significantly better performance than systems based on general features only.

  7. Medical Image Retrieval with Compact Binary Codes Generated in Frequency Domain Using Highly Reactive Convolutional Features.

    Science.gov (United States)

    Ahmad, Jamil; Muhammad, Khan; Baik, Sung Wook

    2017-12-19

    Efficient retrieval of relevant medical cases using semantically similar medical images from large scale repositories can assist medical experts in timely decision making and diagnosis. However, the ever-increasing volume of images hinders the performance of image retrieval systems. Recently, features from deep convolutional neural networks (CNN) have yielded state-of-the-art performance in image retrieval. Further, locality sensitive hashing based approaches have become popular for their ability to allow efficient retrieval in large scale datasets. In this paper, we present a highly efficient method to compress selective convolutional features into a sequence of bits using the Fast Fourier Transform (FFT). Firstly, highly reactive convolutional feature maps from a pre-trained CNN are identified for medical images based on their neuronal responses using an optimal subset selection algorithm. Then, layer-wise global mean activations of the selected feature maps are transformed into compact binary codes by binarizing their Fourier spectrum. The acquired hash codes are highly discriminative and can be obtained efficiently from the original feature vectors without any training. The proposed framework has been evaluated on two large datasets of radiology and endoscopy images. Experimental evaluations reveal that the proposed method significantly outperforms other feature extraction and hashing schemes in both effectiveness and efficiency.
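
    The spectrum-binarization step can be sketched as below. Binarizing by the sign of the real part of the leading FFT coefficients is an assumption made for illustration, not necessarily the paper's exact rule:

```python
import numpy as np

def fft_binary_code(mean_activations, n_bits=32):
    """Compress a vector of layer-wise global mean activations into a
    compact binary code by binarizing its Fourier spectrum.

    The sign-of-real-part rule over the first n_bits coefficients is an
    illustrative choice; no training is needed in either case.
    """
    spectrum = np.fft.fft(np.asarray(mean_activations, dtype=float))
    return (spectrum.real[:n_bits] > 0).astype(np.uint8)

# A 256-D activation vector compressed to a 64-bit code.
feats = np.random.default_rng(0).normal(size=256)
code = fft_binary_code(feats, n_bits=64)
```

The appeal of such a scheme is exactly what the abstract claims: the code is computed directly from the feature vector, so no hash function has to be learned.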

  8. Effects of Diacritics on Web Search Engines’ Performance for Retrieval of Yoruba Documents

    Directory of Open Access Journals (Sweden)

    Toluwase Victor Asubiaro

    2014-06-01

    Full Text Available This paper aims to find out the possible effect of the use or nonuse of diacritics in Yoruba search queries on the performance of major search engines, AOL, Bing, Google and Yahoo!, in retrieving documents. 30 Yoruba queries created from the most searched keywords from Nigeria on Google search logs were submitted to the search engines. The search queries were posed to the search engines without diacritics and then with diacritics. All of the search engines retrieved more sites in response to the queries without diacritics. Also, they all retrieved more precise results for queries without diacritics. The search engines also answered more queries without diacritics. There was no significant difference in the precision values of any two of the four search engines for diacritized and undiacritized queries. There was a significant difference in the effectiveness of AOL and Yahoo when diacritics were applied and when they were not applied. The findings of the study indicate that the search engines do not find a relationship between the diacritized Yoruba words and the undiacritized versions. Therefore, there is a need for search engines to add normalization steps to pre-process Yoruba queries and indexes. This study concentrates on a problem with search engines that has not been previously investigated.
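
    The normalization step recommended in this study, mapping diacritized and undiacritized spellings to one index form, can be sketched with Unicode decomposition (a generic approach; a production Yoruba normalizer would need language-specific handling):

```python
import unicodedata

def strip_diacritics(text):
    """Map diacritized and undiacritized spellings to one index form by
    decomposing to NFD and dropping combining marks.

    Note this also removes Yoruba under-dots, which distinguish letters;
    for the query/index normalization suggested here, that conflation is
    exactly the point.
    """
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed
                       if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

# e.g. the diacritized spelling "\u1ecdj\u1ecd\u0301" (o-with-under-dot,
# j, o-with-under-dot-and-acute) normalizes to plain "ojo".
```

Applying the same function to both queries and indexed documents would make the diacritized and undiacritized variants retrieve the same result set.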

  9. Inter-Comparison of GOES-8 Imager and Sounder Skin Temperature Retrievals

    Science.gov (United States)

    Haines, Stephanie L.; Suggs, Ronnie J.; Jedlovec, Gary J.; Arnold, James E. (Technical Monitor)

    2001-01-01

    Skin temperature (ST) retrievals derived from geostationary satellite observations have both high temporal and spatial resolutions and are therefore useful for applications such as assimilation into mesoscale forecast models, nowcasting, and diagnostic studies. Our retrieval method uses a Physical Split Window technique requiring at least two channels within the longwave infrared window. On current GOES satellites, including GOES-11, there are two Imager channels within the required spectral interval. However, beginning with the GOES-M satellite the 12-μm channel will be removed, leaving only one longwave channel. The Sounder instrument will continue to have three channels within the longwave window, and therefore ST retrievals will be derived from Sounder measurements. This research compares retrievals from the two instruments and evaluates the effects of the spatial resolution and sensor calibration differences on the retrievals. Both Imager and Sounder retrievals are compared to ground-truth data to evaluate the overall accuracy of the technique. An analysis of GOES-8 and GOES-11 intercomparisons is also presented.
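
    A split-window retrieval combines the two longwave channels roughly as below. The coefficients used are illustrative placeholders; the physical technique described here derives them from radiative transfer calculations and first-guess atmospheric profiles:

```python
import numpy as np

def split_window_st(t11, t12, a=2.63, b=1.27):
    """Illustrative split-window skin temperature estimate.

    t11, t12: brightness temperatures (K) near 11 um and 12 um.
    The 11-12 um difference grows with atmospheric water vapor, so the
    a*(t11 - t12) term corrects for atmospheric attenuation; a and b
    here are placeholder values, not the paper's coefficients.
    """
    t11 = np.asarray(t11, dtype=float)
    t12 = np.asarray(t12, dtype=float)
    return t11 + a * (t11 - t12) + b

# Two pixels: a moister column (larger channel difference) gets a
# larger upward correction toward the true skin temperature.
st = split_window_st([295.0, 298.0], [293.5, 297.2])
```

This also makes clear why losing the 12-μm Imager channel breaks the technique: with a single longwave channel the differential term cannot be formed.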

  10. An Empirical Comparison of Visualization Tools To Assist Information Retrieval on the Web.

    Science.gov (United States)

    Heo, Misook; Hirtle, Stephen C.

    2001-01-01

    Discusses problems with navigation in hypertext systems, including cognitive overload, and describes a study that tested information visualization techniques to see which best represented the underlying structure of Web space. Considers the effects of visualization techniques on user performance on information searching tasks and the effects of…

  11. Towards Distributed Information Retrieval in the Semantic Web: Query Reformulation Using the oMAP Framework

    NARCIS (Netherlands)

    U. Straccia; R. Troncy (Raphael)

    2006-01-01

    textabstractThis paper introduces a general methodology for performing distributed search in the Semantic Web. We propose to define this task as a three steps process, namely resource selection, query reformulation/ontology alignment and rank aggregation/data fusion. For the second problem, we have

  12. Exploring topic-based language models for effective web information retrieval

    NARCIS (Netherlands)

    Li, R.; Kaptein, R.; Hiemstra, D.; Kamps, J.

    2008-01-01

    The main obstacle for providing focused search is the relative opaqueness of search requests—searchers tend to express their complex information needs in only a couple of keywords. Our overall aim is to find out if, and how, topic-based language models can lead to more effective web information

  13. Exploring Topic-based Language Models for Effective Web Information Retrieval

    NARCIS (Netherlands)

    Li, R.; Kaptein, Rianne; Hiemstra, Djoerd; Kamps, Jaap; Hoenkamp, E.; De Cock, M.; Hoste, V.

    2008-01-01

    The main obstacle for providing focused search is the relative opaqueness of search request -- searchers tend to express their complex information needs in only a couple of keywords. Our overall aim is to find out if, and how, topic-based language models can lead to more effective web information

  14. SWHi system description : A case study in information retrieval, inference, and visualization in the Semantic Web

    NARCIS (Netherlands)

    Fahmi, Ismail; Zhang, Junte; Ellermann, Henk; Bouma, Gosse; Franconi, E; Kifer, M; May, W

    2007-01-01

    Search engines have become the most popular tools for finding information on the Internet. A real-world Semantic Web application can benefit from this by combining its features with some features from search engines. In this paper, we describe methods for indexing and searching a populated ontology

  15. Hierarchical content-based image retrieval by dynamic indexing and guided search

    Science.gov (United States)

    You, Jane; Cheung, King H.; Liu, James; Guo, Linong

    2003-12-01

    This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.

  16. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on the ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
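
    The reconstruction-weight idea can be sketched as follows, using an unconstrained least-squares solve with clipping and renormalization in place of the paper's constrained QP (a simplification):

```python
import numpy as np

def reconstruction_weights(descriptor, vocab, k=5):
    """Soft-assign a local descriptor to its k nearest visual words via
    reconstruction weights (contribution functions).

    The paper solves a constrained QP; here an unconstrained
    least-squares solution is clipped to be non-negative and
    renormalized, which is only an approximation of that step.
    """
    d = np.linalg.norm(vocab - descriptor, axis=1)
    nn = np.argsort(d)[:k]                        # k nearest visual words
    w, *_ = np.linalg.lstsq(vocab[nn].T, descriptor, rcond=None)
    w = np.clip(w, 0.0, None)
    s = w.sum()
    return nn, (w / s if s > 0 else np.full(k, 1.0 / k))

# Toy vocabulary of 20 visual words in an 8-D descriptor space.
rng = np.random.default_rng(1)
vocab = rng.normal(size=(20, 8))
nn, w = reconstruction_weights(vocab[2], vocab, k=5)
# A descriptor identical to a visual word puts (almost) all weight on it,
# while intermediate descriptors spread weight over several words.
```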

  17. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    Science.gov (United States)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PRelu activation function is studied, which improves image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very suitable for processing images. Using a deep convolutional neural network is better than directly extracting image visual features for image retrieval. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PRelu activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
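
    The two ingredients named in the abstract can be sketched in isolation (numpy stand-ins for what would be an activation layer and a loss term inside the trained network):

```python
import numpy as np

def prelu(x, alpha=0.25):
    """PReLU: identity for positive inputs, slope alpha for negative
    inputs. In a real network alpha is a learnable parameter trained
    alongside the weights; 0.25 is a common initial value."""
    return np.where(x > 0, x, alpha * x)

def l1_penalty(weights, lam=1e-4):
    """L1 regularization term added to the training loss; it pushes
    small weights toward exactly zero, which counteracts over-fitting."""
    return lam * sum(np.abs(w).sum() for w in weights)

x = np.array([-2.0, -0.5, 0.0, 1.5])
y = prelu(x)  # negative inputs are scaled by alpha instead of zeroed
```

Unlike ReLU, PReLU keeps a small gradient for negative inputs, so units do not "die" during training; the L1 term is simply added to the data loss before backpropagation.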

  18. Aspect-based Relevance Learning for Image Retrieval

    NARCIS (Netherlands)

    M.J. Huiskes (Mark)

    2005-01-01

    We analyze the special structure of the relevance feedback learning problem, focusing particularly on the effects of image selection by partial relevance on the clustering behavior of feedback examples. We propose a scheme, aspect-based relevance learning, which guarantees that feedback

  19. Words Matter: Scene Text for Image Classification and Retrieval

    NARCIS (Netherlands)

    Karaoglu, S.; Tao, R.; Gevers, T.; Smeulders, A.W.M.

    Text in natural images typically adds meaning to an object or scene. In particular, text specifies which business places serve drinks (e.g., cafe, teahouse) or food (e.g., restaurant, pizzeria), and what kind of service is provided (e.g., massage, repair). The mere presence of text, its words, and

  20. High Resolution Imaging Using Phase Retrieval. Volume 2

    Science.gov (United States)

    1991-10-01


  1. Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed

    Science.gov (United States)

    Taylor, Jaime; Rakoczy, John; Steincamp, James

    2003-01-01

    Phase retrieval requires calculation of the real-valued phase of the pupil function from the image intensity distribution and characteristics of an optical system. Genetic algorithms (GAs) were used to solve two one-dimensional phase retrieval problems. A GA successfully estimated the coefficients of a polynomial expansion of the phase when the number of coefficients was correctly specified. A GA also successfully estimated the multiple phases of a segmented optical system analogous to the seven-mirror Systematic Image-Based Optical Alignment (SIBOA) testbed located at NASA's Marshall Space Flight Center. The SIBOA testbed was developed to investigate phase retrieval techniques. Tip/tilt and piston motions of the mirrors accomplish phase corrections. A constant phase over each mirror can be achieved by an independent tip/tilt correction: the phase correction term can then be factored out of the Discrete Fourier Transform (DFT), greatly reducing computations.

  2. LinkHub: a Semantic Web system that facilitates cross-database queries and information retrieval in proteomics

    Directory of Open Access Journals (Sweden)

    Cheung Kei-Hoi

    2007-05-01

    Full Text Available Abstract Background A key abstraction in representing proteomics knowledge is the notion of unique identifiers for individual entities (e.g. proteins) and the massive graph of relationships among them. These relationships are sometimes simple (e.g. synonyms) but are often more complex (e.g. one-to-many relationships in protein family membership). Results We have built a software system called LinkHub using Semantic Web RDF that manages the graph of identifier relationships and allows exploration with a variety of interfaces. For efficiency, we also provide relational-database access and translation between the relational and RDF versions. LinkHub is practically useful in creating small, local hubs on common topics and then connecting these to major portals in a federated architecture; we have used LinkHub to establish such a relationship between UniProt and the North East Structural Genomics Consortium. LinkHub also facilitates queries and access to information and documents related to identifiers spread across multiple databases, acting as "connecting glue" between different identifier spaces. We demonstrate this with example queries discovering "interologs" of yeast protein interactions in the worm and exploring the relationship between gene essentiality and pseudogene content. We also show how "protein family based" retrieval of documents can be achieved. LinkHub is available at hub.gersteinlab.org and hub.nesg.org with supplement, database models and full-source code. Conclusion LinkHub leverages Semantic Web standards-based integrated data to provide novel information retrieval to identifier-related documents through relational graph queries, simplifies and manages connections to major hubs such as UniProt, and provides useful interactive and query interfaces for exploring the integrated data.

  3. Neural mechanism of implicit and explicit memory retrieval: functional MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Heoung Keun; Jeong, Gwang Woo; Park, Tae Jin; Seo, Jeong Jin; Kim, Hyung Joong; Eun, Sung Jong; Chung, Tae Woong [Chonnam National University Medical School, Gwangju (Korea, Republic of)

    2003-03-01

    To identify, using functional MR imaging, distinct cerebral centers and to evaluate the neural mechanism associated with implicit and explicit retrieval of words during conceptual processing. Seven healthy volunteers aged 21-25 (mean, 22) years underwent BOLD-based fMR imaging using a 1.5T Signa Horizon EchoSpeed MR system. To activate the cerebral cortices, a series of tasks was performed as follows: the encoding of two-syllable words, and implicit and explicit retrieval of previously learned words during conceptual processing. The activation paradigm consisted of a cycle of alternating periods of 30 seconds of stimulation and 30 seconds of rest. Stimulation was accomplished by encoding eight two-syllable words and the retrieval of previously presented words, while the control condition was a white screen with a small fixed cross. During the tasks we acquired ten slices (6 mm slice thickness, 1 mm gap) parallel to the AC-PC line, and the resulting functional activation maps were reconstructed using a statistical parametric mapping program (SPM99). A comparison of activation ratios (percentages), based on the number of volunteers, showed that activation of Rhs-35, PoCiG-23 and ICiG-26·30 was associated with explicit retrieval only; other brain areas were activated during the performance of both implicit and explicit retrieval tasks. Activation ratios were higher for explicit tasks than for implicit; in the cingulate gyrus and temporal lobe they were 30% and 10% greater, respectively. During explicit retrieval, a distinct brain activation index (percentage) was seen in the temporal, parietal, and occipital lobe and cingulate gyrus, and PrCeG-4, Pr/PoCeG-43 in the frontal lobe. During implicit retrieval, on the other hand, activity was greater in the frontal lobe, including the areas of SCA-25, SFG/MFG-10, IFG-44·45, OrbG-11·47, SFG-6·8 and MFG-9·46. Overall, activation was lateralized mainly in the left

  4. Aspect-based Relevance Learning for Image Retrieval

    OpenAIRE

    Huiskes, Mark

    2005-01-01

    We analyze the special structure of the relevance feedback learning problem, focusing particularly on the effects of image selection by partial relevance on the clustering behavior of feedback examples. We propose a scheme, aspect-based relevance learning, which guarantees that feedback on feature values is accepted only once evidential support that the feedback was intended by the user is sufficiently strong. The scheme additionally allows for natural simulation of the relevance ...

  5. Content-Based Image Retrieval Method using the Relative Location of Multiple ROIs

    Directory of Open Access Journals (Sweden)

    LEE, J.

    2011-08-01

    Full Text Available Recently, methods of specifying multiple regions of interest (ROI) for image retrieval have been suggested. However, they measure the similarity of images without proper consideration of the spatial layout of the ROIs and thus fail to accurately reflect the intent of the user. In this paper, we propose a new similarity measurement using the relative layout of the ROIs. The proposed method divides images into blocks of a certain size and extracts MPEG-7 dominant colors from the blocks overlapping the user-designated ROIs to measure their similarity to the target images. Similarity is weighted when the relative location of the ROIs in the query image and the target image is the same. The relative location is calculated along four directions (i.e. up, down, left and right) from the basis ROI. An experiment using MPEG-7 XM shows that the proposed method performs better than global image retrieval or retrieval that does not consider the relative location of the ROIs.
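
    A minimal sketch of the relative-location weighting, assuming ROIs are summarized by their block centres in image coordinates (y grows downward) and that ties between axes resolve horizontally — details the abstract does not specify:

```python
def relative_direction(base, other):
    """Coarse relative location (up/down/left/right) of an ROI with
    respect to the basis ROI; ROIs are (cx, cy) block centres.
    Axis ties resolve to the horizontal direction (an assumption)."""
    dx = other[0] - base[0]
    dy = other[1] - base[1]
    if abs(dx) >= abs(dy):
        return "right" if dx >= 0 else "left"
    return "down" if dy > 0 else "up"  # y grows downward in images

def weighted_similarity(color_sim, query_rois, target_rois, bonus=1.2):
    """Boost the color similarity when every ROI keeps the same relative
    direction from the basis ROI in both images (bonus is illustrative)."""
    same = all(relative_direction(query_rois[0], q) ==
               relative_direction(target_rois[0], t)
               for q, t in zip(query_rois[1:], target_rois[1:]))
    return color_sim * (bonus if same else 1.0)

# Two ROIs that lie to the right of the basis ROI in both images keep
# the same layout, so the color similarity is boosted.
sim = weighted_similarity(0.5, [(0, 0), (4, 0)], [(1, 1), (9, 2)])
```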

  6. Supporting Keyword Search for Image Retrieval with Integration of Probabilistic Annotation

    Directory of Open Access Journals (Sweden)

    Tie Hua Zhou

    2015-05-01

    Full Text Available The ever-increasing quantities of digital photo resources are annotated with enriching vocabularies to form semantic annotations. Photo-sharing social networks have boosted the need for efficient and intuitive querying to respond to user requirements in large-scale image collections. In order to help users formulate efficient and effective image retrieval, we present a novel integration of a probabilistic model based on a keyword query architecture that models the probability distribution of image annotations, allowing users to obtain satisfactory results from image retrieval via the integration of multiple annotations. We focus on the annotation integration step in order to specify the meaning of each image annotation, thus leading to the most representative annotations of the intent of a keyword search. For this demonstration, we show how a probabilistic model has been integrated with semantic annotations to allow users to intuitively define explicit and precise keyword queries in order to retrieve satisfactory image results distributed in heterogeneous large data sources. Our experiments on the SBU database (collected by Stony Brook University) show that (i) our integrated annotation contains higher quality representatives and semantic matches; and (ii) annotation integration can indeed improve image search result quality.

  7. Improving Web image search by bag-based reranking.

    Science.gov (United States)

    Duan, Lixin; Li, Wen; Tsang, Ivor Wai-Hung; Xu, Dong

    2011-11-01

    Given a textual query in traditional text-based image retrieval (TBIR), relevant images are to be reranked using visual features after the initial text-based search. In this paper, we propose a new bag-based reranking framework for large-scale TBIR. Specifically, we first cluster relevant images using both textual and visual features. By treating each cluster as a "bag" and the images in the bag as "instances," we formulate this problem as a multi-instance (MI) learning problem. MI learning methods such as mi-SVM can be readily incorporated into our bag-based reranking framework. Observing that at least a certain portion of a positive bag is of positive instances while a negative bag might also contain positive instances, we further use a more suitable generalized MI (GMI) setting for this application. To address the ambiguities on the instance labels in the positive and negative bags under this GMI setting, we develop a new method referred to as GMI-SVM to enhance retrieval performance by propagating the labels from the bag level to the instance level. To acquire bag annotations for (G)MI learning, we propose a bag ranking method to rank all the bags according to the defined bag ranking score. The top ranked bags are used as pseudopositive training bags, while pseudonegative training bags can be obtained by randomly sampling a few irrelevant images that are not associated with the textual query. Comprehensive experiments on the challenging real-world data set NUS-WIDE demonstrate our framework with automatic bag annotation can achieve the best performances compared with existing image reranking methods. Our experiments also demonstrate that GMI-SVM can achieve better performances when using the manually labeled training bags obtained from relevance feedback.

  8. The development of a human-centered object based image retrieval engine

    NARCIS (Netherlands)

    van Rikxoort, Eva M.; Kröse, B.J.A.; van den Broek, Egon; Bos, H.J.; Hendriks, E.A.; Schouten, Theo E.; Heijnsdijk, J.W.J.

    2005-01-01

    The development of a new object-based image retrieval (OBIR) engine is discussed. Its goal was to yield intuitive results for users by using human-based techniques. The engine utilizes a unique and efficient set of 15 features: 11 color categories and 4 texture features, derived from the color

  9. A picture is worth a thousand words : content-based image retrieval techniques

    NARCIS (Netherlands)

    Thomée, Bart

    2010-01-01

    In my dissertation I investigate techniques for improving the state of the art in content-based image retrieval. To place my work into context, I highlight the current trends and challenges in my field by analyzing over 200 recent articles. Next, I propose a novel paradigm called ‘artificial

  10. The utilization of human color categorization for content-based image retrieval

    NARCIS (Netherlands)

    van den Broek, Egon; Rogowitz, Bernice E.; Kisters, Peter M.F.; Pappas, Thrasyvoulos N.; Vuurpijl, Louis G.

    2004-01-01

    We present the concept of intelligent Content-Based Image Retrieval (iCBIR), which incorporates knowledge concerning human cognition in system development. The present research focuses on the utilization of color categories (or focal colors) for CBIR purposes, in particularly considered to be useful

  11. Content-Based Image Retrieval Benchmarking: Utilizing color categories and color distributions

    NARCIS (Netherlands)

    van den Broek, Egon; Kisters, Peter M.F.; Vuurpijl, Louis G.

    From a human centered perspective three ingredients for Content-Based Image Retrieval (CBIR) were developed. First, with their existence confirmed by experimental data, 11 color categories were utilized for CBIR and used as input for a new color space segmentation technique. The complete HSI color

  12. Research on Techniques of Multifeatures Extraction for Tongue Image and Its Application in Retrieval

    Directory of Open Access Journals (Sweden)

    Liyan Chen

    2017-01-01

    Full Text Available Tongue diagnosis is one of the important methods in traditional Chinese medicine. Doctors can judge the state of a disease by observing the patient's tongue color and texture. This paper presents a novel approach to extract color and texture features of tongue images. First, we use an improved GLA (Generalized Lloyd Algorithm) to extract the main color of the tongue image. Considering that the color feature cannot fully express tongue image information, the paper analyzes the texture features of the tongue edge and proposes an algorithm to extract them. Then, we integrate the two features in retrieval with different weights. Experimental results show that the proposed method can improve the lesion detection rate in tongue images relative to single-feature retrieval.

  13. Supporting Keyword Search for Image Retrieval with Integration of Probabilistic Annotation

    OpenAIRE

    Tie Hua Zhou; Ling Wang; Keun Ho Ryu

    2015-01-01

    The ever-increasing quantities of digital photo resources are annotated with enriching vocabularies to form semantic annotations. Photo-sharing social networks have boosted the need for efficient and intuitive querying to respond to user requirements in large-scale image collections. In order to help users formulate efficient and effective image retrieval, we present a novel integration of a probabilistic model based on keyword query architecture that models the probability distribution of i...

  14. Double color image encryption using iterative phase retrieval algorithm in quaternion gyrator domain.

    Science.gov (United States)

    Shao, Zhuhong; Shu, Huazhong; Wu, Jiasong; Dong, Zhifang; Coatrieux, Gouenou; Coatrieux, Jean Louis

    2014-03-10

    This paper describes a novel algorithm to encrypt double color images into a single undistinguishable image in the quaternion gyrator domain. The phase masks used for encryption are obtained using an iterative phase retrieval algorithm. Subsequently, the encrypted image is generated via cascaded quaternion gyrator transforms with different rotation angles. The parameters of the quaternion gyrator transforms and the phases serve as encryption keys. By knowing these keys, the original color images can be fully restored. Numerical simulations have demonstrated the validity of the proposed encryption system as well as its robustness against loss of data and additive Gaussian noise.
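
    The iterative phase retrieval step can be illustrated with a plain Gerchberg-Saxton loop in the ordinary Fourier domain; the paper works in the quaternion gyrator domain, so this is only a structural sketch of the iteration:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=50, seed=0):
    """Recover a phase mask that maps a known source amplitude toward a
    desired Fourier-plane amplitude by alternating amplitude constraints.

    Plain-FFT stand-in for the quaternion gyrator transforms used in the
    paper; only the iterative structure is illustrated.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)
    for _ in range(n_iter):
        field = source_amp * np.exp(1j * phase)
        spectrum = np.fft.fft2(field)
        # Keep the Fourier phase, impose the target amplitude.
        spectrum = target_amp * np.exp(1j * np.angle(spectrum))
        back = np.fft.ifft2(spectrum)
        phase = np.angle(back)  # keep phase, re-impose source amplitude
    return phase

src = np.ones((16, 16))
tgt = np.abs(np.fft.fft2(np.random.default_rng(2).normal(size=(16, 16))))
mask = gerchberg_saxton(src, tgt, n_iter=20)
# mask holds the recovered phase in radians, one value per pixel.
```

The recovered mask plays the role of an encryption key: without it, the transformed image amplitude alone does not determine the original field.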

  15. Optimal query-based relevance feedback in medical image retrieval using score fusion-based classification.

    Science.gov (United States)

    Behnam, Mohammad; Pourghassem, Hossein

    2015-04-01

    In this paper, a new content-based medical image retrieval (CBMIR) framework using an effective classification method and a novel relevance feedback (RF) approach are proposed. For a large-scale database with a diverse collection of different modalities, query image classification is inevitable: it reduces the computational complexity, and it increases the influence of data fusion by removing unimportant data and focusing on the more valuable information. Hence, we find the probability distribution of classes in the database using a Gaussian mixture model (GMM) for each feature descriptor, and then, using the fusion of the scores obtained from the dependency probabilities, the most relevant clusters are identified for a given query. Afterwards, the visual similarity of the query image and the images in the relevant clusters is calculated. This method is performed separately on all feature descriptors, and then the results are fused together using a feature similarity ranking level fusion algorithm. At the RF level, we propose a new approach to find the optimal queries based on relevant images. The main idea is based on density function estimation of positive images and a strategy of moving toward the aggregation of the estimated density function. The proposed framework has been evaluated on the ImageCLEF 2005 database consisting of 10,000 medical X-ray images of 57 semantic classes. The experimental results show that, compared with existing CBMIR systems, our framework obtains acceptable performance both in image classification and in image retrieval by RF.
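
    The score-fusion classification can be sketched as below, with a single diagonal Gaussian per class and descriptor standing in for the paper's GMMs, and log-score summation as a simple fusion rule (both are simplifying assumptions):

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    """Diagonal Gaussian log-density; a single component stands in for
    the per-descriptor GMMs used in the paper."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def fused_class_scores(query_feats, class_models):
    """Fuse per-descriptor class scores by summing log-likelihoods
    (product-rule fusion; the paper's fusion strategy is more elaborate).

    query_feats:  dict descriptor-name -> feature vector.
    class_models: dict class -> dict descriptor-name -> (mean, var).
    Returns class names sorted from most to least relevant.
    """
    scores = {cls: sum(gaussian_logpdf(query_feats[name], m, v)
                       for name, (m, v) in models.items())
              for cls, models in class_models.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy database with two modality classes and one "color" descriptor.
models = {
    "xray": {"color": (np.array([0.0]), np.array([1.0]))},
    "ct":   {"color": (np.array([5.0]), np.array([1.0]))},
}
ranked = fused_class_scores({"color": np.array([4.8])}, models)
# The query feature lies near the "ct" class model, so "ct" ranks first.
```

Only the images in the top-ranked clusters would then be compared visually to the query, which is the complexity reduction the abstract argues for.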

  16. An efficient similarity measure for content based image retrieval using memetic algorithm

    Directory of Open Access Journals (Sweden)

    Mutasem K. Alsmadi

    2017-06-01

    Full Text Available Content-based image retrieval (CBIR) systems work by retrieving images which are related to the query image (QI) from huge databases. The available CBIR systems extract limited feature sets, which confines their retrieval efficacy. In this work, an extensive set of robust and important features was extracted from the images of the database and then stored in a feature repository. This feature set is composed of a color signature together with shape and color texture features. Features are extracted from a given QI in the same fashion. Consequently, a novel similarity evaluation using a meta-heuristic algorithm called a memetic algorithm (genetic algorithm with great deluge) is performed between the features of the QI and the features of the database images. Our proposed CBIR system is assessed by querying a number of images (from the test dataset) and the efficiency of the system is evaluated by calculating precision-recall values for the results. The results were superior to other state-of-the-art CBIR systems in regard to precision.

  17. Parsed and fixed block representations of visual information for image retrieval

    Science.gov (United States)

    Bae, Soo Hyun; Juang, Biing-Hwang

    2009-02-01

    The theory of linguistics teaches us the existence of a hierarchical structure in linguistic expressions, from letters to word roots, and on to words and sentences. By applying syntax and semantics beyond words, one can further recognize the grammatical relationships among words and the meaning of a sequence of words. This layered view of a spoken language is useful for effective analysis and automated processing. Thus, it is interesting to ask whether a similar hierarchy of representation exists for visual information. A class of techniques similar in nature to linguistic parsing is found in the Lempel-Ziv incremental parsing scheme. Based on a new class of multidimensional incremental parsing algorithms extended from Lempel-Ziv incremental parsing, a framework for image retrieval that takes advantage of the source-characterization property of the incremental parsing algorithm was recently proposed. With the incremental parsing technique, a given image is decomposed into a number of patches, called a parsed representation. This representation can be thought of as a morphological interface between elementary pixels and a higher-level representation. In this work, we examine the properties of the two-dimensional parsed representation in the context of imagery information retrieval and in contrast to vector quantization, i.e., fixed square-block representations with a minimum-average-distortion criterion. We implemented four image retrieval systems for the comparative study; three, called IPSILON image retrieval systems, use the parsed representation with different perceptual distortion thresholds, and one uses conventional vector quantization for visual pattern analysis. We observe that different perceptual distortion thresholds in visual pattern matching do not have serious effects on retrieval precision, although looser perceptual thresholds in image compression result in poor reconstruction fidelity. We compare the effectiveness of the use of the
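A one-dimensional LZ78-style incremental parse can be sketched as follows; the paper's method is a multidimensional extension of this idea, so the snippet is only an illustrative toy (the function name and example string are ours, not the authors'):

```python
def lz78_parse(seq):
    """Incrementally parse `seq` into phrases, LZ78 style.

    Each phrase is the shortest prefix of the remaining input that has
    not been seen before; the phrase dictionary grows as parsing
    proceeds, so the parse adapts to (characterizes) the source.
    """
    phrases = {}   # phrase -> index of first occurrence
    parse = []
    current = ""
    for symbol in seq:
        current += symbol
        if current not in phrases:
            phrases[current] = len(phrases) + 1
            parse.append(current)
            current = ""
    if current:                  # trailing phrase already in dictionary
        parse.append(current)
    return parse

print(lz78_parse("aaababbbaaabaa"))
# → ['a', 'aa', 'b', 'ab', 'bb', 'aaa', 'ba', 'a']
```

In the two-dimensional analogue, phrases become image patches and the parse yields the patch-based representation described in the abstract.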

  18. OpenMSI: a high-performance web-based platform for mass spectrometry imaging.

    Science.gov (United States)

    Rübel, Oliver; Greiner, Annette; Cholia, Shreyas; Louie, Katherine; Bethel, E Wes; Northen, Trent R; Bowen, Benjamin P

    2013-11-05

    Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
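Why a chunked layout accelerates both typical MSI access patterns (one pixel's full spectrum, or one m/z slice across all pixels) can be illustrated by counting the chunks a selection touches. The cube dimensions and chunk shapes below are illustrative toys, not OpenMSI's actual parameters:

```python
def chunks_touched(chunk_shape, selection):
    """Count how many axis-aligned chunks a hyperslab selection touches.

    `selection` is one (start, stop) pair per axis.  Each touched chunk
    is one unit of I/O in an HDF5-style chunked layout, so fewer
    touched chunks means faster access.
    """
    n = 1
    for (lo, hi), c in zip(selection, chunk_shape):
        n *= (hi - 1) // c - lo // c + 1
    return n

# Toy 100 x 100 x 10000 MSI cube (x, y, m/z), two typical accesses:
spectrum = ((7, 8), (7, 8), (0, 10000))   # full spectrum at one pixel
image = ((0, 100), (0, 100), (42, 43))    # full image at one m/z bin

# A layout chunked only along m/z favours spectra but ruins image access:
print(chunks_touched((1, 1, 10000), spectrum))  # 1
print(chunks_touched((1, 1, 10000), image))     # 10000
# A balanced chunk shape serves both patterns reasonably well:
print(chunks_touched((10, 10, 1000), spectrum)) # 10
print(chunks_touched((10, 10, 1000), image))    # 100
```

Data replication, as the abstract notes, goes further by storing more than one layout so each access pattern can read from its best copy.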

  19. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas; Louie, Katherine; Bethel, E. Wes; Northen, Trent R.; Bowen, Benjamin P.

    2013-10-02

    Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.

  20. Benchmarking, Research, Development, and Support for ORNL Automated Image and Signature Retrieval (AIR/ASR) Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, K.W.

    2004-06-01

    This report describes the results of a Cooperative Research and Development Agreement (CRADA) with Applied Materials, Inc. (AMAT) of Santa Clara, California. This project encompassed the continued development and integration of the ORNL Automated Image Retrieval (AIR) technology, an extension of that technology denoted Automated Signature Retrieval (ASR), and other related technologies into the Defect Source Identification (DSI) software system that was under development by AMAT at the time this work was performed. In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train yield management engineers, and examine historical data for trends. Image management in semiconductor data systems is a growing cause of concern in the industry, as fabricators now collect up to 20,000 images each week. In response to this concern, researchers at the Oak Ridge National Laboratory (ORNL) developed a semiconductor-specific content-based image retrieval method and system, also known as AIR. The system uses an image-based query-by-example method to locate and retrieve similar imagery from a database of digital imagery using visual image characteristics. The query method is based on a unique architecture that takes advantage of the statistical, morphological, and structural characteristics of image data generated by inspection equipment in industrial applications. The system improves the manufacturing process by allowing rapid access to historical records of similar events, so that errant process equipment can be isolated and corrective actions can be quickly taken to improve yield. The combined ORNL and AMAT technology is referred to hereafter as DSI-AIR and DSI-ASR.

  1. A Flexible Phase Retrieval Framework for Flux-limited Coherent X-Ray Imaging

    CERN Document Server

    Shi, Liang; Lane, Thomas J

    2016-01-01

    Coherent X-ray diffraction imaging~(CXDI) experiments are intrinsically limited by shot noise, a lack of prior knowledge about the sample's support, and missing measurements due to the experimental geometry. We propose a flexible, iterative phase retrieval framework that allows for accurate modeling of Gaussian or Poissonian noise statistics, modified support updates, regularization of reconstructed signals, and handling of missing data in the observations. The proposed method is efficiently solved using alternating direction method of multipliers~(ADMM) and is demonstrated to consistently outperform state-of-the-art algorithms for low-photon phase retrieval from CXDI experiments, both for simulated diffraction patterns and for experimental measurements.

  2. Combining semantic technologies with a content-based image retrieval system - Preliminary considerations

    Science.gov (United States)

    Chmiel, P.; Ganzha, M.; Jaworska, T.; Paprzycki, M.

    2017-10-01

    Nowadays, as part of the systematic growth in the volume and variety of information that can be found on the Internet, we also observe a dramatic increase in the size of available image collections. There are many ways to help users browse and select images of interest. One popular approach is Content-Based Image Retrieval (CBIR) systems, which allow users to search for images that match their interests, expressed in the form of images (query by example). However, we believe that image search and retrieval could take advantage of semantic technologies. We have decided to test this hypothesis. Specifically, on the basis of knowledge captured in the CBIR, we have developed a domain ontology of residential real estate (detached houses, in particular). This allows us to semantically represent each image (and its constitutive architectural elements) represented within the CBIR. The proposed ontology was extended to capture not only the elements resulting from image segmentation, but also "spatial relations" between them. As a result, a new approach to querying the image database (semantic querying) has emerged, thus extending the capabilities of the developed system.

  3. Youpi: A Web-based Astronomical Image Processing Pipeline

    Science.gov (United States)

    Monnerville, M.; Sémah, G.

    2010-12-01

    Youpi stands for “YOUpi is your processing PIpeline”. It is a portable, easy-to-use web application providing high-level functionality for performing data reduction on scientific FITS images. It is built on top of open source processing tools released to the community by Terapix, in order to organize your data on a computer cluster, to manage your processing jobs in real time, and to facilitate teamwork by allowing fine-grained sharing of results and data. On the server side, Youpi is written in the Python programming language and uses the Django web framework. On the client side, Ajax techniques are used along with the Prototype and script.aculo.us Javascript libraries.

  4. Web-Based Image Viewer for Monitoring High-Definition Agricultural Images

    Science.gov (United States)

    Kobayashi, Kazuki; Toda, Shohei; Kobayashi, Fumitoshi; Saito, Yasunori

    This paper describes a Web-based image viewer which was developed to monitor high-definition agricultural images. In the cultivation of crops, physiological data and environmental data are important to increase crop yields. However, it is a burden for farmers to collect such data. Against this backdrop, the authors developed a monitoring system to automatically collect high-definition crop images, which can be viewed on a specialized Web-based image viewer. Users can easily observe detailed crop images over the Internet and easily find differences among the images. The authors experimentally installed the monitoring system in an apple orchard and observed the apples growing there. The system has been operating since August 11, 2009. In this paper, we confirm the ability of the monitoring system to perform detailed observations, including tracing the progress of a disease that affects the growth of an apple.

  5. A SIMPLE BUT EFFICIENT SCHEME FOR COLOUR IMAGE RETRIEVAL BASED ON STATISTICAL TESTS OF HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2011-02-01

    This paper proposes a simple but efficient scheme for colour image retrieval based on statistical tests of hypothesis, namely a test for equality of variances and a test for equality of means. The test for equality of variances is performed to test the similarity of the query and target images. If the images pass the test, the test for equality of means is performed on the same images to examine whether the two images share the same attributes / characteristics. If the query and target images pass both tests, it is inferred that the two images belong to the same class, i.e. the images are the same; otherwise, it is assumed that the images belong to different classes, i.e. the images are different. The obtained test statistic values are indexed in ascending order, and the image corresponding to the smallest value is identified as the same / most similar image. The proposed system is invariant to translation, scaling, and rotation, since it adjusts itself and treats either the query image or the target image as a sample of the other. The proposed scheme provides 100% accuracy if the query and target images are the same, whereas there is a slight variation for similar, transformed.
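A minimal sketch of such a two-stage test on pixel intensities might look as follows. The tolerances are illustrative stand-ins for the paper's critical values, and the function name is ours:

```python
import numpy as np

def two_stage_match(query, target, f_tol=1.5, t_tol=2.0):
    """Two-stage similarity test: compare variances first (F-ratio),
    and only if that passes, compare means (two-sample t statistic).

    `f_tol` and `t_tol` are illustrative acceptance bounds, not the
    critical values tabulated in the paper.
    """
    q = np.asarray(query, dtype=float).ravel()
    t = np.asarray(target, dtype=float).ravel()
    f = q.var(ddof=1) / t.var(ddof=1)
    if not (1 / f_tol <= f <= f_tol):        # variances differ: reject
        return False
    se = np.sqrt(q.var(ddof=1) / q.size + t.var(ddof=1) / t.size)
    t_stat = abs(q.mean() - t.mean()) / se
    return t_stat <= t_tol

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))
shifted = np.roll(img, 5, axis=1)        # translated copy: same statistics
print(two_stage_match(img, shifted))     # True
print(two_stage_match(img, img + 120))   # False (means differ)
```

Because a translated, scaled, or rotated copy leaves the intensity statistics essentially unchanged, this kind of test is naturally invariant to those transformations, as the abstract claims.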

  6. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information.

    CERN Document Server

    Harwood, A; The ATLAS collaboration; Magnoni, L; Vandelli, W; Savu, D

    2011-01-01

    This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks, that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user defined criteria. Finally, ...

  7. ADAM Project – A generic web interface for retrieval and display of ATLAS TDAQ information.

    CERN Document Server

    Harwood, A; The ATLAS collaboration; Lehmann Miotto, G

    2011-01-01

    This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks, which are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, there is currently no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple diversely structured providers. It is capable of aggregating and correlating the data according to user defined criteria. Finally it v...

  8. BAGET: a web server for the effortless retrieval of prokaryotic gene context and sequence.

    Science.gov (United States)

    Oberto, Jacques

    2008-02-01

    BAGET (Bacterial and Archaeal Gene Exploration Tool) is a web service designed to facilitate extraction, by molecular geneticists and phylogeneticists, of specific gene and protein sequences from completely determined prokaryotic genomes. Upon selection of a particular prokaryotic organism and gene, two levels of visual gene context information are provided on a single dynamic page: (i) a graphical representation of a user defined portion of the chromosome centered on the gene of interest and (ii) the DNA sequence of the query gene, of the immediate neighboring genes and of the intergenic regions, each identified by a consistent color code. The amino acid sequence is provided for protein-coding query genes. Query results can be exported as a rich text format (RTF) word processor file for printing, archiving or further analysis. http://archaea.u-psud.fr/bin/baget.dll.

  9. Web application for recording learners’ mouse trajectories and retrieving their study logs for data analysis

    Directory of Open Access Journals (Sweden)

    Yoshinori Miyazaki

    2012-03-01

    With the accelerated implementation of e-learning systems in educational institutions, it has become possible in recent years to record learners’ study logs. It must be admitted, however, that little research has been conducted on the analysis of the study logs that are obtained. In addition, there is no software that traces the mouse movements of learners during their learning processes, which the authors believe would enable teachers to better understand their students’ behaviors. The objective of this study is to develop a Web application that records students’ study logs, including their mouse trajectories, and to devise an IR tool that can summarize such diversified data. The results of an experiment are also scrutinized to provide an analysis of the relationship between learners’ activities and their study logs.

  10. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Directory of Open Access Journals (Sweden)

    Meiyan Huang

    This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.

  11. [An Improved DDV Method to Retrieve AOT for HJ CCD Image in Typical Mountainous Areas].

    Science.gov (United States)

    Zhao, Zhi-qiang; Li, Ai-nong; Bian, Jin-hu; Huang, Cheng-quan

    2015-06-01

    Domestic HJ CCD imagery has great potential for application in environment and disaster monitoring and prediction. However, because HJ CCD images lack a mid-infrared band, Aerosol Optical Thickness (AOT) cannot be retrieved directly with the traditional Dark Dense Vegetation (DDV) method; moreover, mountain AOT varies dramatically in space and time under the influence of the mountain environment, which reduces the accuracy of atmospheric correction. Exploiting the wide distribution of dark dense forest in mountainous areas, a red-band histogram threshold method was introduced to identify mountainous DDV pixels. Subsequently, the AOT of the DDV pixels was retrieved from a lookup table constructed with the 6S radiative transfer model, under the assumption of a constant ratio between surface reflectance in the red and blue bands, and then interpolated to the whole image. The MODIS aerosol product and the AOT retrieved by the proposed algorithm were highly consistent in spatial distribution, and the HJ CCD image, with its higher spatial resolution, proved more suitable for remote sensing monitoring of aerosol in mountain areas. The fitted scatterplot curve was y = 0.8286x - 0.01, with R² = 0.9843. These results indicate that the improved DDV method can effectively retrieve AOT, with a precision sufficient for atmospheric correction and terrain radiation correction of HJ CCD images in mountainous areas. The improvement to the traditional DDV method effectively solves the insufficient-information problem that arises when solving the radiative transfer equation for HJ CCD images, which have only visible and near-infrared bands. Meanwhile, the improved method fully considers the influence of the mountainous terrain environment. It lays a solid foundation for atmospheric correction of HJ CCD images in mountainous areas, and offers the possibility of automated processing. In addition, the red-band histogram threshold method was better than the NDVI method at identifying mountain DDV pixels. And, the lookup table and ratio between surface reflectance
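The red-band histogram threshold step can be sketched as selecting the darkest fraction of red-band pixels as DDV candidates. The `dark_fraction` value below is an illustrative choice, not the paper's threshold, and the subsequent 6S lookup-table retrieval is not reproduced here:

```python
import numpy as np

def ddv_mask(red, dark_fraction=0.05):
    """Candidate dark-dense-vegetation (DDV) pixels by a red-band
    histogram threshold: keep the darkest `dark_fraction` of pixels.
    The retrieved AOT of these pixels would then be interpolated
    across the scene.
    """
    return red <= np.quantile(red, dark_fraction)

rng = np.random.default_rng(1)
red = rng.uniform(0.0, 0.3, size=(64, 64))   # toy red-band reflectance
mask = ddv_mask(red, 0.05)
print(round(mask.mean(), 3))                 # close to 0.05
```

A histogram-based threshold like this needs only the visible bands, which is exactly why it suits a sensor without a mid-infrared channel.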

  12. Improving image retrieval effectiveness via query expansion using MeSH hierarchical structure.

    Science.gov (United States)

    Crespo Azcárate, Mariano; Mata Vázquez, Jacinto; Maña López, Manuel

    2013-01-01

    We explored two strategies for query expansion utilizing medical subject headings (MeSH) ontology to improve the effectiveness of medical image retrieval systems. In order to achieve greater effectiveness in the expansion, the search text was analyzed to identify which terms were most amenable to being expanded. To perform the expansions we utilized the hierarchical structure by which the MeSH descriptors are organized. Two strategies for selecting the terms to be expanded in each query were studied. The first consisted of identifying the medical concepts using the unified medical language system metathesaurus. In the second strategy the text of the query was divided into n-grams, resulting in sequences corresponding to MeSH descriptors. For the evaluation of the system, we used the collection made available by the ImageCLEF organization in its 2011 medical image retrieval task. The main measure of efficiency employed for evaluating the techniques developed was the mean average precision (MAP). Both strategies exceeded the average MAP score in the ImageCLEF 2011 competition (0.1644). The n-gram expansion strategy achieved a MAP of 0.2004, which represents an improvement of 21.89% over the average MAP score in the competition. On the other hand, the medical concepts expansion strategy scored 0.2172 in the MAP, representing a 32.11% improvement. This run won the text-based medical image retrieval task in 2011. Query expansion exploiting the hierarchical structure of the MeSH descriptors achieved a significant improvement in image retrieval systems.
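The n-gram strategy can be sketched as follows: split the query into word n-grams, match them against MeSH descriptors, and append the descriptors' hierarchical children. The in-memory dictionary below is a toy stand-in for the MeSH hierarchy, and the function names are ours:

```python
def ngrams(tokens, max_n=3):
    """All word n-grams up to length max_n, longest first."""
    out = []
    for n in range(max_n, 0, -1):
        out += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return out

def expand_query(query, mesh):
    """Expand query n-grams that match MeSH descriptors with the
    descriptors' children from the hierarchy (here, a toy dict)."""
    expansion = []
    for gram in ngrams(query.lower().split()):
        if gram in mesh:
            expansion += mesh[gram]
    return query.split() + expansion

# Tiny fragment standing in for the MeSH tree under "Lung Neoplasms":
mesh = {"lung neoplasms": ["pulmonary blastoma", "pancoast syndrome"]}
print(expand_query("ct lung neoplasms", mesh))
# → ['ct', 'lung', 'neoplasms', 'pulmonary blastoma', 'pancoast syndrome']
```

Matching multi-word n-grams before single words is what lets a phrase like "lung neoplasms" be expanded as one descriptor rather than two unrelated terms.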

  13. Retrieval of Sentence Sequences for an Image Stream via Coherence Recurrent Convolutional Networks.

    Science.gov (United States)

    Park, Cesc; Kim, Youngjin; Kim, Gunhee

    2017-05-02

    We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures of their experiences, much online visual information exists in the form of image streams, for which it is better to take the whole image stream into consideration when producing natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both the input and output dimensions to a sequence of images and a sequence of sentences. For retrieving a coherent flow of multiple sentences for a photo stream, we propose a multimodal neural architecture called the coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional long short-term memory (LSTM) networks, and an entity-based local coherence model. Our approach learns directly from the vast user-generated resource of blog posts as text-image parallel training data. We collect more than 22K unique blog posts with 170K associated images for the travel topics of NYC, Disneyland, Australia, and Hawaii. We demonstrate that our approach outperforms other state-of-the-art image captioning methods for text sequence generation, using both quantitative measures and user studies via Amazon Mechanical Turk.

  14. Single-image phase retrieval using an edge illumination X-ray phase-contrast imaging setup

    Energy Technology Data Exchange (ETDEWEB)

    Diemoz, Paul C., E-mail: p.diemoz@ucl.ac.uk; Vittoria, Fabio A. [University College London, London WC1 E6BT (United Kingdom); Research Complex at Harwell, Oxford Harwell Campus, Didcot OX11 0FA (United Kingdom); Hagen, Charlotte K.; Endrizzi, Marco [University College London, London WC1 E6BT (United Kingdom); Coan, Paola [Ludwig-Maximilians-University, Munich 81377 (Germany); Ludwig-Maximilians-University, Garching 85748 (Germany); Brun, Emmanuel [Ludwig-Maximilians-University, Garching 85748 (Germany); European Synchrotron Radiation Facility, Grenoble 38043 (France); Wagner, Ulrich H.; Rau, Christoph [Diamond Light Source, Harwell Oxford Campus, Didcot OX11 0DE (United Kingdom); Robinson, Ian K. [Research Complex at Harwell, Oxford Harwell Campus, Didcot OX11 0FA (United Kingdom); London Centre for Nanotechnology, London WC1 H0AH (United Kingdom); Bravin, Alberto [European Synchrotron Radiation Facility, Grenoble 38043 (France); Olivo, Alessandro [University College London, London WC1 E6BT (United Kingdom); Research Complex at Harwell, Oxford Harwell Campus, Didcot OX11 0FA (United Kingdom)

    2015-06-25

    A method enabling the retrieval of thickness or projected electron density of a sample from a single input image is derived theoretically and successfully demonstrated on experimental data. A method is proposed which enables the retrieval of the thickness or of the projected electron density of a sample from a single input image acquired with an edge illumination phase-contrast imaging setup. The method assumes the case of a quasi-homogeneous sample, i.e. a sample with a constant ratio between the real and imaginary parts of its complex refractive index. Compared with current methods based on combining two edge illumination images acquired in different configurations of the setup, this new approach presents advantages in terms of simplicity of acquisition procedure and shorter data collection time, which are very important especially for applications such as computed tomography and dynamical imaging. Furthermore, the fact that phase information is directly extracted, instead of its derivative, can enable a simpler image interpretation and be beneficial for subsequent processing such as segmentation. The method is first theoretically derived and its conditions of applicability defined. Quantitative accuracy in the case of homogeneous objects as well as enhanced image quality for the imaging of complex biological samples are demonstrated through experiments at two synchrotron radiation facilities. The large range of applicability, the robustness against noise and the need for only one input image suggest a high potential for investigations in various research subjects.

  15. Morphological segmentation and digital image processing to retrieve geometric characteristics of fabric filaments

    Science.gov (United States)

    Guizar-Sicairos, Manuel; Hernandez-Aranda, Raul; Serroukh, Ibrahim; Serrano-Heredia, Alfonso

    2005-03-01

    An image processing algorithm, based mainly on morphological enhancement and segmentation, is developed and applied to optical microscope images of transverse cuts of fabric filaments to retrieve useful shape characteristics. Adaptive filtering and non-linear fitting algorithms are also applied. Computer-generated noisy images are used to estimate the algorithm's accuracy, with excellent results. This algorithm is a significant improvement over the current human-based inspection method for filament shape analysis, and its development and application will improve quality control in the textile industry. The complete procedure is outlined in the present work, showing relevant results and pointing out pertinent restrictions.

  16. Geometric super-resolved imaging based upon axial scanning and phase retrieval.

    Science.gov (United States)

    Borkowski, Amikam; Marom, Emanuel; Zalevsky, Zeev

    2014-06-20

    In this paper, we propose a new geometric super-resolving approach that overcomes the geometric resolution reduction caused by the spatially large pixels of the detector array. The improvement process is obtained by applying an axial scanning procedure. In the scanning process, several images are captured corresponding to focus applied at several axial planes. By applying an iterative Gerchberg-Saxton-based algorithm, we managed to retrieve the phase and to reconstruct the original high-resolution image from the captured set of low-resolution images. In addition, the paper also presents a numerically efficient algorithm to compute the free space Fresnel integral.
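The core Gerchberg-Saxton iteration alternates between enforcing a known magnitude in the object plane and in the Fourier plane. The sketch below is the textbook single-plane-pair algorithm on a synthetic object, not the paper's multi-plane axial-scanning variant; object size and iteration count are arbitrary choices:

```python
import numpy as np

def gerchberg_saxton(obj_mag, four_mag, iters=200, seed=0):
    """Alternate between object-plane and Fourier-plane magnitude
    constraints to recover the phase (classic Gerchberg-Saxton)."""
    rng = np.random.default_rng(seed)
    field = obj_mag * np.exp(1j * rng.uniform(0, 2 * np.pi, obj_mag.shape))
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = four_mag * np.exp(1j * np.angle(F))         # Fourier constraint
        field = np.fft.ifft2(F)
        field = obj_mag * np.exp(1j * np.angle(field))  # object constraint
    return field

def fourier_error(field, four_mag):
    """Residual mismatch against the Fourier-plane magnitude data."""
    return np.linalg.norm(np.abs(np.fft.fft2(field)) - four_mag)

# Synthetic test object: Gaussian amplitude with a linear phase ramp.
n = 32
y, x = np.mgrid[:n, :n]
truth = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / 50) * np.exp(1j * 0.1 * x)
obj_mag, four_mag = np.abs(truth), np.abs(np.fft.fft2(truth))

rec = gerchberg_saxton(obj_mag, four_mag)
print(fourier_error(rec, four_mag))   # residual Fourier-magnitude error
```

The error-reduction property of this iteration (the Fourier-plane error never increases) is what makes the multi-plane extension with several defocused captures well behaved.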

  17. Unsupervised symmetrical trademark image retrieval in soccer telecast using wavelet energy and quadtree decomposition

    Science.gov (United States)

    Ong, Swee Khai; Lim, Wee Keong; Soo, Wooi King

    2013-04-01

    Trademark, a distinctive symbol, is used to distinguish products or services provided by a particular person, group or organization from other similar entries. As a trademark represents the reputation and credit standing of its owner, it is important to differentiate one trademark from another. Many methods have been proposed to identify, classify and retrieve trademarks. However, most methods require a feature database and sample sets for training prior to the recognition and retrieval process. In this paper, a new feature on wavelet coefficients, the localized wavelet energy, is introduced to extract features of trademarks. With this, unsupervised content-based symmetrical trademark image retrieval is proposed without a database or prior training set. The feature analysis is done by an integration of the proposed localized wavelet energy and a quadtree-decomposed regional symmetrical vector. The proposed framework eliminates the dependence on a query database and human participation during the retrieval process. In this paper, trademarks of soccer game sponsors are the intended trademark category. Video frames from soccer telecasts are extracted and processed for this study. Reasonably good localization and retrieval results on certain categories of trademarks are achieved.
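A localized wavelet energy feature can be sketched with a single-level Haar transform whose detail-subband energy is summed per region. The block partition below is a crude stand-in for the paper's quadtree decomposition, and the symmetry-vector step is omitted:

```python
import numpy as np

def haar_level(img):
    """One level of the 2-D Haar transform: approximation plus
    horizontal, vertical and diagonal detail subbands."""
    p00, p01 = img[::2, ::2], img[::2, 1::2]
    p10, p11 = img[1::2, ::2], img[1::2, 1::2]
    a = (p00 + p01 + p10 + p11) / 4
    h = (p00 - p01 + p10 - p11) / 4   # horizontal differences
    v = (p00 + p01 - p10 - p11) / 4   # vertical differences
    d = (p00 - p01 - p10 + p11) / 4   # diagonal differences
    return a, h, v, d

def localized_energy(img, blocks=2):
    """Detail-subband energy summed per block, a simplified stand-in
    for the quadtree regions over which wavelet energy is localized."""
    _, h, v, d = haar_level(np.asarray(img, dtype=float))
    detail = h ** 2 + v ** 2 + d ** 2
    r, c = detail.shape[0] // blocks, detail.shape[1] // blocks
    return np.array([[detail[i * r:(i + 1) * r, j * c:(j + 1) * c].sum()
                      for j in range(blocks)] for i in range(blocks)])

img = np.zeros((32, 32))
img[:, 15:] = 1.0                 # vertical edge just left of centre
print(localized_energy(img))      # energy concentrates in the left blocks
```

Symmetric trademarks produce near-equal energy in mirrored regions, which is what the regional symmetrical vector exploits.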

  18. Web Site Presentation of Corporate Social Responsibility towards Customers Trust and Corporate Image

    National Research Council Canada - National Science Library

    Mohamad Hisyam Selamat; Rafeah Mat Saat; Raja Haslinda Raja Mohd

    2016-01-01

    .... Companies use the potential of their web site in communicating CSR issues. This study aims to examine the role of web site presentation of CSR disclosure and its relationship with trust and corporate image...

  19. Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer

    Directory of Open Access Journals (Sweden)

    David A Gutman

    2014-05-01

    Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a lightweight framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework that wraps around the REST API and queries the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata, all within a single viewing instance.

  20. Hurricane Imaging Radiometer Wind Speed and Rain Rate Retrievals during the 2010 GRIP Flight Experiment

    Science.gov (United States)

    Sahawneh, Saleem; Farrar, Spencer; Johnson, James; Jones, W. Linwood; Roberts, Jason; Biswas, Sayak; Cecil, Daniel

    2014-01-01

    Microwave remote sensing observations of hurricanes, from NOAA and USAF hurricane surveillance aircraft, provide vital data for hurricane research and operations, for forecasting the intensity and track of tropical storms. The current operational standard for hurricane wind speed and rain rate measurements is the Stepped Frequency Microwave Radiometer (SFMR), which is a nadir viewing passive microwave airborne remote sensor. The Hurricane Imaging Radiometer, HIRAD, will extend the nadir viewing SFMR capability to provide wide swath images of wind speed and rain rate, while flying on a high altitude aircraft. HIRAD was first flown in the Genesis and Rapid Intensification Processes, GRIP, NASA hurricane field experiment in 2010. This paper reports on geophysical retrieval results and provides hurricane images from GRIP flights. An overview of the HIRAD instrument and the radiative transfer theory based, wind speed/rain rate retrieval algorithm is included. Results are presented for hurricane wind speed and rain rate for Earl and Karl, with comparison to collocated SFMR retrievals and WP3D Fuselage Radar images for validation purposes.

  1. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

    Full Text Available Hashing has been widely deployed to perform the Approximate Nearest Neighbor (ANN search for the large-scale image retrieval to solve the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform the simultaneous feature learning and the hash code learning with deep neural networks. Even though deep hashing has shown the better performance than traditional hashing methods with handcrafted features, the learned compact hash code from one deep hashing network may not provide the full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI, to generate a more compact hash code which has stronger expression ability and distinction capability. In our method, we train two different architecture’s deep hashing subnetworks and fuse the hash codes generated by the two subnetworks together to unify images. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.

  2. Content-based image retrieval for interstitial lung diseases using classification confidence

    Science.gov (United States)

    Dash, Jatindra Kumar; Mukhopadhyay, Sudipta; Prabhakar, Nidhi; Garg, Mandeep; Khandelwal, Niranjan

    2013-02-01

    Content Based Image Retrieval (CBIR) system could exploit the wealth of High-Resolution Computed Tomography (HRCT) data stored in the archive by finding similar images to assist radiologists for self learning and differential diagnosis of Interstitial Lung Diseases (ILDs). HRCT findings of ILDs are classified into several categories (e.g. consolidation, emphysema, ground glass, nodular etc.) based on their texture like appearances. Therefore, analysis of ILDs is considered as a texture analysis problem. Many approaches have been proposed for CBIR of lung images using texture as primitive visual content. This paper presents a new approach to CBIR for ILDs. The proposed approach makes use of a trained neural network (NN) to find the output class label of query image. The degree of confidence of the NN classifier is analyzed using Naive Bayes classifier that dynamically takes a decision on the size of the search space to be used for retrieval. The proposed approach is compared with three simple distance based and one classifier based texture retrieval approaches. Experimental results show that the proposed technique achieved highest average percentage precision of 92.60% with lowest standard deviation of 20.82%.

  3. Phase retrieval and 3D imaging in gold nanoparticles based fluorescence microscopy (Conference Presentation)

    Science.gov (United States)

    Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh M.; Meir, Rinat; Zalevsky, Zeev

    2017-02-01

    Optical sectioning microscopy can provide highly detailed three dimensional (3D) images of biological samples. However, it requires acquisition of many images per volume, and is therefore time consuming, and may not be suitable for live cell 3D imaging. We propose the use of the modified Gerchberg-Saxton phase retrieval algorithm to enable full 3D imaging of gold nanoparticles tagged sample using only two images. The reconstructed field is free space propagated to all other focus planes using post processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because we propose to apply the phase retrieving on nano particles, the regular ambiguities typical to the Gerchberg-Saxton algorithm, are eliminated. The proposed concept is then further enhanced also for tracking of single fluorescent particles within a three dimensional (3D) cellular environment based on image processing algorithms that can significantly increases localization accuracy of the 3D point spread function in respect to regular Gaussian fitting. All proposed concepts are validated both on simulated data as well as experimentally.

  4. Spatially detailed retrievals of spring phenology from single-season high-resolution image time series

    Science.gov (United States)

    Vrieling, Anton; Skidmore, Andrew K.; Wang, Tiejun; Meroni, Michele; Ens, Bruno J.; Oosterbeek, Kees; O'Connor, Brian; Darvishzadeh, Roshanak; Heurich, Marco; Shepherd, Anita; Paganini, Marc

    2017-07-01

    Vegetation indices derived from satellite image time series have been extensively used to estimate the timing of phenological events like season onset. Medium spatial resolution (≥250 m) satellite sensors with daily revisit capability are typically employed for this purpose. In recent years, phenology is being retrieved at higher resolution (≤30 m) in response to increasing availability of high-resolution satellite data. To overcome the reduced acquisition frequency of such data, previous attempts involved fusion between high- and medium-resolution data, or combinations of multi-year acquisitions in a single phenological reconstruction. The objectives of this study are to demonstrate that phenological parameters can now be retrieved from single-season high-resolution time series, and to compare these retrievals against those derived from multi-year high-resolution and single-season medium-resolution satellite data. The study focuses on the island of Schiermonnikoog, the Netherlands, which comprises a highly-dynamic saltmarsh, dune vegetation, and agricultural land. Combining NDVI series derived from atmospherically-corrected images from RapidEye (5 m-resolution) and the SPOT5 Take5 experiment (10m-resolution) acquired between March and August 2015, phenological parameters were estimated using a function fitting approach. We then compared results with phenology retrieved from four years of 30 m Landsat 8 OLI data, and single-year 100 m Proba-V and 250 m MODIS temporal composites of the same period. Retrieved phenological parameters from combined RapidEye/SPOT5 displayed spatially consistent results and a large spatial variability, providing complementary information to existing vegetation community maps. Retrievals that combined four years of Landsat observations into a single synthetic year were affected by the inclusion of years with warmer spring temperatures, whereas adjustment of the average phenology to 2015 observations was only feasible for a few pixels

  5. Coincident Aerosol and H2O Retrievals versus HSI Imager Field Campaign ReportH2O Retrievals versus HSI Imager Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Gail P. [National Oceanic and Atmospheric Administration (NOAA), Washington, DC (United States); Cipar, John [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States); Armstrong, Peter S. [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States); van den Bosch, J. [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States)

    2016-05-01

    Two spectrally calibrated tarpaulins (tarps) were co-located at a fixed Global Positioning System (GPS) position on the gravel antenna field at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility’s Southern Great Plains (SGP) site. Their placement was timed to coincide with the overflight of a new hyperspectral imaging satellite. The intention was to provide an analysis of the data obtained, including the measured and retrieved spectral albedos for the calibration tarps. Subsequently, a full suite of retrieved values of H2O column, and the aerosol overburden, were to be compared to those determined by alternate SGP ground truth assets. To the extent possible, the down-looking cloud images would be assessed against the all-sky images. Because cloud contamination above a certain level precludes the inversion processing of the satellite data, coupled with infrequent targeting opportunities, clear-sky conditions were imposed. The SGP site was chosen not only as a target of opportunity for satellite validation, but as perhaps the best coincident field measurement site, as established by DOE’s ARM Facility. The satellite team had every expectation of using the information obtained from the SGP to improve the inversion products for all subsequent satellite images, including the cloud and radiative models and parameterizations and, thereby, the performance assessment for subsequent and historic image collections. Coordinating with the SGP onsite team, four visits, all in 2009, to the Central Facility occurred: • June 6-8 (successful exploratory visit to plan tarp placements, etc.) • July 18-24 (canceled because of forecast for heavy clouds) • Sep 9-12 (ground tarps placed, onset of clouds) • Nov 7-9 (visit ultimately canceled because of weather predictions). As noted, in each instance, any significant overcast prediction precluded image collection from the satellite. Given the long task-scheduling procedures

  6. Multimodal graph-based reranking for web image search.

    Science.gov (United States)

    Wang, Meng; Li, Hao; Tao, Dacheng; Lu, Ke; Wu, Xindong

    2012-11-01

    This paper introduces a web image search reranking approach that explores multiple modalities in a graph-based learning scheme. Different from the conventional methods that usually adopt a single modality or integrate multiple modalities into a long feature vector, our approach can effectively integrate the learning of relevance scores, weights of modalities, and the distance metric and its scaling for each modality into a unified scheme. In this way, the effects of different modalities can be adaptively modulated and better reranking performance can be achieved. We conduct experiments on a large dataset that contains more than 1000 queries and 1 million images to evaluate our approach. Experimental results demonstrate that the proposed reranking approach is more robust than using each individual modality, and it also performs better than many existing methods.

  7. Optical image encryption using password key based on phase retrieval algorithm

    Science.gov (United States)

    Zhao, Tieyu; Ran, Qiwen; Yuan, Lin; Chi, Yingying; Ma, Jing

    2016-04-01

    A novel optical image encryption system is proposed using password key based on phase retrieval algorithm (PRA). In the encryption process, a shared image is taken as a symmetric key and the plaintext is encoded into the phase-only mask based on the iterative PRA. The linear relationship between the plaintext and ciphertext is broken using the password key, which can resist the known plaintext attack. The symmetric key and the retrieved phase are imported into the input plane and Fourier plane of 4f system during the decryption, respectively, so as to obtain the plaintext on the CCD. Finally, we analyse the key space of the password key, and the results show that the proposed scheme can resist a brute force attack due to the flexibility of the password key.

  8. Trademark Image Retrieval using Angular Radial Histogram Approach on Object Region

    OpenAIRE

    Moe Zet Pwint; Mie Mie Tin; Yokota, Mitsuhiro; Thi Thi Zin

    2017-01-01

    Trademarks are valuable things for companies and organizations around the world. Trademarks can represent standard, quality, service and background image of the companies or the organization. Due to the increasing number of business companies and also trademarks, it is important to have a computerized system that can detect and extract the similarity of trademarks because a new trademark must be different from other registered trademarks. Content-Based Trademark Retrieval (CBTR) can deal with...

  9. An Intelligent System for Analyzing Welding Defects using Image Retrieval Techniques

    OpenAIRE

    Pein, Raoul Pascal; Lu, Joan; Stav, John Birger; Xu, Qiang; Uran, Miro; Mráz, Luboš

    2009-01-01

    The development of new approaches in image processing and retrieval provides several opportunities in supporting in different\\ud domains. The group of welding engineers frequently needs to conduct visual inspections to assess the quality of welding products.\\ud It is investigated, if this process can be supported by different kinds of software. Techniques from a generic CBIR system have\\ud been successfully used to cluster welding photographs according to the severeness of visual faults. Simi...

  10. Semantics-Based Intelligent Indexing and Retrieval of Digital Images - A Case Study

    Science.gov (United States)

    Osman, Taha; Thakker, Dhavalkumar; Schaefer, Gerald

    The proliferation of digital media has led to a huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they typically rely on statistical analysis of query keyword recurrence in the image annotation or surrounding text. In this chapter we present a semantically enabled image annotation and retrieval engine that is designed to satisfy the requirements of commercial image collections market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, thus allowing for more intelligent reasoning about the image content and subsequently obtaining a more accurate set of results and a richer set of alternatives matchmaking the original query. We also show how our well-analysed and designed domain ontology contributes to the implicit expansion of user queries as well as presenting our initial thoughts on exploiting lexical databases for explicit semantic-based query expansion.

  11. Single-channel color image encryption using phase retrieve algorithm in fractional Fourier domain

    Science.gov (United States)

    Sui, Liansheng; Xin, Meiting; Tian, Ailing; Jin, Haiyan

    2013-12-01

    A single-channel color image encryption is proposed based on a phase retrieve algorithm and a two-coupled logistic map. Firstly, a gray scale image is constituted with three channels of the color image, and then permuted by a sequence of chaotic pairs generated by the two-coupled logistic map. Secondly, the permutation image is decomposed into three new components, where each component is encoded into a phase-only function in the fractional Fourier domain with a phase retrieve algorithm that is proposed based on the iterative fractional Fourier transform. Finally, an interim image is formed by the combination of these phase-only functions and encrypted into the final gray scale ciphertext with stationary white noise distribution by using chaotic diffusion, which has camouflage property to some extent. In the process of encryption and decryption, chaotic permutation and diffusion makes the resultant image nonlinear and disorder both in spatial domain and frequency domain, and the proposed phase iterative algorithm has faster convergent speed. Additionally, the encryption scheme enlarges the key space of the cryptosystem. Simulation results and security analysis verify the feasibility and effectiveness of this method.

  12. Region-Based Image Retrieval Using an Object Ontology and Relevance Feedback

    Directory of Open Access Journals (Sweden)

    Kompatsiaris Ioannis

    2004-01-01

    Full Text Available An image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions and endow the indexing and retrieval system with content-based functionalities. Low-level descriptors for the color, position, size, and shape of each region are subsequently extracted. These arithmetic descriptors are automatically associated with appropriate qualitative intermediate-level descriptors, which form a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword and their relations in a human-centered fashion. When querying for a specific semantic object (or objects, the intermediate-level descriptor values associated with both the semantic object and all image regions in the collection are initially compared, resulting in the rejection of most image regions as irrelevant. Following that, a relevance feedback mechanism, based on support vector machines and using the low-level descriptors, is invoked to rank the remaining potentially relevant image regions and produce the final query results. Experimental results and comparisons demonstrate, in practice, the effectiveness of our approach.

  13. Evaluation of shape indexing methods for content-based retrieval of x-ray images

    Science.gov (United States)

    Antani, Sameer; Long, L. Rodney; Thoma, George R.; Lee, Dah-Jye

    2003-01-01

    Efficient content-based image retrieval of biomedical images is a challenging problem of growing research interest. Feature representation algorithms used in indexing medical images on the pathology of interest have to address conflicting goals of reducing feature dimensionality while retaining important and often subtle biomedical features. At the Lister Hill National Center for Biomedical Communications, a R&D division of the National Library of Medicine, we are developing a content-based image retrieval system for digitized images of a collection of 17,000 cervical and lumbar x-rays taken as a part of the second National Health and Nutrition Examination Survey (NHANES II). Shape is the only feature that effectively describes various pathologies identified by medical experts as being consistently and reliably found in the image collection. In order to determine if the state of the art in shape representation methods is suitable for this application, we have evaluated representative algorithms selected from the literature. The algorithms were tested on a subset of 250 vertebral shapes. In this paper we present the requirements of an ideal algorithm, define the evaluation criteria, and present the results and our analysis of the evaluation. We observe that while the shape methods perform well on visual inspection of the overall shape boundaries, they fall short in meeting the needs of determining similarity between the vertebral shapes based on the pathology.

  14. A preliminary assessment. Digital imaging storage and retrieval in the 1980s.

    Science.gov (United States)

    1984-01-01

    The current status of digital imaging storage and retrieval is described, as applied to both digitally created images and those converted from conventional films. Technologies that are beginning to play a role in digital image management--particularly, different configurations of Picture Archiving and Communications Systems (PACS)--are examined in terms of their stage of development, equipment, and operating costs. This assessment finds that the future success and diffusion of these systems will depend upon the diagnostic adequacy of digital images, improvements in image digitizing processes, and the availability of optical disk or other low-cost mass storage. In addition, the paper concludes that Medicare's prospective payment system will greatly influence the spread of this technology because of both the cost-saving incentives the system will place on health care professionals and the still-undetermined method of capital cost reimbursement.

  15. Secret shared multiple-image encryption based on row scanning compressive ghost imaging and phase retrieval in the Fresnel domain

    Science.gov (United States)

    Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2017-09-01

    A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.

  16. Local structure-based region-of-interest retrieval in brain MR images.

    Science.gov (United States)

    Unay, Devrim; Ekin, Ahmet; Jasinschi, Radu S

    2010-07-01

    The aging population and the growing amount of medical data have increased the need for automated tools in the neurology departments. Although the researchers have been developing computerized methods to help the medical expert, these efforts have primarily emphasized to improve the effectiveness in single patient data, such as computing a brain lesion size. However, patient-to-patient comparison that should help improve diagnosis and therapy has not received much attention. To this effect, this paper introduces a fast and robust region-of-interest retrieval method for brain MR images. We make the following various contributions to the domains of brain MR image analysis, and search and retrieval system: 1) we show the potential and robustness of local structure information in the search and retrieval of brain MR images; 2) we provide analysis of two complementary features, local binary patterns (LBPs) and Kanade-Lucas-Tomasi feature points, and their comparison with a baseline method; 3) we show that incorporating spatial context in the features substantially improves accuracy; and 4) we automatically extract dominant LBPs and demonstrate their effectiveness relative to the conventional LBP approach. Comprehensive experiments on real and simulated datasets revealed that dominant LBPs with spatial context is robust to geometric deformations and intensity variations, and have high accuracy and speed even in pathological cases. The proposed method can not only aid the medical expert in disease diagnosis, or be used in scout (localizer) scans for optimization of acquisition parameters, but also supports low-power handheld devices.

  17. Unified modeling language and design of a case-based retrieval system in medical imaging.

    Science.gov (United States)

    LeBozec, C.; Jaulent, M. C.; Zapletal, E.; Degoulet, P.

    1998-01-01

    One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case-base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM--Images and Diagnosis from Examples in Medicine--a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspect of this approach was selecting the relevant objects of the system according to user requirements and making visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required but UML seems to be a promising formalism, improving the communication between the developers and users. Images Figure 6 Figure 7 PMID:9929346

  18. STUDY COMPARISON OF SVM-, K-NN- AND BACKPROPAGATION-BASED CLASSIFIER FOR IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Muhammad Athoillah

    2015-03-01

    Full Text Available Classification is a method for compiling data systematically according to the rules that have been set previously. In recent years classification method has been proven to help many people’s work, such as image classification, medical biology, traffic light, text classification etc. There are many methods to solve classification problem. This variation method makes the researchers find it difficult to determine which method is best for a problem. This framework is aimed to compare the ability of classification methods, such as Support Vector Machine (SVM, K-Nearest Neighbor (K-NN, and Backpropagation, especially in study cases of image retrieval with five category of image dataset. The result shows that K-NN has the best average result in accuracy with 82%. It is also the fastest in average computation time with 17,99 second during retrieve session for all categories class. The Backpropagation, however, is the slowest among three of them. In average it needed 883 second for training session and 41,7 second for retrieve session.

  19. Retrieval of Both Soil Moisture and Texture Using TerraSAR-X Images

    Directory of Open Access Journals (Sweden)

    Azza Gorrab

    2015-08-01

    Full Text Available The aim of this paper is to propose a methodology combing multi-temporal X-band SAR images (TerraSAR-X with continuous ground thetaprobe measurements, for the retrieval of surface soil moisture and texture at a high spatial resolution. Our analysis is based on seven radar images acquired at a 36° incidence angle in the HH polarization, over a semi-arid site in Tunisia (North Africa. The soil moisture estimations are based on an empirical change detection approach using TerraSAR-X data and ground auxiliary thetaprobe network measurements. Two assumptions were tested: (1 roughness variations during the three-month radar acquisition campaigns were not accounted for; (2 a simple correction for temporal variations in roughness was included. The results reveal a small improvement in the estimation of soil moisture when a correction for temporal variations in roughness is introduced. By considering the estimated temporal dynamics of soil moisture, a methodology is proposed for the retrieval of clay and sand content (expressed as percentages in soil. Two empirical relationships were established between the mean moisture values retrieved from the seven acquired radar images and the two soil texture components over 36 test fields. Validation of the proposed approach was carried out over a second set of 34 fields, showing that highly accurate clay estimations can be achieved. Maps of soil moisture, clay and sand percentages at the studied site are derived.

  20. A Visual Analytics Approach Using the Exploration of Multidimensional Feature Spaces for Content-Based Medical Image Retrieval.

    Science.gov (United States)

    Kumar, Ashnil; Nette, Falk; Klein, Karsten; Fulham, Michael; Kim, Jinman

    2015-09-01

    Content-based image retrieval (CBIR) is a search technique based on the similarity of visual features and has demonstrated potential benefits for medical diagnosis, education, and research. However, clinical adoption of CBIR is partially hindered by the difference between the computed image similarity and the user's search intent, the semantic gap, with the end result that relevant images with outlier features may not be retrieved. Furthermore, most CBIR algorithms do not provide intuitive explanations as to why the retrieved images were considered similar to the query (e.g., which subset of features were similar), hence, it is difficult for users to verify if relevant images, with a small subset of outlier features, were missed. Users, therefore, resort to examining irrelevant images and there are limited opportunities to discover these "missed" images. In this paper, we propose a new approach to medical CBIR by enabling a guided visual exploration of the search space through a tool, called visual analytics for medical image retrieval (VAMIR). The visual analytics approach facilitates interactive exploration of the entire dataset using the query image as a point-of-reference. We conducted a user study and several case studies to demonstrate the capabilities of VAMIR in the retrieval of computed tomography images and multimodality positron emission tomography and computed tomography images.

  1. Web Services for Dynamic Coloring of UAVSAR Images

    Science.gov (United States)

    Wang, Jun; Pierce, Marlon; Donnellan, Andrea; Parker, Jay

    2015-08-01

    QuakeSim has implemented a service-based Geographic Information System to enable users to access large amounts of Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) data through an online interface. The QuakeSim Interferometric Synthetic Aperture Radar (InSAR) profile tool calculates radar-observed displacement (from an unwrapped interferogram product) along user-specified lines. Pre-rendered thumbnails with InSAR fringe patterns are used to display interferogram and unwrapped phase images on a Google Map in the InSAR profile tool. One challenge with this tool lies in the user visually identifying regions of interest when drawing the profile line. This requires that the user correctly interpret the InSAR imagery, which currently uses fringe patterns. The mapping between pixel color and pixel value is not a one-to-one relationship from the InSAR fringe pattern, and it causes difficulty in understanding general displacement information for QuakeSim users. The goal of this work is to generate color maps that directly reflect the pixel values (displacement) as an addition to the pre-rendered images. Because of an extremely uneven distribution of pixel values on an InSAR image, a histogram-based, nonlinear color template generation algorithm is currently under development. A web service enables on-the-fly coloring of UAVSAR images with dynamically generated color templates.

  2. Computer-aided diagnosis of mammographic masses using geometric verification-based image retrieval

    Science.gov (United States)

    Li, Qingliang; Shi, Weili; Yang, Huamin; Zhang, Huimao; Li, Guoxin; Chen, Tao; Mori, Kensaku; Jiang, Zhengang

    2017-03-01

    Computer-aided diagnosis of masses in mammograms is an important aid in detecting breast cancer, and the use of retrieval systems in breast examination is increasing gradually. In this context, methods exploiting the vocabulary-tree framework and the inverted file for mammographic mass retrieval have demonstrated high accuracy and excellent scalability. However, such methods treat the features in each image simply as visual words and ignore the spatial configuration of those features, which significantly degrades retrieval performance. To overcome this drawback, we introduce a geometric verification method for mammographic mass retrieval. First, we obtain corresponding feature matches based on the vocabulary-tree framework and the inverted file. Then, we capture the local similarity of deformations by constructing circular regions around corresponding pairs. We segment each circle to express the geometric relationship of local matches within the region and generate a strict spatial encoding. Finally, we judge whether the matched features are correct by verifying that all spatial encodings satisfy geometric consistency. Experiments show the promising results of our approach.
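    A much-simplified form of geometric verification for feature matches can be illustrated as follows; this angular-sector filter is a hedged stand-in for the paper's circle-segmentation spatial encoding, not the actual method:

```python
import numpy as np

def filter_matches_by_angle(pts_a, pts_b, n_sectors=8):
    """Simplified geometric verification: quantize each match's
    displacement direction into angular sectors and keep only the
    matches in the dominant sector, discarding geometrically
    inconsistent outliers."""
    disp = pts_b - pts_a
    angles = np.arctan2(disp[:, 1], disp[:, 0])           # in [-pi, pi]
    sectors = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    dominant = np.bincount(sectors, minlength=n_sectors).argmax()
    return sectors == dominant
```

    Matches whose displacement disagrees with the dominant direction are rejected, which is the same intuition the spatial-encoding check applies at a finer, per-region granularity.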

  3. Plant Leaf Chlorophyll Content Retrieval Based on a Field Imaging Spectroscopy System

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-10-01

    Full Text Available A field imaging spectrometer system (FISS; 380–870 nm, 344 bands) was designed for agriculture applications. In this study, FISS was used to gather spectral information from soybean leaves. The chlorophyll content was retrieved using multiple linear regression (MLR), partial least squares (PLS) regression, and support vector machine (SVM) regression. Our objective was to verify the performance of FISS in a quantitative spectral analysis through the estimation of chlorophyll content and to determine a proper quantitative spectral analysis method for processing FISS data. The results revealed that the derivative reflectance was a more sensitive indicator of chlorophyll content and could extract content information more efficiently than the spectral reflectance, which is more significant for FISS data compared to ASD (analytical spectral devices) data, reducing the corresponding RMSE (root mean squared error) by 3.3%–35.6%. Compared with the spectral features, the regression methods had smaller effects on the retrieval accuracy. A multivariate linear model could be the ideal model to retrieve chlorophyll information with a small number of significant wavelengths used. The smallest RMSE of the chlorophyll content retrieved using FISS data was 0.201 mg/g, a relative reduction of more than 30% compared with the RMSE based on a non-imaging ASD spectrometer, which represents a high estimation accuracy compared with the mean chlorophyll content of the sampled leaves (4.05 mg/g). Our study indicates that FISS could obtain both spectral and spatial detailed information of high quality. Its image-spectrum-in-one merit promotes the good performance of FISS in quantitative spectral analyses, and it can potentially be widely used in the agricultural sector.
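    A minimal sketch of the derivative-reflectance plus multiple-linear-regression pipeline, on synthetic band data; the function names and band choices are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def derivative_reflectance(reflectance, wavelengths):
    """First-derivative spectra (band-wise finite differences), which
    the study found more sensitive to chlorophyll than raw reflectance."""
    return np.gradient(reflectance, wavelengths, axis=-1)

def fit_mlr(spectra, chlorophyll):
    """Least-squares multiple linear regression from selected bands to
    chlorophyll content; an illustrative stand-in for the paper's MLR."""
    X = np.column_stack([spectra, np.ones(len(spectra))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, chlorophyll, rcond=None)
    return coef

def predict_mlr(coef, spectra):
    X = np.column_stack([spectra, np.ones(len(spectra))])
    return X @ coef
```

    In practice only a small number of significant wavelengths would be selected as predictors, matching the paper's observation that a multivariate linear model with few bands suffices.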

  4. The study of aerosol optical properties retrieval using Advanced Himawari Imager onboard a HIMAWARI-8 satellite

    Science.gov (United States)

    Lim, H.; Kim, J.; Choi, M.; Chan, P. W.; Park, H.; Go, S.

    2016-12-01

    The Japan Meteorological Agency (JMA) successfully launched the next-generation geostationary satellite Himawari-8 on 7 October 2014 and began formal operation on 7 July 2015. The Advanced Himawari Imager (AHI) sensor has 16 channels (from *** to *** nm) and observes the entire Earth every 10 minutes. This study attempts to retrieve aerosol optical properties with a spectral matching method, using four visible and near-infrared channels (470, 510, 640, and 860 nm) of the AHI sensor. The method uses a look-up table (LUT) approach to retrieve aerosol optical properties through an inversion process based on a radiative transfer model. The aerosol optical depth (AOD) retrieved from the AHI sensor shows a rather high correlation (R=0.896) with AODs from other geostationary satellite measurements such as the Geostationary Ocean Color Imager (GOCI). However, the slope and offset of the inter-comparison show that AHI AOD tends to be underestimated, which seems to be associated with uncertainty in the surface reflectance. We also inter-compare the top-of-atmosphere (TOA) reflectances across several satellite AOD products, including AHI, GOCI, and the MODerate resolution Imaging Spectro-radiometer (MODIS). Since AHI tends to show lower AODs than GOCI and MODIS, we first carry out a cross-calibration process and then retrieve the TOA reflectance again. After cross-calibration, the correlation coefficient of AHI AOD improves to about 0.91, with a slope of 1.023 and an offset of 0.025, indicating the improved quality of AHI AOD.
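    The LUT-based spectral matching step can be sketched as a nearest-match search; this toy version ignores the geometry and surface-reflectance dimensions a real LUT carries:

```python
import numpy as np

def retrieve_aod_lut(observed, lut_reflectances, lut_aods):
    """Minimal look-up-table inversion: pick the AOD whose simulated
    multi-channel TOA reflectance best matches the observation in a
    least-squares (spectral matching) sense."""
    residuals = np.sum((lut_reflectances - observed) ** 2, axis=1)
    return lut_aods[np.argmin(residuals)]
```

    A real retrieval interpolates between LUT nodes and over sun/view geometry rather than taking the single nearest entry.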

  5. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

    Full Text Available The ranking module is an important component of the search process, sorting retrieved pages by relevance. Since a collection of Web pages carries additional information in its hyperlink structure, that structure can be represented as a link score and combined with the content score of the usual information retrieval techniques. In this paper we report our study of a ranking score for Web pages that combines link analysis, PageRank scoring, with content analysis, Fourier Domain Scoring. Our experiments use a collection of Wikipedia Web pages related to the subject of Statistics, with the objectives of checking the correctness and evaluating the performance of the combined ranking method. Evaluation of PageRank scoring shows that the highest-scoring pages do not always relate to Statistics. Because links within Wikipedia articles keep users one click away from more information on any linked point, topics unrelated to Statistics are likely to be mentioned frequently in the collection. The combined method shows that giving the link score a weight proportional to the content score of Web pages does affect the retrieval results.
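    The combination of a link score with a content score can be sketched as follows; `alpha` and the power-iteration PageRank are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on an adjacency matrix; dangling nodes
    distribute their rank uniformly."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    trans = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (r @ trans)
    return r

def combined_score(content, link, alpha=0.7):
    """Weighted combination of a content score (e.g. Fourier Domain
    Scoring) and a link score (PageRank)."""
    return alpha * np.asarray(content) + (1 - alpha) * np.asarray(link)
```

    Tuning `alpha` trades off topical relevance against link authority, which is exactly the weighting effect the study evaluates.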

  6. Content-based image retrieval of digitized histopathology in boosted spectrally embedded spaces.

    Science.gov (United States)

    Sridhar, Akshay; Doyle, Scott; Madabhushi, Anant

    2015-01-01

    Content-based image retrieval (CBIR) systems allow for retrieval of images from within a database that are similar in visual content to a query image. This is useful for digital pathology, where text-based descriptors alone might be inadequate to accurately describe image content. By representing images via a set of quantitative image descriptors, the similarity between a query image with respect to archived, annotated images in a database can be computed and the most similar images retrieved. Recently, non-linear dimensionality reduction methods have become popular for embedding high-dimensional data into a reduced-dimensional space while preserving local object adjacencies, thereby allowing for object similarity to be determined more accurately in the reduced-dimensional space. However, most dimensionality reduction methods implicitly assume, in computing the reduced-dimensional representation, that all features are equally important. In this paper we present boosted spectral embedding(BoSE), which utilizes a boosted distance metric to selectively weight individual features (based on training data) to subsequently map the data into a reduced-dimensional space. BoSE is evaluated against spectral embedding (SE) (which employs equal feature weighting) in the context of CBIR of digitized prostate and breast cancer histopathology images. The following datasets, which were comprised of a total of 154 hematoxylin and eosin stained histopathology images, were used: (1) Prostate cancer histopathology (benign vs. malignant), (2) estrogen receptor (ER) + breast cancer histopathology (low vs. high grade), and (3) HER2+ breast cancer histopathology (low vs. high levels of lymphocytic infiltration). We plotted and calculated the area under precision-recall curves (AUPRC) and calculated classification accuracy using the Random Forest classifier. 
BoSE outperformed SE both in terms of CBIR-based (area under the precision-recall curve) and classifier-based (classification accuracy) performance measures.
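    The core idea of a boosted distance metric, weighting features unequally when comparing a query against the database, can be sketched as below; here the weights are given, whereas BoSE learns them from training data:

```python
import numpy as np

def weighted_retrieval(query, database, weights, k=5):
    """Retrieve the k nearest database images under a feature-weighted
    Euclidean distance, instead of treating all features as equally
    important."""
    diffs = database - query
    dists = np.sqrt(np.sum(weights * diffs ** 2, axis=1))
    order = np.argsort(dists)
    return order[:k], dists[order[:k]]
```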

  7. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze

    2017-04-24

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, up to now, no similarity learning method has been proposed to optimize the top precision measure. To fill this gap, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem whose objective function combines the losses of the relevant images ranked behind the top-ranked irrelevant image with the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. Experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.
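    The loss part of the objective can be illustrated with a hinge-style penalty on relevant images ranked behind the top-ranked irrelevant image; the Frobenius regularizer and QP solver are omitted in this sketch:

```python
import numpy as np

def top_precision_loss(scores, relevant, margin=1.0):
    """Hinge-style loss over relevant images not ranked above the
    top-ranked irrelevant image; an illustrative reading of the
    paper's objective, not its exact formulation."""
    scores = np.asarray(scores, float)
    relevant = np.asarray(relevant, bool)
    top_irrelevant = scores[~relevant].max()
    # Each relevant image pays for the amount by which it fails to
    # clear the best irrelevant score by the margin.
    return float(np.sum(np.maximum(0.0, top_irrelevant - scores[relevant] + margin)))
```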

  8. Medical Image Retrieval Using Vector Quantization and Fuzzy S-tree.

    Science.gov (United States)

    Nowaková, Jana; Prílepok, Michal; Snášel, Václav

    2017-02-01

    The aim of this article is to present a novel method for fuzzy medical image retrieval (FMIR) using vector quantization (VQ) with fuzzy signatures in conjunction with fuzzy S-trees. In the past, searching for similar pictures was based not on similar content (e.g. shapes, colour) but on the picture name. Methods exist for the same purpose, but there is still room for the development of more efficient ones. The proposed image retrieval system finds similar images, in our case in the medical area - in mammography - and creates a list of similar images (cases). The created list is used for assessing the nature of the finding - whether it is malignant or benign. The suggested method is compared to a method using the Normalized Compression Distance (NCD) instead of fuzzy signatures and the fuzzy S-tree. The NCD method is useful for creating the list of similar cases for malignancy assessment, but it is not able to capture the area of interest in the image. The proposed method is going to be added to a complex decision support system to help determine appropriate healthcare according to the experience of similar previous cases.
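    The NCD baseline the article compares against is easy to sketch with a real compressor such as zlib:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    approximating Kolmogorov complexity with a real compressor."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

    Applied to serialized image data, a small NCD indicates that one image compresses well given the other, i.e. they share structure; as the abstract notes, this measures whole-image similarity and cannot localize an area of interest.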

  9. Phase-Retrieved Tomography enables imaging of a Tumor Spheroid in Mesoscopy Regime

    CERN Document Server

    Ancora, Daniele; Giasafaki, Georgia; Psycharakis, Stylianos E; Liapis, Evangelos; Zacharakis, Giannis

    2016-01-01

    Optical tomographic imaging of biological specimens bases its reliability on the combination of accurate experimental measurements and advanced computational techniques. In general, due to high scattering and absorption in most tissues, multi-view geometries are required to reduce diffuse halo and blurring in the reconstructions. Scanning processes are used to acquire the data, but they inevitably introduce perturbations, negating the assumption of aligned measurements. Here we propose an innovative, registration-free imaging protocol implemented to image a human tumor spheroid in the mesoscopic regime. The technique relies on the calculation of the autocorrelation sinogram and the object autocorrelation, finalizing the tomographic reconstruction via a three-dimensional Gerchberg-Saxton algorithm that retrieves the missing phase information. Our method is conceptually simple and focuses on single-image acquisition, regardless of the specimen position in the camera plane. We demonstrate increased deep resolution ability…
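    A toy two-dimensional Gerchberg-Saxton loop gives the flavor of the phase retrieval involved; the paper uses a three-dimensional variant driven by autocorrelation data, so this is only a conceptual sketch:

```python
import numpy as np

def gerchberg_saxton(fourier_mag, n_iter=300, seed=0):
    """Classic Gerchberg-Saxton / error-reduction phase retrieval:
    alternate between enforcing the known Fourier magnitude and a
    non-negativity constraint in the object domain."""
    rng = np.random.default_rng(seed)
    field = fourier_mag * np.exp(2j * np.pi * rng.random(fourier_mag.shape))
    for _ in range(n_iter):
        obj = np.fft.ifft2(field)
        obj = np.maximum(obj.real, 0.0)                      # object constraint
        field = np.fft.fft2(obj)
        field = fourier_mag * np.exp(1j * np.angle(field))   # magnitude constraint
    return np.maximum(np.fft.ifft2(field).real, 0.0)
```

    Convergence is only to a solution consistent with the magnitude (translations and twin images remain ambiguous), which is why the paper's autocorrelation formulation is registration-free.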

  10. Mining biomedical images towards valuable information retrieval in biomedical and life sciences

    Science.gov (United States)

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches, and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the production and publication of heterogeneous biomedical images, creating a need for bioimaging platforms that extract and analyze the features of text and content in biomedical images, to the benefit of effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies, produced results, achieved accuracies, and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction, and processing of complex natural-language queries. PMID:27538578

  11. Toward content-based image retrieval with deep convolutional neural networks

    Science.gov (United States)

    Sklan, Judah E. S.; Plassard, Andrew J.; Fabbri, Daniel; Landman, Bennett A.

    2015-03-01

    Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and, eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep convolutional neural networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing the dimensionality of an input scaled to 128x128 to an output encoded layer of 4x384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques.

  12. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    Science.gov (United States)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers cause the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - to no longer scale, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers much of the basic visualization and analysis functionality commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open several images in their web browser; adjust the intensity min/max cutoff, its scaling function, and the zoom level; apply color maps; view position and FITS header information; execute commonly used data reduction codes on the corresponding FITS data using the FRIAA framework; and overlay tiles for source catalog objects.

  13. Fresnel domain nonlinear optical image encryption scheme based on Gerchberg-Saxton phase-retrieval algorithm.

    Science.gov (United States)

    Rajput, Sudheesh K; Nishchal, Naveen K

    2014-01-20

    We propose a novel nonlinear image-encryption scheme based on a Gerchberg-Saxton (G-S) phase-retrieval algorithm in the Fresnel transform domain. The decryption process can be performed using a conventional double random phase encoding (DRPE) architecture. The encryption is realized by applying the G-S phase-retrieval algorithm twice, which generates two asymmetric keys from intermediate phases. The asymmetric keys are generated in such a way that decryption is possible optically with a conventional DRPE method. Due to the asymmetric nature of the keys, the proposed encryption process is nonlinear and offers enhanced security. Cryptanalysis has been carried out, proving the robustness of the proposed scheme against known-plaintext, chosen-plaintext, and special attacks. A simple optical setup for decryption is also suggested. Results of computer simulation support the idea of the proposed cryptosystem.

  14. Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Weixun Zhou

    2017-05-01

    Full Text Available Learning powerful feature representations for image retrieval has always been a challenging task in the field of remote sensing. Traditional methods focus on extracting low-level hand-crafted features which are not only time-consuming but also tend to achieve unsatisfactory performance due to the complexity of remote sensing images. In this paper, we investigate how to extract deep feature representations based on convolutional neural networks (CNNs for high-resolution remote sensing image retrieval (HRRSIR. To this end, several effective schemes are proposed to generate powerful feature representations for HRRSIR. In the first scheme, a CNN pre-trained on a different problem is treated as a feature extractor since there are no sufficiently-sized remote sensing datasets to train a CNN from scratch. In the second scheme, we investigate learning features that are specific to our problem by first fine-tuning the pre-trained CNN on a remote sensing dataset and then proposing a novel CNN architecture based on convolutional layers and a three-layer perceptron. The novel CNN has fewer parameters than the pre-trained and fine-tuned CNNs and can learn low dimensional features from limited labelled images. The schemes are evaluated on several challenging, publicly available datasets. The results indicate that the proposed schemes, particularly the novel CNN, achieve state-of-the-art performance.

  15. Constraining regional energy and hydrologic modeling of the cryosphere with quantitative retrievals from imaging spectroscopy

    Science.gov (United States)

    Skiles, M.; Painter, T. H.

    2016-12-01

    The timing and magnitude of snowmelt is determined by net solar radiation in most snow covered environments. Despite this well-established understanding of snow energy balance, measurements of snow reflectance and albedo are sparse or nonexistent. This is particularly relevant in mountainous regions, where snow accumulation and melt patterns influence both climate and hydrology. The Airborne Snow Observatory, a coupled lidar and imaging spectrometer platform, has been monitoring time series of snow water equivalent and snow reflectance over entire mountain basins since 2013. The ASO imaging spectrometer products build upon a legacy of algorithms for retrieving snow properties from the unique spectral signature of snow. Here, we present the full time series (2013-2016) of snow properties, including snow albedo, grain size, and impurity radiative forcing, across the Tuolumne River Basin, Sierra Nevada Mountains, CA. Additionally, we show that incorporating snow albedo into a snow energy balance model improves both the prediction of snow water equivalent and snowmelt timing. These results demonstrate the hydroclimatic modeling that is enabled by the quantitative retrievals uniquely available from imaging spectroscopy. As such, they have important implications for monitoring global snow and ice physical properties and regional and global climate modeling with spaceborne imaging spectroscopy, for example, NASA's planned HYSPIRI mission.

  16. A New Laws Filtered Local Binary Pattern Texture Descriptor for Ultrasound Kidney Images Retrieval

    Directory of Open Access Journals (Sweden)

    Chelladurai CALLINS CHRISTIYANA

    2014-09-01

    Full Text Available Content-Based Image Retrieval (CBIR) is an indispensable technique in medical applications, and feature extraction is one of its important tasks. A new feature extraction procedure called Laws Filtered Local Binary Pattern (LFLBP) for extracting texture features from ultrasound kidney images is proposed in this manuscript. The new texture feature combines the strengths of Laws masks and the Local Binary Pattern (LBP). The Laws masks enhance the discrimination power of LBP by capturing high-energy texture points in an image and efficiently characterizing the textures. The new descriptor is intended to exploit local information effectively without either increasing the number of encoding levels or using adjacent-neighbourhood information. The performance of the new descriptor is compared with LBP and the Local Ternary Pattern (LTP). The experimental results show that the ultrasound kidney image retrieval system with the new descriptor achieves a good average precision of 77%, compared to 74% for LBP and 74.3% for LTP.
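    A minimal sketch of the two ingredients, one Laws mask followed by basic LBP, is shown below; the full descriptor uses a bank of Laws masks and combines the results, so this is illustrative only:

```python
import numpy as np

def laws_l5e5(image):
    """Filter with a single Laws mask (L5 level x E5 edge), highlighting
    edge-energy texture; the full method applies a bank of such masks."""
    L5 = np.array([1, 4, 6, 4, 1], float)
    E5 = np.array([-1, -2, 0, 2, 1], float)
    kernel = np.outer(L5, E5)
    h, w = image.shape
    out = np.zeros((h - 4, w - 4))
    for i in range(h - 4):
        for j in range(w - 4):
            out[i, j] = np.sum(image[i:i + 5, j:j + 5] * kernel)
    return out

def lbp_histogram(image):
    """Basic 8-neighbour Local Binary Pattern codes and their
    normalized 256-bin histogram."""
    c = image[1:-1, 1:-1]
    neighbours = [image[:-2, :-2], image[:-2, 1:-1], image[:-2, 2:],
                  image[1:-1, 2:], image[2:, 2:], image[2:, 1:-1],
                  image[2:, :-2], image[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=int)
    for bit, nb in enumerate(neighbours):
        codes |= (nb >= c).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()
```

    LFLBP's intuition is then to compute LBP histograms on Laws-filtered energy images rather than on raw intensities.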

  17. Fast retrieval of calcification from sequential intravascular ultrasound gray-scale images.

    Science.gov (United States)

    Zheng, Sun; Bing-Ru, Liu

    2016-08-12

    Intravascular ultrasound (IVUS)-based tissue characterization is invaluable for the computer-aided diagnosis and interventional treatment of cardiac vessel diseases. Although the analysis of raw backscattered signals allows more accurate plaque characterization than gray-scale images, its applications are limited by its reliance on electrocardiogram-gated acquisition: images acquired by IVUS devices that do not allow the acquisition of raw signals cannot be characterized. To address these limitations, we developed a method for fast frame-by-frame retrieval and location of calcification according to the jump features of radial gray-level variation curves from sequential IVUS gray-scale images. The proposed method consists of three main steps: (1) radial gray-level variation curves are extracted from each filtered polar view, (2) sequential images are preliminarily queried according to the maximal slopes of the radial gray-level variation curves, and (3) key frames that include calcification are selected by checking the gray-level features of successive pixel columns in the preliminary results. Experimental results with clinically acquired in vivo data sets indicate that key frames including calcification can be retrieved simply, efficiently, and accurately. Recognition results correlate well with manual characterization by experienced physicians and with virtual histology.
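    The radial gray-level variation curve and its maximal slope, used for the preliminary query, can be sketched as follows (assuming a polar view with rows as angles and columns as radii):

```python
import numpy as np

def radial_gray_curve(polar_image):
    """Mean gray level at each radius of a polar-view IVUS frame
    (rows = angles, columns = radii)."""
    return polar_image.mean(axis=0)

def max_radial_slope(polar_image):
    """Maximal jump of the radial gray-level variation curve, used to
    pre-select frames likely to contain bright calcification."""
    curve = radial_gray_curve(polar_image)
    return float(np.max(np.abs(np.diff(curve))))
```

    Frames whose maximal slope exceeds a threshold would proceed to the per-column key-frame check described in step (3).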

  18. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting

    KAUST Repository

    Wang, Jingyan

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights. © 2011 IEEE.
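    For contrast with the proposed QP assignment, the soft-assignment baseline mentioned above can be sketched as a Gaussian-weighted assignment of a descriptor to all visual words:

```python
import numpy as np

def soft_assignment(descriptor, vocabulary, sigma=1.0):
    """Soft-assign a local descriptor to every visual word with
    Gaussian weights (one of the baselines compared against);
    the returned weights sum to 1. `sigma` is a tuning choice."""
    d2 = np.sum((vocabulary - descriptor) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum()
```

    The QP assignment differs in that it solves for reconstruction weights over neighboring visual words rather than using a fixed kernel.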

  19. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser.

    Science.gov (United States)

    Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R

    2012-01-01

    Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".

  20. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser

    Directory of Open Access Journals (Sweden)

    Jonas S Almeida

    2012-01-01

    Full Text Available Background: Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. Materials and Methods: ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Results: Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. Conclusions: The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".

  1. Using an image-extended relational database to support content-based image retrieval in a PACS.

    Science.gov (United States)

    Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M

    2005-12-01

    This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. The images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed to answer similarity queries efficiently by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. Currently, the implemented system works on features based on the color distribution of the images, through normalized histograms as well as metric histograms. Metric histograms are invariant to scale, translation, and rotation of images, and also to brightness transformations. The cbPACS is prepared to integrate new image features based on texture and on the shape of the main objects in the image.
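    The normalized-histogram feature and an associated distance function can be sketched as follows; the L1 metric is an illustrative choice, not necessarily the distance cbPACS registers in the database:

```python
import numpy as np

def normalized_histogram(image, bins=16):
    """Normalized gray-level histogram, a color-distribution feature of
    the kind cbPACS stores for similarity queries (pixel values in [0, 1])."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def l1_distance(h1, h2):
    """Histogram distance usable as the database-side distance function
    for range and k-nearest-neighbor queries."""
    return float(np.sum(np.abs(h1 - h2)))
```

    In the described architecture, such a distance function would be declared inside the relational manager and invoked by the extended-SQL interpreter when answering similarity queries.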

  2. Color image retrieval: from low-level representation to high-level concept.

    Science.gov (United States)

    Larabi, M-C; Richard, N; Fernandez-Maloigne, C

    2007-01-20

    This work takes place within the framework of color image retrieval in specialized databases. We propose a CBIR scheme that uses low-level feature descriptions within a high-level concept framework through knowledge management. Starting from representations of the three most frequently used features, i.e., color, texture and shape, cooperation techniques are proposed in order to incorporate expert knowledge into the combination process. Two types of cooperation have been defined: the first without categorization and the second with categorization. The second approach allows selecting the query category in order to simplify queries in large image databases. Finally, in order to avoid the standard relevance feedback stage, a competition technique has been proposed that allows unsupervised refinement of the submitted query. The proposed techniques have demonstrated their performance and robustness and are generalizable to all types of image databases.

  3. A fast image retrieval method based on SVM and imbalanced samples in filtering multimedia message spam

    Science.gov (United States)

    Chen, Zhang; Peng, Zhenming; Peng, Lingbing; Liao, Dongyi; He, Xin

    2011-11-01

    With the rapid development of the Multimedia Messaging Service (MMS), filtering Multimedia Message (MM) spam effectively in real time has become an urgent task. Since most MMs contain images or videos, this paper presents an image-retrieval-based method for filtering MM spam. The detection method combines skin-color detection, texture detection, and face detection, and the classifier for this imbalanced problem is a very fast multi-classifier combining a Support Vector Machine (SVM) with a unilateral binary decision tree. Experiments on 3 test sets show that the proposed method is effective, with an interception rate of up to 60% and an average detection time of less than 1 second per image.
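    As a rough sketch of the skin-color detection stage only (the paper's actual detector, thresholds, and feature set are not specified here), a common approach thresholds the Cb/Cr chrominance channels; the BT.601 conversion and the widely cited skin range below are assumptions:

    ```python
    import numpy as np

    def rgb_to_ycbcr(rgb):
        """Convert an H x W x 3 float RGB image (0-255) to YCbCr (BT.601)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return np.stack([y, cb, cr], axis=-1)

    def skin_ratio(rgb):
        """Fraction of pixels falling in a commonly used YCbCr skin range."""
        ycbcr = rgb_to_ycbcr(rgb.astype(float))
        cb, cr = ycbcr[..., 1], ycbcr[..., 2]
        mask = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
        return mask.mean()

    # toy inputs: a uniform flesh-tone patch vs. a uniform blue patch
    skin_like = np.full((8, 8, 3), (220, 160, 130), dtype=np.uint8)
    blue = np.full((8, 8, 3), (0, 0, 255), dtype=np.uint8)
    ```

    A high skin-pixel ratio would then feed into the SVM/decision-tree classifier alongside the texture and face features.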

  4. Interactive content-based image retrieval (CBIR) computer-aided diagnosis (CADx) system for ultrasound breast masses using relevance feedback

    Science.gov (United States)

    Cho, Hyun-chong; Hadjiiski, Lubomir; Sahiner, Berkman; Chan, Heang-Ping; Paramagul, Chintana; Helvie, Mark; Nees, Alexis V.

    2012-03-01

    We designed a Content-Based Image Retrieval (CBIR) Computer-Aided Diagnosis (CADx) system to assist radiologists in characterizing masses on ultrasound images. The CADx system retrieves masses that are similar to a query mass from a reference library, based on computer-extracted features that describe the texture, width-to-height ratio, and posterior shadowing of a mass. Retrieval is performed with the k-nearest neighbor (k-NN) method, using the Euclidean distance similarity measure and the Rocchio relevance feedback (RRF) algorithm. In this study, we evaluated the similarity between the query and the retrieved masses with relevance feedback using our interactive CBIR CADx system. The similarity assessment and feedback were provided by experienced radiologists' visual judgment. For training the RRF parameters, the similarities of 1891 image pairs obtained from 62 masses were rated by 3 MQSA radiologists using a 9-point scale (9 = most similar). A leave-one-out method was used in training. For each query mass, the 5 most similar masses were retrieved from the reference library using the radiologists' similarity ratings, which were then used by RRF to retrieve another 5 masses for the same query. The best RRF parameters were chosen based on three simulated observer experiments, each of which used one radiologist's ratings for retrieval and relevance feedback. For testing, 100 independent query masses on 100 images and 121 reference masses on 230 images were collected. Three radiologists rated the similarity between the query and the computer-retrieved masses. Average similarity ratings without and with RRF were 5.39 and 5.64 on the training set and 5.78 and 6.02 on the test set, respectively. The average Az values without and with RRF were 0.86+/-0.03 and 0.87+/-0.03 on the training set and 0.91+/-0.03 and 0.90+/-0.03 on the test set, respectively. This study demonstrated that RRF improved the similarity of the retrieved masses.
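    A minimal sketch of one Rocchio feedback round, assuming the textbook update rule with illustrative alpha/beta/gamma weights (the study's trained RRF parameters are not given here):

    ```python
    import numpy as np

    def rocchio_update(query, relevant, nonrelevant,
                       alpha=1.0, beta=0.75, gamma=0.25):
        """One Rocchio round: move the query toward the relevant feature
        vectors and away from the non-relevant ones."""
        q = alpha * query
        if len(relevant):
            q = q + beta * np.mean(relevant, axis=0)
        if len(nonrelevant):
            q = q - gamma * np.mean(nonrelevant, axis=0)
        return q

    def knn_retrieve(query, library, k=5):
        """k-NN retrieval under Euclidean distance."""
        d = np.linalg.norm(library - query, axis=1)
        return np.argsort(d)[:k]

    rng = np.random.default_rng(1)
    library = rng.normal(size=(50, 4))   # reference masses' feature vectors
    query = rng.normal(size=4)
    first = knn_retrieve(query, library)
    # suppose a reader marks the first two retrieved masses as similar
    refined = rocchio_update(query, library[first[:2]], library[first[2:]])
    second = knn_retrieve(refined, library)
    ```

    The second retrieval round is then biased toward the neighborhood of the masses the radiologist judged similar.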

  5. From Grass Roots to Corporate Image--The Maturation of the Web.

    Science.gov (United States)

    Quinn, Christine A.

    1995-01-01

    The experience of Stanford University (California) in developing the institutional image it portrayed on the World Wide Web is discussed. Principles and practical suggestions for developing such an image through layout and content are offered, including a list of things not to do on a Web page. (MSE)

  6. Aerosol retrieval for second global imager on GCOM-C1

    Science.gov (United States)

    Sano, Itaru; Mukai, Sonoyo; Nakata, Makiko

    2017-04-01

    The Second Global Imager (SGLI) on the Global Change Observation Mission - Climate satellite (GCOM-C1) will be launched in December 2017 as part of the Japanese GCOM project; GCOM-W was already launched in 2012. SGLI is an imager which measures Earth's reflectances from the near ultraviolet (NUV) to the thermal infrared (TIR) for estimation of physical parameters of the atmosphere, land, and ocean. Its unique features are as follows: 1) the high-resolution imager provides 250 m resolution data from NUV to near-infrared (NIR) wavelengths; 2) polarization information (I, Q, and U Stokes components) is available at two wavelengths (red and NIR), with 45-degree forward or backward tilting along the satellite track direction. Note that the resolution of the polarization channels is 1 km x 1 km at nadir. This work introduces the current status of the aerosol retrieval algorithm for SGLI. Here we use the two sets of Stokes components (Q and U) at red and NIR wavelengths for polarization information, as well as the total reflectance in the blue channel, for aerosol retrieval. It is still difficult to retrieve the full variety of aerosol properties simultaneously. We propose an appropriate aerosol size distribution model based on compiled results of worldwide NASA/AERONET observations. The proposed aerosol size distribution reduces the number of unknown parameters. The complex refractive index of aerosols shows weak wavelength dependence in the visible range, and hence it is assumed to range from 1.4 for transparent particles to 1.60 - 0.02i for absorbing ones. Furthermore, the reflectance of the land surface must be taken into account. As a result, the aerosol properties obtained from the satellite data, radiative simulations, and ground measurements are investigated.

  7. INFLUENCE OF THE VIEWING GEOMETRY WITHIN HYPERSPECTRAL IMAGES RETRIEVED FROM UAV SNAPSHOT CAMERAS

    Directory of Open Access Journals (Sweden)

    H. Aasen

    2016-06-01

    Full Text Available Hyperspectral data has great potential for vegetation parameter retrieval. However, due to angular effects resulting from different sun-surface-sensor geometries, objects may appear differently depending on their position within the field of view of a sensor. Recently, lightweight snapshot cameras have been introduced, which capture hyperspectral information in two spatial and one spectral dimension and can be mounted on unmanned aerial vehicles. This study investigates the influence of the different viewing geometries within an image on the apparent hyperspectral reflectance retrieved by these sensors. Additionally, it evaluates how hyperspectral vegetation indices like the NDVI are affected by the angular effects within a single image, and whether the viewing geometry influences the apparent heterogeneity within an area of interest. The study is carried out for a barley canopy at booting stage. The results show significant influences of the position of the area of interest within the image. The red region of the spectrum is more influenced by position than the near infrared. The ability of the NDVI to compensate for these effects was limited to capturing positions close to nadir. The apparent heterogeneity of the area of interest is highest close to nadir.
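    The NDVI referred to above is the standard normalized difference of red and near-infrared reflectance, computed per pixel:

    ```python
    import numpy as np

    def ndvi(red, nir):
        """Normalized Difference Vegetation Index, per pixel."""
        red = np.asarray(red, dtype=float)
        nir = np.asarray(nir, dtype=float)
        return (nir - red) / (nir + red)

    # a healthy canopy reflects strongly in the NIR and little in the red,
    # so the top row below mimics vegetation and the bottom row bare soil
    red = np.array([[0.05, 0.05], [0.30, 0.30]])
    nir = np.array([[0.45, 0.45], [0.35, 0.35]])
    vi = ndvi(red, nir)
    ```

    Because NDVI is a band ratio, angular effects that scale both bands similarly cancel out, which is why it can partially, but per the study only near nadir, compensate for viewing geometry.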

  8. Fast DCNN based on FWT, intelligent dropout and layer skipping for image retrieval.

    Science.gov (United States)

    ElAdel, Asma; Zaied, Mourad; Amar, Chokri Ben

    2017-11-01

    Deep Convolutional Neural Networks (DCNNs) are a powerful tool for object and image classification and retrieval. However, the training stage of such networks is highly demanding in terms of storage space and time, and optimization remains a challenging subject. In this paper, we propose a fast DCNN based on the Fast Wavelet Transform (FWT), intelligent dropout, and layer skipping. The proposed approach improves both image retrieval accuracy and search time, thanks to three key advantages. First, features are computed rapidly using the FWT. Second, the proposed intelligent dropout method selects units based on their efficiency rather than randomly. Third, it is possible to classify an image using the efficient units of earlier layer(s), skipping all subsequent hidden layers and going directly to the output layer. Our experiments were performed on the CIFAR-10 and MNIST datasets, and the obtained results are very promising. Copyright © 2017 Elsevier Ltd. All rights reserved.
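    As a sketch of the wavelet step only, one level of a 2D Haar fast wavelet transform can be computed with simple averages and differences; the paper's actual wavelet family and its integration into the network are not specified here:

    ```python
    import numpy as np

    def haar_fwt2(x):
        """One level of the 2D Haar fast wavelet transform.
        Returns the approximation (LL) and detail (LH, HL, HH) sub-bands."""
        # transform along rows: pairwise averages (low-pass) and differences
        lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
        hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
        # transform along columns
        ll = (lo[0::2] + lo[1::2]) / 2.0
        lh = (lo[0::2] - lo[1::2]) / 2.0
        hl = (hi[0::2] + hi[1::2]) / 2.0
        hh = (hi[0::2] - hi[1::2]) / 2.0
        return ll, lh, hl, hh

    img = np.arange(16, dtype=float).reshape(4, 4)
    ll, lh, hl, hh = haar_fwt2(img)
    ```

    Each level costs O(n) in the number of pixels, which is what makes FWT-based feature computation fast.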

  9. Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval

    Science.gov (United States)

    Zhou, Weixun; Newsam, Shawn; Li, Congmin; Shao, Zhenfeng

    2017-05-01

    Learning powerful feature representations for image retrieval has always been a challenging task in the field of remote sensing. Traditional methods focus on extracting low-level hand-crafted features which are not only time-consuming but also tend to achieve unsatisfactory performance due to the content complexity of remote sensing images. In this paper, we investigate how to extract deep feature representations based on convolutional neural networks (CNN) for high-resolution remote sensing image retrieval (HRRSIR). To this end, two effective schemes are proposed to generate powerful feature representations for HRRSIR. In the first scheme, the deep features are extracted from the fully-connected and convolutional layers of the pre-trained CNN models, respectively; in the second scheme, we propose a novel CNN architecture based on conventional convolution layers and a three-layer perceptron. The novel CNN model is then trained on a large remote sensing dataset to learn low dimensional features. The two schemes are evaluated on several public and challenging datasets, and the results indicate that the proposed schemes and in particular the novel CNN are able to achieve state-of-the-art performance.
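    A toy numpy sketch of the second scheme's idea: pool convolutional feature maps, project them through a small perceptron into a low-dimensional descriptor, and rank by cosine similarity. The random weights and layer sizes below are placeholders for the learned parameters of the paper's model:

    ```python
    import numpy as np

    def pool_features(conv_maps):
        """Global average pooling: C x H x W feature maps -> C-dim vector."""
        return conv_maps.mean(axis=(1, 2))

    def perceptron_project(feat, weights):
        """Project pooled features through a three-layer perceptron to a
        low-dimensional descriptor (hidden layers use ReLU)."""
        for w in weights[:-1]:
            feat = np.maximum(0.0, feat @ w)
        return feat @ weights[-1]          # linear output layer

    def cosine_rank(query, db):
        """Indices of database descriptors sorted by cosine similarity."""
        q = query / np.linalg.norm(query)
        d = db / np.linalg.norm(db, axis=1, keepdims=True)
        return np.argsort(-(d @ q))

    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(64, 32)),
               rng.normal(size=(32, 16)),
               rng.normal(size=(16, 8))]            # stand-ins for learned weights
    db_maps = rng.random(size=(20, 64, 7, 7))       # conv maps of 20 images
    db = np.array([perceptron_project(pool_features(m), weights)
                   for m in db_maps])
    query = perceptron_project(pool_features(db_maps[3]), weights)
    ranking = cosine_rank(query, db)
    ```

    The 8-dimensional descriptor is what makes large-scale retrieval cheap; in the paper this projection is trained on a large remote sensing dataset rather than randomly initialized.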

  10. Green's function retrieval and passive imaging from correlations of wideband thermal radiations.

    Science.gov (United States)

    Davy, Matthieu; Fink, Mathias; de Rosny, Julien

    2013-05-17

    We present an experimental demonstration of electromagnetic Green's function retrieval from thermal radiations in anechoic and reverberant cavities. The Green's function between two antennas is estimated by cross correlating milliseconds of decimeter noise. We show that the temperature dependence of the cross-correlation amplitude is well predicted by the blackbody theory in the Rayleigh-Jeans limit. The effect of a nonuniform temperature distribution on the cross-correlation time symmetry is also explored. Finally, we open a new way to image scatterers using ambient thermal radiations.
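    A toy numerical illustration of the principle: cross-correlating ambient noise recorded at two points recovers the propagation delay between them, i.e. the arrival of the Green's function. The single-path, circular-shift propagation model below is purely illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1000                        # sampling rate (Hz), illustrative
    delay = 25                       # propagation delay between antennas (samples)
    noise = rng.normal(size=50000)   # wideband thermal-like noise at antenna A
    sig_a = noise
    sig_b = 0.8 * np.roll(noise, delay)   # attenuated, delayed copy at antenna B

    def xcorr_peak(a, b, max_lag=100):
        """Lag (in samples) at which the cross-correlation of a and b peaks."""
        lags = np.arange(-max_lag, max_lag + 1)
        c = np.array([np.dot(a, np.roll(b, -lag)) for lag in lags])
        return lags[np.argmax(c)]

    peak_lag = xcorr_peak(sig_a, sig_b)
    travel_time = peak_lag / fs
    ```

    In the experiment the averaging over milliseconds of noise plays the role of the long record here, and the retrieved arrival time encodes the medium's Green's function between the two antennas.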

  11. Synergistic Instance-Level Subspace Alignment for Fine-Grained Sketch-Based Image Retrieval.

    Science.gov (United States)

    Li, Ke; Pang, Kaiyue; Song, Yi-Zhe; Hospedales, Timothy M; Xiang, Tao; Zhang, Honggang

    2017-08-25

    We study the problem of fine-grained sketch-based image retrieval. By performing instance-level (rather than category-level) retrieval, it embodies a timely and practical application, particularly given the ubiquitous availability of touchscreens. Three factors contribute to the challenging nature of the problem: (i) free-hand sketches are inherently abstract and iconic, making visual comparisons with photos difficult; (ii) sketches and photos lie in two different visual domains, i.e. black and white lines vs. color pixels; and (iii) fine-grained distinctions are especially challenging when executed across domains and abstraction levels. To address these challenges, we propose to bridge the image-sketch gap both at the high level, via parts and attributes, and at the low level, via a new domain alignment method. More specifically, (i) we contribute a dataset with 304 photos and 912 sketches, where each sketch and image is annotated with its semantic parts and associated part-level attributes. With the help of this dataset, we investigate (ii) how strongly-supervised deformable part-based models can be learned that subsequently enable automatic detection of part-level attributes and provide pose-aligned sketch-image comparisons. To reduce the sketch-image gap when comparing low-level features, we also (iii) propose a novel method for instance-level domain alignment that exploits both subspace and instance-level cues to better align the domains. Finally, (iv) these are combined in a matching framework integrating aligned low-level features, mid-level geometric structure and high-level semantic attributes. Extensive experiments conducted on our new dataset demonstrate the effectiveness of the proposed method.

  12. Retrieving the unretrievable in electronic imaging systems: emotions, themes, and stories

    Science.gov (United States)

    Joergensen, Corinne

    1999-05-01

    New paradigms such as 'affective computing' and user-based research are extending the realm of facets traditionally addressed in IR systems. This paper builds on previous research reported to the electronic imaging community concerning the need to provide access to more abstract attributes of images than those currently amenable to a variety of content-based and text-based indexing techniques. Empirical research suggests that, for visual materials, in addition to standard bibliographic data and broad subject, and in addition to such visually perceptual attributes as color, texture, shape, and position or focal point, additional access points such as themes, abstract concepts, emotions, stories, and 'people-related' information such as social status would be useful in image retrieval. More recent research demonstrates that similar results are also obtained with 'fine arts' images, for which access to these types of attributes is generally not provided. Current efforts to match the image attributes revealed in empirical research with those addressed in current text-based and content-based indexing systems are discussed, as well as the need for new representations of image attributes and for collaboration among diverse communities of researchers.

  13. Skin parameter map retrieval from a dedicated multispectral imaging system applied to dermatology/cosmetology.

    Science.gov (United States)

    Jolivot, Romuald; Benezeth, Yannick; Marzani, Franck

    2013-01-01

    In vivo quantitative assessment of skin lesions is an important step in the evaluation of skin condition, and an objective measurement device can serve as a valuable tool for skin analysis. We propose an explorative new multispectral camera specifically developed for dermatology/cosmetology applications. The multispectral imaging system provides images of skin reflectance at different wavebands covering the visible and near-infrared domains. It is coupled with a neural-network-based algorithm for the reconstruction of a reflectance cube of cutaneous data. This cube contains only the skin's optical reflectance spectrum in each pixel of the bidimensional spatial information. The reflectance cube is analyzed by an algorithm based on a Kubelka-Munk model combined with an evolutionary algorithm. The technique allows quantitative measurement of cutaneous tissue and retrieves five skin parameter maps: melanin concentration, epidermis and dermis thickness, hemoglobin concentration, and oxygenated hemoglobin. The results retrieved by the algorithm on healthy participants are in good accordance with data from the literature. The usefulness of the developed technique was demonstrated in two experiments: a clinical study of vitiligo and melasma skin lesions, and a skin oxygenation experiment (induced ischemia) with healthy participants, in which normal tissue was recorded both at rest and under temporarily induced ischemia.
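    The Kubelka-Munk model at the heart of the analysis relates a layer's reflectance to its absorption (K) and scattering (S) coefficients. A minimal sketch for an optically thick layer, using the standard closed form (the paper's full layered skin model and evolutionary fitting are not reproduced here):

    ```python
    import numpy as np

    def km_reflectance(k, s):
        """Kubelka-Munk reflectance of an optically thick layer:
        R_inf = 1 + K/S - sqrt((K/S)**2 + 2*K/S),
        with K the absorption and S the scattering coefficient."""
        ratio = np.asarray(k, dtype=float) / np.asarray(s, dtype=float)
        return 1.0 + ratio - np.sqrt(ratio ** 2 + 2.0 * ratio)

    # more absorber (e.g., higher melanin concentration) -> lower reflectance
    r_low = km_reflectance(0.1, 10.0)    # weakly absorbing tissue
    r_high = km_reflectance(1.0, 10.0)   # strongly absorbing tissue
    ```

    Inverting this relationship per wavelength, for chromophores such as melanin and hemoglobin, is what lets the evolutionary algorithm turn a measured reflectance spectrum into concentration and thickness maps.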

  14. Case-based lung image categorization and retrieval for interstitial lung diseases: clinical workflows.

    Science.gov (United States)

    Depeursinge, Adrien; Vargas, Alejandro; Gaillard, Frédéric; Platon, Alexandra; Geissbuhler, Antoine; Poletti, Pierre-Alexandre; Müller, Henning

    2012-01-01

    Clinical workflows and user interfaces of image-based computer-aided diagnosis (CAD) for interstitial lung diseases in high-resolution computed tomography are introduced and discussed. Three use cases are implemented to assist students, radiologists, and physicians in the diagnostic workup of interstitial lung diseases. In a first step, the proposed system shows a three-dimensional map of categorized lung tissue patterns, with quantification of the diseases based on texture analysis of the lung parenchyma. Then, based on the proportions of abnormal and normal lung tissue as well as clinical data of the patients, retrieval of similar cases is enabled using a multimodal distance aggregating content-based image retrieval (CBIR) and text-based information search. The global system leads to a hybrid detection-CBIR-based CAD, in which detection-based and CBIR-based CAD prove complementary on both the user's side and the algorithmic side. The proposed approach is in accordance with the classical workflow of clinicians searching for similar cases in textbooks and personal collections. The developed system enables objective and customizable inter-case similarity assessment, and the performance measures obtained with a leave-one-patient-out cross-validation (LOPO CV) are representative of clinical usage of the system.

  15. Search moves and tactics for image retrieval in the field of journalism: A pilot study

    Directory of Open Access Journals (Sweden)

    Tsai-Youn Hung

    2005-03-01

    Full Text Available People engage in multiple types of information-seeking strategies within an information-seeking episode. The objective of this pilot study is to investigate search moves and tactics made by end-users when searching for visual information. The pilot study involves 5 undergraduate students from the Department of Journalism and Media Studies at Rutgers University using the AccuNet/AP Photo Archive to retrieve specific, general, and subjective photos. Data were collected through think-aloud protocols and transaction logs. The results outline an overall picture of the five searchers’ image searching behavior in the field of journalism and show that there is a connection between the types of images searched and the patterns of search moves and tactics employed by the searchers.

  16. A multi-layered image format for the web with an adaptive layer selection algorithm

    Directory of Open Access Journals (Sweden)

    Tair Milan

    2017-01-01

    Full Text Available In this paper we present a multi-layered image format for use on the web. The format implements an algorithm for selecting the appropriate layer image depending on the image container's surroundings and size. The layer selection depends on the weighted average brightness of the underlying web page background within the bounds of the image. The proposed image format supports multiple image layers with adjoined thresholds and activation conditions. Depending on these conditions and the underlying background, each layer's visibility is set accordingly. The selection algorithm takes into account the background brightness, each layer's adjoined threshold values, and other newly introduced layer conditions.
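    A minimal sketch of the selection logic: measure the weighted average brightness of the background under the image, then activate the layer whose threshold range contains it. The layer names and threshold values below are hypothetical, and the extra per-layer conditions of the format are omitted:

    ```python
    import numpy as np

    def weighted_average_brightness(background, weights=None):
        """Weighted average brightness of the page background under the image.
        `background` is an H x W array of luminance values in [0, 255]."""
        if weights is None:
            weights = np.ones_like(background, dtype=float)
        return float((background * weights).sum() / weights.sum())

    def select_layer(layers, background):
        """Pick the first layer whose [low, high) brightness threshold range
        contains the measured background brightness."""
        b = weighted_average_brightness(background)
        for name, low, high in layers:
            if low <= b < high:
                return name
        return layers[-1][0]   # fallback: last layer

    # hypothetical layer set: a dark variant for light pages and vice versa
    layers = [("dark-on-light", 128, 256), ("light-on-dark", 0, 128)]
    light_page = np.full((10, 10), 240.0)
    dark_page = np.full((10, 10), 20.0)
    ```

    In the browser this measurement would be taken from the rendered page rather than a numpy array, but the threshold logic is the same.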

  17. Liquid water path retrieval using the lowest frequency channels of Fengyun-3C Microwave Radiation Imager (MWRI)

    Science.gov (United States)

    Tang, Fei; Zou, Xiaolei

    2017-12-01

    The Microwave Radiation Imager (MWRI) on board the Chinese Fengyun-3 (FY-3) satellites provides measurements at 10.65, 18.7, 23.8, 36.5, and 89.0 GHz, with both horizontally and vertically polarized channels. Brightness temperature measurements from satellite-based microwave imager channels with central frequencies above 19 GHz have traditionally been used to retrieve cloud liquid water path (LWP) over ocean. The results show that the lowest-frequency channels are the most appropriate for retrieving large LWP values. Therefore, a modified LWP retrieval algorithm is developed for retrieving LWP of different magnitudes, involving not only the high-frequency channels but also the lowest-frequency channels of FY-3 MWRI. The theoretical estimates of the LWP retrieval errors are between 0.11 and 0.06 mm for the 10.65- and 18.7-GHz channels and between 0.02 and 0.04 mm for the 36.5- and 89.0-GHz channels. It is also shown that the brightness temperature observations at 10.65 GHz can be utilized to better retrieve LWP greater than 3 mm in the eyewall region of Super Typhoon Neoguri (2014). The spiral structure of clouds within and around Typhoon Neoguri is well captured by combining the LWP retrievals from different frequency channels.
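    Such retrievals are commonly cast in a log-linear form in brightness temperature. The sketch below shows that functional form and the switch to the lowest-frequency channel for heavy cloud; the coefficients and the 290 K reference are illustrative assumptions, not FY-3 MWRI's actual regression values:

    ```python
    import numpy as np

    # Illustrative per-channel regression coefficients (NOT the actual values)
    COEFFS = {"36.5GHz": (8.0, -2.5), "10.65GHz": (30.0, -9.0)}

    def lwp_retrieval(tb, channel):
        """Log-linear retrieval LWP = a + b * ln(290 - TB) (mm), a form
        commonly used with microwave imager brightness temperatures over
        the radiometrically cold ocean; clamped at zero."""
        a, b = COEFFS[channel]
        return max(0.0, a + b * np.log(290.0 - tb))

    def combined_lwp(tb_36, tb_10, threshold=3.0):
        """Use the higher-frequency channel for small LWP; when it indicates
        heavy cloud, switch to 10.65 GHz, which saturates later."""
        lwp = lwp_retrieval(tb_36, "36.5GHz")
        return lwp_retrieval(tb_10, "10.65GHz") if lwp > threshold else lwp
    ```

    The key physical point survives the toy coefficients: warmer brightness temperatures over ocean mean more cloud liquid, and the low-frequency channel stays sensitive where the high-frequency channels saturate.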

  18. Towards case-based medical learning in radiological decision making using content-based image retrieval

    Directory of Open Access Journals (Sweden)

    Günther Rolf W

    2011-10-01

    Full Text Available Abstract Background Radiologists' training is based on intensive practice and can be improved with the use of diagnostic training systems. However, existing systems typically require laboriously prepared training cases and lack integration into the clinical environment with a proper learning scenario. Consequently, diagnostic training systems advancing decision-making skills are not well established in radiological education. Methods We investigated didactic concepts and appraised methods appropriate to the radiology domain, as follows: (i) adult learning theories stress the importance of work-related practice gained in a team of problem-solvers; (ii) case-based reasoning (CBR) parallels the human problem-solving process; (iii) content-based image retrieval (CBIR) can be useful for computer-aided diagnosis (CAD). To overcome the known drawbacks of existing learning systems, we developed the concept of image-based case retrieval for radiological education (IBCR-RE). The IBCR-RE diagnostic training is embedded into a didactic framework based on the Seven Jump approach, which is well established in problem-based learning (PBL). In order to provide a learning environment that is as similar as possible to radiological practice, we have analysed the radiological workflow and environment. Results We mapped the IBCR-RE diagnostic training approach into the Image Retrieval in Medical Applications (IRMA) framework, resulting in the proposed concept of the IRMAdiag training application. IRMAdiag makes use of the modular structure of IRMA and comprises (i) the IRMA core, i.e., the IRMA CBIR engine; and (ii) the IRMAcon viewer. We propose embedding IRMAdiag into hospital information technology (IT) infrastructure using the standard protocols Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7). Furthermore, we present a case description and a scheme of planned evaluations to comprehensively assess the system. Conclusions The IBCR-RE paradigm

  19. Validation of Rain Rate Retrievals for the Airborne Hurricane Imaging Radiometer (HIRAD)

    Science.gov (United States)

    Jacob, Maria Marta; Salemirad, Matin; Jones, W. Linwood; Biswas, Sayak; Cecil, Daniel

    2015-01-01

    The NASA Hurricane and Severe Storm Sentinel (HS3) mission is an aircraft field measurements program using NASA's unmanned Global Hawk aircraft system for remote sensing and in situ observations of Atlantic and Caribbean Sea hurricanes. One of the principal microwave instruments is the Hurricane Imaging Radiometer (HIRAD), which measures surface wind speeds and rain rates. For validation of the HIRAD wind speed measurement in hurricanes, there exists a comprehensive set of comparisons with the Stepped Frequency Microwave Radiometer (SFMR) and with in situ GPS dropwindsondes [1]. For rain rate measurements, however, there are only indirect correlations with rain imagery from other HS3 remote sensors (e.g., the dual-frequency Ka- & Ku-band Doppler radar, HIWRAP), which are only qualitative in nature. This paper presents results from an unplanned rain rate measurement validation opportunity that occurred in 2013, when HIRAD flew over an intense tropical squall line that was simultaneously observed by the Tampa NEXRAD meteorological radar (Fig. 1). During this experiment, the Global Hawk, flying at an altitude of 18 km, made 3 passes over the rapidly propagating thunderstorm, while the Tampa NEXRAD performed volume scans on a 5-minute interval. Using the well-documented NEXRAD Z-R relationship, 2D images of rain rate (mm/hr) were obtained at two altitudes (3 km & 6 km), which serve as surface truth for the HIRAD rain rate retrievals. A preliminary comparison of the HIRAD rain rate retrieval (image) for the first pass and the corresponding closest NEXRAD rain image is presented in Figs. 2 & 3. This paper describes the HIRAD instrument, which is a 1D synthetic-aperture thinned array radiometer (STAR) developed by NASA Marshall Space Flight Center [2]. The rain rate retrieval algorithm, developed by Amarin et al. [3], is based on the maximum likelihood estimation (MLE) technique, which compares the observed Tb's at the HIRAD operating frequencies of 4, 5, 6 and 6.6 GHz with
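    The NEXRAD surface-truth rain rates come from a Z-R relationship; assuming the standard convective form Z = 300 R^1.4 (the paper does not state which variant was used), inverting it from reflectivity in dBZ looks like:

    ```python
    def rain_rate_from_dbz(dbz, a=300.0, b=1.4):
        """Invert a Z-R relationship Z = a * R**b (Z in mm^6/m^3, R in mm/h)
        to get rain rate from radar reflectivity in dBZ. The defaults are the
        standard NEXRAD convective coefficients, assumed here."""
        z = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity factor
        return (z / a) ** (1.0 / b)
    ```

    Applying this per pixel to the NEXRAD volume scans yields the 2D rain-rate images at 3 km and 6 km used as truth for the HIRAD retrievals.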

  20. Linear information retrieval method in X-ray grating-based phase contrast imaging and its interchangeability with tomographic reconstruction

    Science.gov (United States)

    Wu, Z.; Gao, K.; Wang, Z. L.; Shao, Q. G.; Hu, R. F.; Wei, C. X.; Zan, G. B.; Wali, F.; Luo, R. H.; Zhu, P. P.; Tian, Y. C.

    2017-06-01

    In X-ray grating-based phase contrast imaging, information retrieval is necessary for quantitative research, especially for phase tomography. However, numerous repetitive processes have to be performed for tomographic reconstruction. In this paper, we report a novel information retrieval method which enables retrieving phase and absorption information by means of a linear combination of two mutually conjugate images. Thanks to the distributive law of multiplication as well as the commutative and associative laws of addition, the information retrieval can be performed after tomographic reconstruction, dramatically simplifying the information retrieval procedure. The theoretical model of this method is established in both parallel beam geometry for the Talbot interferometer and fan beam geometry for the Talbot-Lau interferometer. Numerical experiments are performed to confirm the feasibility and validity of the proposed method. In addition, we discuss its applicability in cone beam geometry and its advantages compared with other methods. Moreover, this method can also be employed in other differential phase contrast imaging methods, such as diffraction enhanced imaging, non-interferometric imaging, and edge illumination.
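    The claimed interchangeability rests on linearity: since both the retrieval (a linear combination of the two conjugate images) and tomographic reconstruction are linear operators, they commute. A toy demonstration, where a random matrix stands in for the linear reconstruction operator (e.g., filtered back-projection) and the combination weights are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_proj, n_pix = 30, 16
    R = rng.normal(size=(n_pix, n_proj))   # any linear reconstruction operator
    s_plus = rng.normal(size=n_proj)       # sinogram of one conjugate image
    s_minus = rng.normal(size=n_proj)      # sinogram of the other

    alpha, beta = 0.5, -0.5                # illustrative retrieval weights

    # retrieve first, then reconstruct ...
    recon_of_combo = R @ (alpha * s_plus + beta * s_minus)
    # ... or reconstruct each conjugate image first, then retrieve:
    combo_of_recon = alpha * (R @ s_plus) + beta * (R @ s_minus)
    ```

    Because the two orderings give identical results, the costly reconstruction can be run once per conjugate image and the cheap linear retrieval applied afterwards, which is the simplification the paper exploits.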

  1. [Review of:] Dirk Lewandowski, Web Information Retrieval: Technologien zur Informationssuche im Internet. Frankfurt am Main: DGI, 2005. 248 S. (DGI-Schrift; Informationswissenschaft 7), ISBN 3-925474-55-2

    OpenAIRE

    Oberhauser, Otto

    2005-01-01

    Book review of Dirk Lewandowski, Web Information Retrieval: Technologien zur Informationssuche im Internet. Frankfurt am Main: DGI, 2005. 248 p. (DGI-Schrift; Informationswissenschaft 7), ISBN 3-925474-55-2. A well-investigated and easy to read state-of-the-art report on web search engines.

  2. A web-based solution for 3D medical image visualization

    Science.gov (United States)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we present a web-based 3D medical image visualization solution which enables interactive large medical image data processing and visualization over the web platform. To improve the efficiency of our solution, we adopt GPU accelerated techniques to process images on the server side while rapidly transferring images to the HTML5 supported web browser on the client side. Compared to traditional local visualization solution, our solution doesn't require the users to install extra software or download the whole volume dataset from PACS server. By designing this web-based solution, it is feasible for users to access the 3D medical image visualization service wherever the internet is available.

  3. Antarctic moss stress assessment based on chlorophyll content and leaf density retrieved from imaging spectroscopy data.

    Science.gov (United States)

    Malenovský, Zbyněk; Turnbull, Johanna D; Lucieer, Arko; Robinson, Sharon A

    2015-10-01

    The health of several East Antarctic moss-beds is declining as liquid water availability is reduced due to recent environmental changes. Consequently, a noninvasive and spatially explicit method is needed to assess the vigour of mosses spread throughout rocky Antarctic landscapes. Here, we explore the possibility of using near-distance imaging spectroscopy for spatial assessment of moss-bed health. Turf chlorophyll a and b, water content and leaf density were selected as quantitative stress indicators. Reflectance of three dominant Antarctic mosses Bryum pseudotriquetrum, Ceratodon purpureus and Schistidium antarctici was measured during a drought-stress and recovery laboratory experiment and also with an imaging spectrometer outdoors on water-deficient (stressed) and well-watered (unstressed) moss test sites. The stress-indicating moss traits were derived from visible and near infrared turf reflectance using a nonlinear support vector regression. Laboratory estimates of chlorophyll content and leaf density were achieved with the lowest systematic/unsystematic root mean square errors of 38.0/235.2 nmol g(-1) DW and 0.8/1.6 leaves mm(-1) , respectively. Subsequent combination of these indicators retrieved from field hyperspectral images produced small-scale maps indicating relative moss vigour. Once applied and validated on remotely sensed airborne spectral images, this methodology could provide quantitative maps suitable for long-term monitoring of Antarctic moss-bed health. © 2015 The Authors New Phytologist © 2015 New Phytologist Trust.
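    The trait estimation step maps measured reflectance spectra to quantities such as chlorophyll content via nonlinear regression. As a stand-in for the paper's nonlinear support vector regression (related kernel machinery, but not the same estimator), a minimal RBF kernel ridge regression on a synthetic reflectance-to-chlorophyll mapping:

    ```python
    import numpy as np

    def kernel_ridge_fit(X, y, gamma=10.0, lam=1e-6):
        """RBF kernel ridge regression: solve (K + lam*I) alpha = y."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-gamma * d2)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

    def kernel_ridge_predict(X_train, alpha, X_new, gamma=10.0):
        d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2) @ alpha

    # toy mapping from a single reflectance band to chlorophyll content:
    # chlorophyll absorbs, so reflectance falls as concentration rises
    refl = np.linspace(0.05, 0.6, 30)[:, None]
    chl = 500.0 * np.exp(-4.0 * refl[:, 0])   # synthetic ground truth
    alpha = kernel_ridge_fit(refl, chl)
    fitted = kernel_ridge_predict(refl, alpha, refl)
    ```

    In the study the inputs are multi-band visible/near-infrared turf reflectances and the targets are laboratory-measured chlorophyll and leaf density, rather than this one-band synthetic curve.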

  4. Effects of Per-Pixel Variability on Uncertainties in Bathymetric Retrievals from High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Elizabeth J. Botha

    2016-05-01

    Full Text Available Increased sophistication of high-spatial-resolution multispectral satellite sensors provides enhanced bathymetric mapping capability. However, the enhancements are counteracted by per-pixel variability in sunglint, atmospheric path length, and directional effects. This case study highlights retrieval errors from images acquired at non-optimal geometrical combinations. The effects of variations in environmental noise on water surface reflectance and the accuracy of environmental variable retrievals were quantified. Two WorldView-2 satellite images were acquired within one minute of each other, with Image 1 placed in a near-optimal sun-sensor geometric configuration and Image 2 placed close to the specular point of the Bidirectional Reflectance Distribution Function (BRDF). Image 2 had higher total environmental noise due to increased surface glint and higher atmospheric path scattering. Generally, depths were under-estimated from Image 2 compared to Image 1. A partial improvement in retrieval error after glint correction of Image 2 resulted in an increase of the maximum depth to which accurate depth estimations were returned. This case study indicates that critical analysis of individual images, accounting for sun elevation and azimuth, satellite sensor pointing and geometry, and anticipated wave height and direction, is required to ensure an image is fit for purpose for aquatic data analysis.

  5. Retrieving Forest Structure Variables from Very High Resolution Satellite Images Using AN Automatic Method

    Science.gov (United States)

    Beguet, B.; Chehata, N.; Boukir, S.; Guyon, D.

    2012-07-01

    were tested using multiple linear regressions. As collinearity is a very perturbing problem in multi-linear regression, this issue is carefully addressed. Different variable subset selection methods are tested. A new stepwise method, derived from LARS (Least Angle Regression), turned out to be the most convincing, significantly improving the quality of estimation for all the forest structure variables (R² > 0.98). Validation is done through stand age retrieval along the whole site. The best estimation results are obtained from subsets combining multi-spectral and panchromatic features, with various values of window size, highlighting the potential of a multi-scale approach for retrieving forest structure variables from VHR satellite images.
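The record mentions stepwise variable-subset selection for multi-linear regression without giving details. As a purely illustrative sketch of the general idea (not the authors' LARS-derived method; all names and data here are made up), a greedy forward stepwise selection can be written as:

```python
import numpy as np

def forward_stepwise(X, y, max_vars=3):
    """Greedy forward selection: repeatedly add the feature that most
    reduces the residual sum of squares of an ordinary least-squares fit."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    for _ in range(max_vars):
        best_rss, best_j = None, None
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ coef) ** 2))
            if best_rss is None or rss < best_rss:
                best_rss, best_j = rss, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Synthetic demo: the response depends on features 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.01 * rng.normal(size=200)
print(forward_stepwise(X, y, max_vars=2))
```

On this synthetic data the procedure recovers the two informative features; real stepwise/LARS methods additionally handle correlated predictors, which is the collinearity issue the abstract refers to.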

  6. WebMARS: a multimedia search engine

    Science.gov (United States)

    Ortega-Binderberger, Michael; Mehrotra, Sharad; Chakrabarti, Kaushik; Porkaew, Kriengkrai

    1999-12-01

    The Web provides a large repository of multimedia data: text, images, etc. Most current search engines focus on textual retrieval. In this paper, we focus on an integrated textual and visual search engine for Web documents. We support query refinement, which proves useful and enables cross-media browsing in addition to regular search.

  7. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    OpenAIRE

    Filistea Naude; Chris Rensleigh; Adeline S.A. du Toit

    2010-01-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, which gave a response rate of 15.7%. The re...

  8. Data Retrieval Algorithms for Validating the Optical Transient Detector and the Lightning Imaging Sensor

    Science.gov (United States)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    2000-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival time measurements are available is also introduced. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.
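To illustrate the flavour of a linear planar retrieval from arrival times (a sketch only — the paper's ALDF algorithm also incorporates magnetic bearings, and the sensor layout and numbers below are invented): squaring the range equation c²(tᵢ − t₀)² = |r − sᵢ|² and subtracting the first sensor's equation yields a system that is linear in the unknowns (x, y, t₀).

```python
import numpy as np

c = 3.0e8  # propagation speed, m/s

def locate(sensors, t):
    """Linear planar source retrieval from >= 4 arrival times.
    Returns (x, y, t0): source position and emission time."""
    (x0, y0), tA = sensors[0], t[0]
    A, b = [], []
    for (xi, yi), ti in zip(sensors[1:], t[1:]):
        # -2(xi-x0)x - 2(yi-y0)y + 2c^2(ti-tA)t0
        #     = c^2(ti^2 - tA^2) - (xi^2 + yi^2 - x0^2 - y0^2)
        A.append([-2.0 * (xi - x0), -2.0 * (yi - y0), 2.0 * c**2 * (ti - tA)])
        b.append(c**2 * (ti**2 - tA**2) - (xi**2 + yi**2 - x0**2 - y0**2))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Simulated strike at (33 km, 21 km), emitted at t0 = 1 ms (noise-free).
sensors = [(0.0, 0.0), (60e3, 5e3), (10e3, 70e3), (80e3, 80e3)]
src, t_emit = np.array([33e3, 21e3]), 1.0e-3
times = [t_emit + np.hypot(*(src - np.array(s))) / c for s in sensors]
x_hat, y_hat, t0_hat = locate(sensors, times)
print(round(x_hat), round(y_hat))
```

With noise-free simulated data the linearized system recovers the source exactly; the paper's analysis concerns how bearing data and timing errors perturb such solutions.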

  9. Learning binary code via PCA of angle projection for image retrieval

    Science.gov (United States)

    Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong

    2018-01-01

    With the benefits of low storage costs and high query speeds, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing, learning a hash function that embeds high-dimensional features into Hamming space is a key step for accurate retrieval. The principal component analysis (PCA) technique is widely used in compact hashing methods: most of these methods adopt PCA projection functions to project the original data into several dimensions of real values, and then each of these projected dimensions is quantized into one bit by thresholding. However, the variances of the projected dimensions differ, and the real-valued projection introduces substantial quantization error. To avoid this, in this paper we propose to use a cosine similarity projection for each dimension; the angle projection preserves the original structure and is more compact with the cosine values. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
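As context, a minimal sketch of the standard PCA-hashing baseline the paper builds on: project zero-centred features onto the top-k principal directions, then quantize each projected dimension to one bit by thresholding at zero. (The paper's actual contribution — the cosine/angle projection combined with ITQ — is not reproduced here.)

```python
import numpy as np

def pca_hash(X, n_bits):
    """Binary codes from PCA projection + sign thresholding."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)              # covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    W = vecs[:, np.argsort(vals)[::-1][:n_bits]]  # top-k eigenvectors
    return (Xc @ W > 0).astype(np.uint8)   # one bit per projected dimension

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 32))             # toy 32-d features
codes = pca_hash(X, n_bits=16)
print(codes.shape, hamming(codes[0], codes[1]))
```

The varying variance across projected dimensions, which this baseline ignores, is exactly the quantization-error problem the abstract's angle projection addresses.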

  10. Stochastic Optimized Relevance Feedback Particle Swarm Optimization for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    Full Text Available One of the major challenges for CBIR is to bridge the gap between low-level features and high-level semantics according to the need of the user. To overcome this gap, relevance feedback (RF) coupled with a support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM-based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM-based RF with particle swarm optimization (PSO). The aims of this technique are to enhance the performance of SVM-based RF and also to minimize the user interaction with the system by minimizing the number of RF iterations. PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The results obtained from the experiments showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that with the PSO-SVM-RF technique a high accuracy rate is achieved within a small number of iterations.
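A toy illustration of the PSO component alone, minimizing a simple sphere function (the paper couples PSO with SVM-based relevance feedback, which is not reproduced here; all parameter values below are generic defaults, not the authors'):

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization of f over R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

best, val = pso(lambda p: float(np.sum(p**2)), dim=4)
print(f"best objective: {val:.2e}")
```

In the paper's setting, each particle would encode retrieval parameters and f would score the SVM's relevance-feedback performance instead of a sphere function.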

  11. Towards a 3-D tomographic retrieval for the air-borne limb-imager GLORIA

    Directory of Open Access Journals (Sweden)

    J. Ungermann

    2010-11-01

    Full Text Available GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) is a new remote sensing instrument essentially combining a Fourier transform infrared spectrometer with a two-dimensional (2-D) detector array in combination with a highly flexible gimbal mount. It will be housed in the belly pod of the German research aircraft HALO (High Altitude and Long Range Research Aircraft). It is unique in its combination of high spatial and state-of-the-art spectral resolution. Furthermore, the horizontal view angle with respect to the aircraft flight direction can be varied from 45° to 135°. This allows for tomographic measurements of mesoscale events for a wide variety of atmospheric constituents.

    In this paper, a tomographic retrieval scheme is presented, which is able to fully exploit the manifold radiance observations of the GLORIA limb sounder. The algorithm is optimized for massive 3-D retrievals of several hundred thousand measurements and atmospheric constituents on common hardware. The new scheme is used to explore the capabilities of GLORIA to sound the atmosphere in full 3-D with respect to the choice of the flight path and to different measurement modes of the instrument, using ozone as a test species. It is demonstrated that the achievable resolution should approach 200 m vertically and 20 km–30 km horizontally. Finally, a comparison of the 3-D inversion with conventional 1-D inversions using the assumption of a horizontally homogeneous atmosphere is performed.
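A toy sketch of the regularized linear inversion at the core of such tomographic retrievals: a forward model y = Kx maps the atmospheric state to measurements, and the state is recovered by Tikhonov-regularized least squares. (The real GLORIA processor handles a nonlinear radiative-transfer forward model and far larger state vectors; the dimensions and noise level below are invented for illustration.)

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_meas = 40, 60
K = rng.normal(size=(n_meas, n_state))            # forward model (Jacobian)
x_true = np.sin(np.linspace(0, 3 * np.pi, n_state))
y = K @ x_true + 0.01 * rng.normal(size=n_meas)   # noisy measurements

lam = 1e-2                                        # regularization strength
L = np.eye(n_state)                               # zeroth-order Tikhonov
# Minimize ||Kx - y||^2 + lam ||Lx||^2  via the normal equations:
#   (K^T K + lam L^T L) x = K^T y
x_hat = np.linalg.solve(K.T @ K + lam * L.T @ L, K.T @ y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative retrieval error: {err:.4f}")
```

The regularization term stabilizes the inversion when the measurements constrain the state only weakly, which is the typical situation in limb-sounding tomography.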

  12. Texture Retrieval from VHR Optical Remote Sensed Images Using the Local Extrema Descriptor with Application to Vineyard Parcel Detection

    Directory of Open Access Journals (Sweden)

    Minh-Tan Pham

    2016-04-01

    Full Text Available In this article, we develop a novel method for the detection of vineyard parcels in agricultural landscapes based on very high resolution (VHR) optical remote sensing images. Our objective is to perform texture-based image retrieval and supervised classification. To do so, the local textural and structural features inside each image are taken into account to measure its similarity to other images. In fact, VHR images usually involve a variety of local textures and structures that may satisfy a weak stationarity hypothesis. Hence, an approach based only on characteristic points, not on all pixels of the image, is expected to be relevant. This work proposes to construct the local extrema-based descriptor (LED) by using the local maximum and local minimum pixels extracted from the image. The LED descriptor is formed based on the radiometric, geometric and gradient features of these local extrema. We first exploit the proposed LED descriptor for the retrieval task to evaluate its performance on texture discrimination. Then, it is embedded into a supervised classification framework to detect vine parcels using VHR satellite images. Experiments performed on VHR panchromatic PLEIADES image data prove the effectiveness of the proposed strategy. Compared to state-of-the-art methods, an enhancement of about 7% in retrieval rate is achieved. For the detection task, about 90% of vineyards are correctly detected.

  13. PROTOTYPE CONTENT-BASED IMAGE RETRIEVAL FOR SKIN DISEASE DETECTION USING THE EDGE DETECTION METHOD

    Directory of Open Access Journals (Sweden)

    Erick Fernando

    2016-05-01

    Full Text Available Dermatologists examine the affected skin visually, capture the object with a digital camera and ask about the history of the patient's disease, without comparing against previously recorded symptoms and signs, so the examination and the estimate of the skin disease type remain subjective. Processing of image data in digital form, particularly medical images with pre-processing, has become a real need. Many patients served in hospitals still rely on analogue image data, which requires a dedicated storage room to avoid mechanical damage. To address this problem, medical images are produced in digital form, stored in a database system, and used to find similar previously stored skin images. Images can be displayed after pre-processing, with similarity identified through Content-Based Image Retrieval (CBIR), which works by measuring the similarity of the query image to all images in the database, so the query cost is directly proportional to the number of images in the database.

  14. Hybridization of phase retrieval and off-axis digital holography for high resolution imaging of complex shape objects

    Science.gov (United States)

    Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie

    2017-05-01

    In this paper, a hybrid method of phase retrieval and off-axis digital holography is proposed for imaging of complex-shaped objects. An off-axis digital hologram and an in-line hologram are recorded. The approximate phase distributions in the recording plane and object plane are obtained by a constrained optimization approach from the off-axis hologram, and they are used as the initial value and the constraints in the phase retrieval for eliminating the twin image of in-line holography. Numerical simulations and optical experiments were carried out to validate the proposed method.

  15. Use of web-based simulators and YouTube for teaching of Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Hanson, Lars G.

    Interactive web-based software for teaching of 3D vector dynamics involved in Magnetic Resonance Imaging (MRI) was developed. The software is briefly discussed along with the background, design, implementation, dissemination and educational value.

  16. Use of web-based simulators and YouTube for teaching of Magnetic Resonance Imaging

    OpenAIRE

    Hanson, Lars G.

    2012-01-01

    Interactive web-based software for teaching of 3D vector dynamics involved in Magnetic Resonance Imaging (MRI) was developed. The software is briefly discussed along with the background, design, implementation, dissemination and educational value.

  17. GPM GROUND VALIDATION ENVIRONMENT CANADA (EC) WEB CAMERA IMAGES GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Environment Canada (EC) Web Camera Images GCPEx were taken at 5 site locations in Ontario, Canada during the GPM Cold-season Precipitation...

  18. Application of Google Maps API service for creating web map of information retrieved from CORINE land cover databases

    Directory of Open Access Journals (Sweden)

    Kilibarda Milan

    2010-01-01

    Full Text Available Today, the Google Maps API, an Ajax-based standard web service, enables users to publish interactive web maps, thus opening new possibilities in relation to classical analogue maps. CORINE land cover databases are recognized as fundamental reference data sets for numerous spatial analyses. The theoretical and applicable aspects of the Google Maps API cartographic service are considered for the case of creating a web map of changes in urban areas in Belgrade and its surroundings from 2000 to 2006, obtained from the CORINE databases.

  19. Use of web-based simulators and YouTube for teaching of Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Hanson, Lars G.

    Interactive web-based software for teaching of 3D vector dynamics involved in Magnetic Resonance Imaging (MRI) was developed. The software is briefly discussed along with the background, design, implementation, dissemination and educational value.

  20. A World Wide Web Region-Based Image Search Engine

    OpenAIRE

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in the WWW. Information Web Crawlers continuously traverse the Internet and collect images that are subsequently indexed based on integrated feature vectors. As a basis for the indexing, the K-Means algorithm is used, modified so as to take into account the coherence of the regions. Based on the ext...

  1. Content-based image retrieval using scale invariant feature transform and gray level co-occurrence matrix

    Science.gov (United States)

    Srivastava, Prashant; Khare, Manish; Khare, Ashish

    2017-06-01

    The rapid growth of different types of images has posed a great challenge to the scientific fraternity. As images accumulate every day, it is becoming a challenging task to organize them for efficient and easy access. The field of image retrieval attempts to solve this problem through various techniques. This paper proposes a novel technique of image retrieval by combining the Scale Invariant Feature Transform (SIFT) and the co-occurrence matrix. For construction of the feature vector, SIFT descriptors of grayscale images are computed and normalized using z-score normalization, followed by construction of the Gray-Level Co-occurrence Matrix (GLCM) of the normalized SIFT keypoints. The constructed feature vector is matched with those of the images in the database to retrieve visually similar images. The proposed method is tested on the Corel-1K dataset and the performance is measured in terms of precision and recall. The experimental results demonstrate that the proposed method outperforms some of the other state-of-the-art methods.
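For reference, a minimal textbook gray-level co-occurrence matrix for a single pixel offset (the paper builds its GLCM over normalized SIFT keypoints rather than raw pixels, which is not reproduced here):

```python
import numpy as np

def glcm(img, dy, dx, levels):
    """Count co-occurrences of gray levels at offset (dy, dx)."""
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    return m

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
m = glcm(img, dy=0, dx=1, levels=3)   # horizontal neighbour pairs
print(m.tolist())  # → [[1, 2, 0], [0, 1, 0], [0, 0, 2]]
```

Texture statistics (contrast, homogeneity, energy, etc.) are then computed from the normalized matrix and used as retrieval features.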

  2. Content-Based High-Resolution Remote Sensing Image Retrieval via Unsupervised Feature Learning and Collaborative Affinity Metric Fusion

    Directory of Open Access Journals (Sweden)

    Yansheng Li

    2016-08-01

    Full Text Available With the urgent demand for automatic management of large numbers of high-resolution remote sensing images, content-based high-resolution remote sensing image retrieval (CB-HRRS-IR) has attracted much research interest. Accordingly, this paper proposes a novel high-resolution remote sensing image retrieval approach via multiple feature representation and collaborative affinity metric fusion (IRMFRCAMF). In IRMFRCAMF, we design four unsupervised convolutional neural networks with different layers to generate four types of unsupervised features, from the fine level to the coarse level. In addition to these four types of unsupervised features, we also implement four traditional feature descriptors: local binary patterns (LBP), the gray-level co-occurrence matrix (GLCM), maximal response 8 (MR8), and the scale-invariant feature transform (SIFT). In order to fully incorporate the complementary information among the multiple features of one image and the mutual information across auxiliary images in the image dataset, this paper advocates collaborative affinity metric fusion to measure the similarity between images. The performance evaluation of high-resolution remote sensing image retrieval is implemented on two public datasets, the UC Merced (UCM) dataset and the Wuhan University (WH) dataset. Extensive experiments show that our proposed IRMFRCAMF significantly outperforms the state-of-the-art approaches.

  3. Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding.

    Science.gov (United States)

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2014-09-22

    We present a novel image hiding method based on a phase retrieval algorithm under the framework of nonlinear double random phase encoding in the fractional Fourier domain. Two phase-only masks (POMs) are efficiently determined by using the phase retrieval algorithm, in which two cascaded phase-truncated fractional Fourier transforms (FrFTs) are involved. No undesired information disclosure, post-processing of the POMs, or digital inverse computation is involved in our proposed method. In order to reduce key transmission, a modified image hiding method based on the modified phase retrieval algorithm and a logistic map is further proposed in this paper, in which the fractional orders and the parameters of the logistic map are regarded as encryption keys. Numerical results have demonstrated the feasibility and effectiveness of the proposed algorithms.
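For readers unfamiliar with iterative phase retrieval, the classic Gerchberg-Saxton iteration is sketched below: it recovers a phase consistent with known amplitudes in the object and Fourier planes by alternately enforcing each constraint. (The paper's method uses phase-truncated fractional Fourier transforms and differs in detail; this only illustrates the family of algorithms, on an invented 32×32 example.)

```python
import numpy as np

rng = np.random.default_rng(3)
size = (32, 32)
amp_obj = rng.random(size) + 0.1                  # known object-plane amplitude
true_phase = 2 * np.pi * rng.random(size)
# Known Fourier-plane amplitude, generated from a consistent field.
amp_fourier = np.abs(np.fft.fft2(amp_obj * np.exp(1j * true_phase)))

# Start from a random phase guess and iterate the two amplitude constraints.
field = amp_obj * np.exp(1j * 2 * np.pi * rng.random(size))
for _ in range(200):
    F = np.fft.fft2(field)
    F = amp_fourier * np.exp(1j * np.angle(F))      # enforce Fourier amplitude
    field = np.fft.ifft2(F)
    field = amp_obj * np.exp(1j * np.angle(field))  # enforce object amplitude

err = (np.linalg.norm(np.abs(np.fft.fft2(field)) - amp_fourier)
       / np.linalg.norm(amp_fourier))
print(f"relative Fourier-amplitude error: {err:.4f}")
```

The residual error shrinks with iterations; phase-truncated schemes like the paper's replace these constraints to obtain the phase-only masks analytically or with fewer iterations.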

  4. Ontology-oriented retrieval of putative microRNAs in Vitis vinifera via GrapeMiRNA: a web database of de novo predicted grape microRNAs

    Directory of Open Access Journals (Sweden)

    Fontana Paolo

    2009-06-01

    Full Text Available Abstract Background Two complete genome sequences are available for Vitis vinifera Pinot noir. Based on the sequence and gene predictions produced by the IASMA, we performed an in silico detection of putative microRNA genes and of their targets, and collected the most reliable microRNA predictions in a web database. The application is available at http://www.itb.cnr.it/ptp/grapemirna/. Description The program FindMiRNA was used to detect putative microRNA genes in the grape genome. A very high number of predictions was retrieved, calling for validation. Nine parameters were calculated and, based on the grape microRNAs dataset available at miRBase, thresholds were defined and applied to FindMiRNA predictions having targets in gene exons. In the resulting subset, predictions were ranked according to precursor positions and sequence similarity, and to target identity. To further validate FindMiRNA predictions, comparisons to the Arabidopsis genome, to the grape Genoscope genome, and to the grape EST collection were performed. Results were stored in a MySQL database and a web interface was prepared to query the database and retrieve predictions of interest. Conclusion The GrapeMiRNA database encompasses 5,778 microRNA predictions spanning the whole grape genome. Predictions are integrated with information that can be of use in selection procedures. Tools added in the web interface also allow to inspect predictions according to gene ontology classes and metabolic pathways of targets. The GrapeMiRNA database can be of help in selecting candidate microRNA genes to be validated.

  5. Encryption of QR code and grayscale image in interference-based scheme with high quality retrieval and silhouette problem removal

    Science.gov (United States)

    Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen

    2016-09-01

    In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk-free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture and the decryption has a noisy appearance. Nevertheless, the robustness of the QR code against noise enables the accurate acquisition of its content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal totally eliminates the silhouette problem existing in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.

  6. Land surface temperature retrieval from the Advanced Himawari Imager using a practical split-window algorithm

    Science.gov (United States)

    Li, H.; Liu, C.; Yongming, D.; Cao, B.; Qinhuo, L.

    2016-12-01

    Land surface temperature (LST) is a key parameter for hydrological, meteorological, climatological and environmental studies. During the past decades, many efforts have been devoted to establishing methodology for retrieving LST from remote sensing data, and significant progress has been achieved. Himawari-8, one of a new generation of Japanese geostationary meteorological satellites, carries state-of-the-art optical sensors with significantly higher radiometric, spectral, and spatial resolution than those previously available in geostationary orbit. The satellite has a new payload called the Advanced Himawari Imager (AHI), which has 16 observation bands; their spatial resolution is 0.5 or 1 km for visible and near-infrared bands and 2 km for infrared bands. In this study, a practical split-window (SW) algorithm was developed to retrieve LST from AHI data. The coefficients of the SW algorithm were determined for sub-ranges of atmospheric water vapor (WV) and view zenith angle, and the WV was obtained through a simple method based on the two split-window channels. In order to improve the accuracy of the land surface emissivity (LSE), the ASTER Global Emissivity Dataset (GED) v4 was used for estimating the LSE. Seven months of AHI LST products over China, from June 2015 to December 2015, were generated. The LST products were evaluated against observations collected from three ground sites in an arid area of northwest China during the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) experiment. The results show that the developed algorithm demonstrates good accuracy, with average biases of 0.86 K and -1.46 K and average root mean square errors (RMSE) of 2.82 K and 2.43 K for the three sites during daytime and nighttime, respectively.
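The generic two-channel split-window form underlying such algorithms can be sketched as below. The coefficients here are made-up placeholders purely for illustration; the paper fits real coefficients per water-vapour and view-zenith-angle sub-range, with additional emissivity-dependent terms.

```python
# Generic split-window form:  LST = c0 + c1*T11 + c2*(T11 - T12),
# where T11 and T12 are brightness temperatures (K) in the two
# thermal channels near 11 and 12 micrometres.
def split_window_lst(t11, t12, coeffs=(1.0, 1.01, 2.0)):
    """Illustrative split-window LST; coeffs are hypothetical placeholders."""
    c0, c1, c2 = coeffs
    return c0 + c1 * t11 + c2 * (t11 - t12)

# The brightness-temperature difference T11 - T12 grows with atmospheric
# water vapour, which is why it corrects the single-channel estimate.
print(split_window_lst(295.0, 293.5))
```

Binning the fitted coefficients by water vapour and view zenith angle, as the paper does, lets one linear form of this kind stay accurate across very different atmospheric path lengths.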

  7. Development of image diagnosis support system by Web3D. No.1

    Energy Technology Data Exchange (ETDEWEB)

    Kota, Akio [Tokai Univ., School of High-Technology for Human Welfare, Numazu, Shizuoka (Japan); Suto, Yasuzo; Tan, Gakuko; Otaki, Makoto; Horii, Minoru [Tokai Univ., Tokyo (Japan); Yamaguchi, Reo; Higuchi, Masasato [SACCO Co., Ltd., Tokyo (Japan)

    2003-07-01

    A virtual endoscopy image is a three-dimensional surface image, constructed from multi-slice CT/MRI images, of the inside of tubular organs such as the stomach and intestine. The image is very useful for diagnosis and surgical support. However, the information volume is very large, so manipulations such as dynamic viewing and navigation are relatively difficult in a network environment such as the Internet. We attempt to develop an effective virtual endoscopy system on the Web. (author)

  8. Development of image diagnosis support system by Web3D. No.2

    Energy Technology Data Exchange (ETDEWEB)

    Yamaguchi, Reo; Higuchi, Masato [SACCO Co., Ltd., Tokyo (Japan); Kota, Akio; Suto, Yasuzo; Tan, Gakuko; Otaki, Makoto; Horii, Minoru [Tokai Univ., Tokyo (Japan)

    2003-07-01

    A virtual endoscopy image is a three-dimensional surface image, constructed from multi-slice CT/MRI images, of the inside of tubular organs such as the stomach and intestine. The image is very useful for diagnosis and surgical support. However, the information volume is very large, so manipulations such as dynamic viewing and navigation are relatively difficult in a network environment such as the Internet. We attempt to develop an effective virtual endoscopy system on the Web. (author)

  9. Facilitating medical information search using Google Glass connected to a content-based medical image retrieval system.

    Science.gov (United States)

    Widmer, Antoine; Schaer, Roger; Markonis, Dimitrios; Muller, Henning

    2014-01-01

    Wearable computing devices are starting to change the way users interact with computers and the Internet. Among them, Google Glass includes a small screen located in front of the right eye, a camera filming in front of the user and a small computing unit. Google Glass has the advantage of providing online services while allowing the user to perform tasks with his/her hands. These augmented glasses enable many useful applications, also in the medical domain. For example, Google Glass can easily provide video conferencing between medical doctors to discuss a live case. Using these glasses can also facilitate medical information search by allowing access to a large amount of annotated medical cases during a consultation, in a non-disruptive fashion for medical staff. In this paper, we developed a Google Glass application able to take a photo and send it to a medical image retrieval system along with keywords in order to retrieve similar cases. As a preliminary assessment of the usability of the application, we tested it under three conditions (images of the skin; printed CT scans and MRI images; and CT and MRI images acquired directly from an LCD screen) to explore whether using Google Glass affects the accuracy of the results returned by the medical image retrieval system. The preliminary results show that despite minor problems due to the limited stability of the Google Glass, images can be sent to and processed by the medical image retrieval system and similar images are returned to the user, potentially helping in the decision-making process.

  10. Automated storage and retrieval of thin-section CT images to assist diagnosis: system description and preliminary assessment.

    Science.gov (United States)

    Aisen, Alex M; Broderick, Lynn S; Winer-Muram, Helen; Brodley, Carla E; Kak, Avinash C; Pavlopoulou, Christina; Dy, Jennifer; Shyu, Chi-Ren; Marchiori, Alan

    2003-07-01

    A software system and database for computer-aided diagnosis with thin-section computed tomographic (CT) images of the chest was designed and implemented. When presented with an unknown query image, the system uses pattern recognition to retrieve visually similar images with known diagnoses from the database. A preliminary validation trial was conducted with 11 volunteers who were asked to select the best diagnosis for a series of test images, with and without software assistance. The percentage of correct answers increased from 29% to 62% with computer assistance. This finding suggests that this system may be useful for computer-assisted diagnosis.
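A schematic version of the retrieve-similar-cases-with-known-diagnoses idea: each database image is represented by a feature vector with a diagnosis label, and a query is answered with the labels of its k nearest neighbours. (The actual system used domain-specific lung-texture features and pattern recognition; the features, labels and distances below are invented for illustration.)

```python
import numpy as np

def retrieve(db_feats, db_labels, query, k=3):
    """Return the diagnosis labels of the k nearest database images."""
    d = np.linalg.norm(db_feats - query, axis=1)   # Euclidean distances
    idx = np.argsort(d)[:k]
    return [db_labels[i] for i in idx]

# Two hypothetical diagnosis clusters in a toy 2-d feature space.
rng = np.random.default_rng(4)
centers = {"emphysema": np.array([0.0, 0.0]), "fibrosis": np.array([5.0, 5.0])}
feats, labels = [], []
for name, c in centers.items():
    feats.append(c + 0.3 * rng.normal(size=(20, 2)))
    labels += [name] * 20
db = np.vstack(feats)

# Query image whose features fall near the "fibrosis" cluster.
print(retrieve(db, labels, np.array([5.1, 4.9])))
# → ['fibrosis', 'fibrosis', 'fibrosis']
```

Presenting the retrieved cases and their known diagnoses, rather than a single automated answer, is what allowed the radiologists in the trial to improve their own accuracy.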

  11. Just for the Image? The Impact of Web 2.0 for Public Institutions

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2013-01-01

    Full Text Available Web 2.0 is of growing importance and is nowadays also a hot topic for public institutions. However, it is still an open question whether users appreciate and recognize the merit of Web 2.0 applications in the context of public institutions. The present paper describes first empirical findings on users' reactions to the linkage of a modern library 2.0 with Web 2.0 applications, namely a presence in social networks and the integration of blogs and wikis. The results showed that most users did not recognize the benefit of Web 2.0 in the context of the homepage of a library 2.0. However, even though they did not use the corresponding Web 2.0 links themselves, they thought that a connection to Web 2.0 is a necessity for the image of a modern library. These findings imply that the connection to Web 2.0 is important for the image of a modern public institution, but the surplus benefit has to be better communicated and made more visible on the conventional homepage in Web 1.0.

  12. BIRD: Bio-Image Referral Database. Design and implementation of a new web based and patient multimedia data focused system for effective medical diagnosis and therapy.

    Science.gov (United States)

    Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario

    2004-01-01

    This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients in an Intranet network and transformed via the eXtensible Stylesheet Language (XSL) to be visualized uniformly in commercial browsers. The core server software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and useful for crafting a dynamic Web environment.

  13. Enhanced x-ray imaging for a thin film cochlear implant with metal artefacts using phase retrieval tomography

    Energy Technology Data Exchange (ETDEWEB)

    Arhatari, B. D. [Department of Physics, La Trobe University, Victoria 3086 (Australia); ARC Centre of Excellence for Coherent X-ray Science, Melbourne (Australia); Harris, A. R.; Paolini, A. G. [School of Psychological Science, La Trobe University, Victoria 3086 (Australia); ARC Centre of Excellence for Electromaterials Science, Melbourne (Australia); Peele, A. G. [Department of Physics, La Trobe University, Victoria 3086 (Australia); ARC Centre of Excellence for Coherent X-ray Science, Melbourne (Australia); Australian Synchrotron, Victoria 3168 (Australia)

    2012-06-01

    Phase retrieval tomography has been successfully used to enhance imaging in systems that exhibit poor absorption contrast. However, when highly absorbing regions are present in a sample, so-called metal artefacts can appear in the tomographic reconstruction. We demonstrate that straightforward approaches for metal artefact reconstruction, developed in absorption contrast tomography, can be applied when using phase retrieval. Using a prototype thin film cochlear implant that has high and low absorption components made from iridium (or platinum) and plastic, respectively, we show that segmentation of the various components is possible and hence measurement of the electrode geometry and relative location to other regions of interest can be achieved.

  14. A Web simulation of medical image reconstruction and processing as an educational tool.

    Science.gov (United States)

    Papamichail, Dimitrios; Pantelis, Evaggelos; Papagiannis, Panagiotis; Karaiskos, Pantelis; Georgiou, Evangelos

    2015-02-01

    Web educational resources integrating interactive simulation tools provide students with an in-depth understanding of the medical imaging process. The aim of this work was the development of a purely Web-based, open access, interactive application, as an ancillary learning tool in graduate and postgraduate medical imaging education, including a systematic evaluation of learning effectiveness. The pedagogic content of the educational Web portal was designed to cover the basic concepts of medical image reconstruction and processing, through the use of active learning and motivation, including learning simulations that closely resemble actual tomographic imaging systems. The user can implement image reconstruction and processing algorithms under a single user interface and manipulate various factors to understand the impact on image appearance. A questionnaire for pre- and post-training self-assessment was developed and integrated in the online application. The developed Web-based educational application introduces the trainee to the basic concepts of imaging through textual and graphical information and proceeds with a learning-by-doing approach. Trainees are encouraged to participate in a pre- and post-training questionnaire to assess their knowledge gain. Initial feedback from a group of graduate medical students showed that the developed course was considered effective and well structured. An e-learning application on medical imaging integrating interactive simulation tools was developed and assessed in our institution.

  15. Imaging the 3D structure of secondary osteons in human cortical bone using phase-retrieval tomography

    Energy Technology Data Exchange (ETDEWEB)

    Arhatari, B D; Peele, A G [Department of Physics, La Trobe University, Victoria 3086 (Australia); Cooper, D M L [Department of Anatomy and Cell Biology, University of Saskatchewan, Saskatoon (Canada); Thomas, C D L; Clement, J G [Melbourne Dental School, University of Melbourne, Victoria 3010 (Australia)

    2011-08-21

    By applying a phase-retrieval step before carrying out standard filtered back-projection reconstructions in tomographic imaging, we were able to resolve structures with small differences in density within a densely absorbing sample. This phase-retrieval tomography is particularly suited for the three-dimensional segmentation of secondary osteons (roughly cylindrical structures) which are superimposed upon an existing cortical bone structure through the process of turnover known as remodelling. The resulting images make possible the analysis of the secondary osteon structure and the relationship between an osteon and the surrounding tissue. Our observations have revealed many different and complex 3D structures of osteons that could not be studied using previous methods. This work was carried out using a laboratory-based x-ray source, which makes obtaining these sorts of images readily accessible.

  16. Fusion of Deep Features and Weighted VLAD Vectors based on Multiple Features for Image Retrieval

    Directory of Open Access Journals (Sweden)

    Wang Yanhong.

    2017-01-01

    Full Text Available In the traditional vector of locally aggregated descriptors (VLAD) method, the final VLAD vector is formed by summing the residuals between each descriptor and its corresponding visual word. The norm of these residuals varies significantly, which causes "visual burstiness": each descriptor contributes unequally to the VLAD vector. To address this problem, we weight each residual so that the contributions of the descriptors become more even. Moreover, the traditional VLAD method uses only local gradient features and therefore has limited discriminative power, so we extract local color features and incorporate them into the VLAD representation. We then fuse deep features with the multiple VLAD vectors built from local gradient and color information. To reduce running time and improve retrieval accuracy, PCA and whitening are applied to the VLAD vectors. The proposed method is evaluated on three benchmark datasets, i.e., Holidays, Ukbench and Oxford5k. Experimental results show that it achieves good performance.
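The residual-weighting idea above can be sketched as follows. This is a minimal illustration, not the authors' exact scheme: the inverse-norm weight, the power-law normalisation step, and all variable names are assumptions.

```python
import numpy as np

def weighted_vlad(descriptors, codebook):
    """Aggregate local descriptors into a VLAD vector, weighting each
    residual by the inverse of its norm so that every descriptor
    contributes roughly equally (one possible weighting scheme)."""
    k, d = codebook.shape
    vlad = np.zeros((k, d))
    for x in descriptors:
        # assign the descriptor to its nearest visual word
        dists = np.linalg.norm(codebook - x, axis=1)
        i = int(np.argmin(dists))
        r = x - codebook[i]                 # residual
        n = np.linalg.norm(r)
        if n > 1e-12:
            vlad[i] += r / n                # inverse-norm weighted residual
    v = vlad.ravel()
    # power-law + L2 normalisation, standard VLAD post-processing
    v = np.sign(v) * np.sqrt(np.abs(v))
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

rng = np.random.default_rng(0)
descs = rng.normal(size=(50, 8))   # 50 local descriptors, 8-D
words = rng.normal(size=(4, 8))    # codebook of 4 visual words
vec = weighted_vlad(descs, words)
print(vec.shape)  # (32,)
```

Because every residual is reduced to unit norm before accumulation, no single bursty descriptor can dominate a visual-word cell.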

  17. Retrieval of Garstang's emission function from all-sky camera images

    Science.gov (United States)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František

    2015-10-01

    The emission function of ground-based light sources largely predetermines skyglow features, and most mathematical models used to predict night-sky brightness require this function as input. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using a theoretical approach published only recently in MNRAS, 439, 3405-3413. We carried out the experiments at two localities, in Slovakia and Mexico, using two professional digital single-lens-reflex cameras with lenses that limit the system's field of view to either 180° or 167°; the purpose of using two cameras was to identify variances between the two apertures. Images were taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance to horizontal irradiance, from which the fraction of light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can identify the upward emissions with adequate reliability as long as the clear-sky radiance distribution is dominated by the largest ground-based light source. Highly unstable turbidity conditions, however, can make the parameter F difficult or even impossible to retrieve, and measurements at low elevation angles should be avoided because of the potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.

  18. Retrieving forest background reflectance in a boreal region from Compact Airborne Spectrographic Imager (CASI) data

    Science.gov (United States)

    Pisek, J.; Chen, J. M.; Miller, J. R.

    2008-05-01

    Leaf area index (LAI) is one of the most important Earth-surface parameters in carbon balance and climate models. However, the vegetation background (soil/moss/grass/shrub in forests) is a recognized problem that limits the accuracy of satellite-estimated forest LAI. In this study, we verify the feasibility of using multi-angle remote sensing to determine the optical properties of the vegetation background. Studies of the bidirectional behavior of forest canopies have shown that the total reflectance of a forest canopy combines the illuminated and shaded components of the tree crowns as well as the background. Data from the Compact Airborne Spectrographic Imager (CASI), acquired at nadir and 40° forward over a boreal forest site near Sudbury, Ontario, Canada, were used to derive the reflectivity of the forest background from the probabilities of viewing the illuminated tree crown and background at those view angles; the probabilities were estimated using the Four-Scale model. The derived values were then compared with in-situ background reflectance measured at the test sites. Modifying the background with white and black plastic sheets, together with the unmodified understory in the immediate neighborhood of the sites, provided two extreme limits and intermediate cases for developing and testing the retrieval. The results show that the methodology is capable of obtaining background reflectance for various forest stand conditions. This field verification of the concept is an important step towards operational estimation of forest background reflectance from the bidirectional reflectances observed by the Multi-angle Imaging SpectroRadiometer (MISR) instrument. Preliminary Canada-wide maps of background reflectance in the red and NIR bands have been produced using MISR data.

  19. Nuclear expert web mining system: monitoring and analysis of nuclear acceptance by information retrieval and opinion extraction on the Internet

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Imakuma, Kengo, E-mail: thiagoreis@usp.b, E-mail: barroso@ipen.b, E-mail: kimakuma@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    This paper presents a research initiative that aims to collect nuclear-related information and to analyze opinionated texts by mining hypertextual data and social network web sites on the Internet. Unlike previous approaches that employed traditional statistical techniques, a novel Web Mining approach, built on the concept of Expert Systems, is proposed for massive and autonomous data collection and analysis. The initial step has been accomplished, resulting in a framework design that can gradually encompass a set of evolving techniques, methods, and theories, so that this work will provide a platform upon which new research can be performed more easily by substituting modules or plugging in new ones. Upon completion, this research is expected to contribute to the understanding of the population's views on nuclear technology and its acceptance. (author)

  20. Joint aerosol and water-leaving radiance retrieval from Airborne Multi-angle SpectroPolarimeter Imager

    Science.gov (United States)

    Xu, F.; Dubovik, O.; Zhai, P.; Kalashnikova, O. V.; Diner, D. J.

    2015-12-01

    The Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) [1] has been flying aboard the NASA ER-2 high-altitude aircraft since October 2010. In step-and-stare operation mode, AirMSPI typically acquires observations of a target area at 9 view angles between ±67° off nadir. Its spectral channels are centered at 355, 380, 445, 470*, 555, 660*, and 865* nm, where the asterisk denotes the polarimetric bands. To retrieve information from the AirMSPI observations, we developed an efficient and flexible retrieval code that jointly retrieves aerosol properties and water-leaving radiance. The forward model employs a coupled Markov Chain (MC) [2] and adding/doubling [3] radiative transfer method that is fully linearized and integrated with a multi-patch retrieval algorithm to obtain aerosol and water-leaving radiance/Chl-a information. Various constraints are imposed to improve convergence and retrieval stability. We tested the aerosol and water-leaving radiance retrievals on AirMSPI radiance and polarization measurements by comparing the retrieved aerosol concentration, size distribution, water-leaving radiance, and chlorophyll concentration to the values reported by the USC SeaPRISM AERONET-OC site off the coast of Southern California. In addition, the MC-based retrievals of aerosol properties were compared with GRASP [4-5] retrievals for selected cases. The MC-based retrieval approach was then used to systematically explore the benefits of AirMSPI's ultraviolet and polarimetric channels, the use of multiple view angles, and the constraints provided by bio-optical models of the water-leaving radiance. References: [1] D. J. Diner, et al. Atmos. Meas. Tech. 6, 1717 (2013). [2] F. Xu et al. Opt. Lett. 36, 2083 (2011). [3] J. E. Hansen and L. D. Travis. Space Sci. Rev. 16, 527 (1974). [4] O. Dubovik et al. Atmos. Meas. Tech., 4, 975 (2011). [5] O. Dubovik et al. SPIE Newsroom, DOI:10.1117/2.1201408.005558 (2014).

  1. Joint Segmentation and Recognition of Categorized Objects from Noisy Web Image Collection.

    Science.gov (United States)

    Wang, Le; Hua, Gang; Xue, Jianru; Gao, Zhanning; Zheng, Nanning

    2014-07-14

    The segmentation of categorized objects addresses the problem of jointly segmenting a single category of object across a collection of images, where "categorized objects" refers to objects of the same category. Most existing methods assume that every image in the given collection contains the target object, i.e., that the collection is noise free. They may therefore not work well when some noisy images do not belong to the category, as in image collections gathered by a text query from modern image search engines. To overcome this limitation, we propose a method for automatic segmentation and recognition of categorized objects from noisy Web image collections. This is achieved by co-training an automatic object segmentation algorithm that operates directly on a collection of images and an object category recognition algorithm that identifies which images contain the target object. The object segmentation algorithm is trained on the subset of images from the given collection that are recognized to contain the target object with high confidence, while the training of the object category recognition model is guided by the intermediate segmentation results obtained from the object segmentation algorithm. In this way, our co-training algorithm automatically identifies the set of true positives in the noisy Web image collection and simultaneously extracts the target objects from all the identified images. Extensive experiments validated the efficacy of the proposed approach on four datasets: 1) the Weizmann horse dataset, 2) the MSRC object category dataset, 3) the iCoseg dataset, and 4) a new 30-category dataset comprising 15,634 Web images with both hand-annotated category labels and ground-truth segmentation labels. Our method compares favorably with the state of the art and can deal with noisy image collections.

  2. A Multi-Channel Method for Retrieving Surface Temperature for High-Emissivity Surfaces from Hyperspectral Thermal Infrared Images.

    Science.gov (United States)

    Zhong, Xinke; Labed, Jelila; Zhou, Guoqing; Shao, Kun; Li, Zhao-Liang

    2015-06-08

    The surface temperature (ST) of high-emissivity surfaces is an important parameter in climate systems. The empirical methods for retrieving ST for high-emissivity surfaces from hyperspectral thermal infrared (HypTIR) images require spectrally continuous channel data. This paper aims to develop a multi-channel method for retrieving ST for high-emissivity surfaces from space-borne HypTIR data. With an assumption of land surface emissivity (LSE) of 1, ST is proposed as a function of 10 brightness temperatures measured at the top of atmosphere by a radiometer having a spectral interval of 800-1200 cm⁻¹ and a spectral sampling frequency of 0.25 cm⁻¹. We have analyzed the sensitivity of the proposed method to spectral sampling frequency and instrumental noise, and evaluated the proposed method using satellite data. The results indicated that the parameters in the developed function are dependent on the spectral sampling frequency and that ST of high-emissivity surfaces can be accurately retrieved by the proposed method if appropriate values are used for each spectral sampling frequency. The results also showed that the accuracy of the retrieved ST is of the order of magnitude of the instrumental noise and that the root mean square error (RMSE) of the ST retrieved from satellite data is 0.43 K in comparison with the AVHRR SST product.
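The multi-channel idea, expressing ST as a function of several brightness temperatures, can be sketched as an ordinary least-squares fit. The linear form, the synthetic training data, and all names below are illustrative assumptions rather than the paper's actual function or coefficients.

```python
import numpy as np

# Sketch: fit ST as a linear combination of N channel brightness
# temperatures (plus intercept) using least squares on synthetic data.
rng = np.random.default_rng(1)
n_samples, n_channels = 200, 10
true_w = rng.normal(size=n_channels)                  # hypothetical weights
bt = 250 + 30 * rng.random((n_samples, n_channels))   # brightness temps (K)
st = bt @ true_w + 0.1 * rng.normal(size=n_samples)   # surface temps + noise

A = np.column_stack([bt, np.ones(n_samples)])         # design matrix
coef, *_ = np.linalg.lstsq(A, st, rcond=None)         # fitted parameters

pred = A @ coef
rmse = float(np.sqrt(np.mean((pred - st) ** 2)))      # fit error, ~noise level
print(round(rmse, 3))
```

In the paper's setting the fitted parameters would be re-derived for each spectral sampling frequency, since the study found them to be sampling dependent.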

  3. Prototype of Partial Cutting Tool of Geological Map Images Distributed by Geological Web Map Service

    Science.gov (United States)

    Nonogaki, S.; Nemoto, T.

    2014-12-01

    Geological maps and topographical maps play an important role in disaster assessment, resource management, and environmental preservation. Such map information has recently been distributed in accordance with Web service standards such as Web Map Service (WMS) and Web Map Tile Service (WMTS). In this study, a partial cutting tool for geological map images distributed by a geological WMTS was implemented with Free and Open Source Software. The tool consists of two main functions: a display function, implemented with OpenLayers, and a cutting function, implemented with the Geospatial Data Abstraction Library (GDAL); the remaining small functions were implemented in PHP and Python. As a result, the tool not only displays a WMTS layer in a web browser but also generates a geological map image of the intended area and zoom level. At this moment, the available WMTS layers are limited to the ones distributed by the WMTS for the Seamless Digital Geological Map of Japan. The geological map image can be saved in GeoTIFF and WebGL formats: GeoTIFF is a georeferenced raster format readable by many kinds of Geographical Information System, while WebGL is useful for examining the relationship between geology and geography in 3D. In conclusion, the partial cutting tool developed in this study should help promote the utilization of geological information. Future work is to increase the number of available WMTS layers and the types of output file format.

  4. Foreign Body Retrieval

    Medline Plus


  5. Gradient descent algorithm applied to wavefront retrieval from through-focus images by an extreme ultraviolet microscope with partially coherent source.

    Science.gov (United States)

    Yamazoe, Kenji; Mochi, Iacopo; Goldberg, Kenneth A

    2014-12-01

    The wavefront retrieval by gradient descent that is typically applied to coherent or incoherent imaging is extended to retrieve a wavefront from a series of through-focus images under partially coherent illumination. For accurate retrieval, we incorporated partial coherence as well as the object transmittance into the gradient descent algorithm. This modeling, however, increases the computation time because the partially coherent imaging simulation must be evaluated repeatedly in the optimization loop. To accelerate the computation, we incorporate not only the Fourier transform but also an eigenfunction decomposition of the image. As a demonstration, the extended algorithm is applied to retrieve a field-dependent wavefront of a microscope operated at extreme ultraviolet wavelength (13.4 nm). The retrieved wavefront qualitatively matches the expected characteristics of the lens design.
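The optimization loop described above can be illustrated with a toy coherent model: a small 1-D pupil phase is adjusted by gradient descent so that simulated through-focus intensities approach the "measured" ones. This sketches only the loop structure; the paper's partially coherent forward model, object transmittance, and eigenfunction decomposition are not reproduced, and every name here is an assumption.

```python
import numpy as np

N = 8
x = np.arange(N)
true_phase = 0.5 * np.sin(2 * np.pi * x / N)   # unknown pupil phase

def intensities(phase, defoci):
    """Image-plane intensities for each defocus (quadratic phase) setting."""
    out = []
    for d in defoci:
        pupil = np.exp(1j * (phase + d * (x - N / 2) ** 2 / N))
        out.append(np.abs(np.fft.fft(pupil)) ** 2)
    return np.array(out)

DEFOCI = [0.0, 0.5, 1.0]
measured = intensities(true_phase, DEFOCI)      # through-focus "measurements"

def loss(phase):
    return float(np.sum((intensities(phase, DEFOCI) - measured) ** 2))

phase = np.zeros(N)
loss0 = loss(phase)
eps = 1e-6
for _ in range(100):
    cur = loss(phase)
    grad = np.zeros(N)
    for i in range(N):                          # finite-difference gradient
        p = phase.copy()
        p[i] += eps
        grad[i] = (loss(p) - cur) / eps
    lr = 1e-3
    while lr > 1e-12 and loss(phase - lr * grad) >= cur:
        lr *= 0.5                               # backtrack until misfit drops
    phase = phase - lr * grad
print(loss(phase) < loss0)
```

A production implementation would use analytic gradients (as in the paper) rather than finite differences, which is exactly why the repeated imaging simulation dominates the cost.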

  6. Natural Language Object Retrieval

    OpenAIRE

    Hu, Ronghang; Xu, Huazhe; Rohrbach, Marcus; Feng, Jiashi; Saenko, Kate; Darrell, Trevor

    2015-01-01

    In this paper, we address the task of natural language object retrieval: localizing a target object within a given image based on a natural language query describing the object. Natural language object retrieval differs from the text-based image retrieval task in that it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as a scoring function on candidate boxes for object retrieval, integ...

  7. AN ENSEMBLE TEMPLATE MATCHING AND CONTENT-BASED IMAGE RETRIEVAL SCHEME TOWARDS EARLY STAGE DETECTION OF MELANOMA

    Directory of Open Access Journals (Sweden)

    Spiros Kostopoulos

    2016-12-01

    Full Text Available Malignant melanoma is the most dangerous type of skin cancer. In this study we present an ensemble classification scheme, employing mutual information, cross-correlation, and clustering based on proximity of image features, for early-stage assessment of melanomas in plain photography images. The proposed scheme performs two main operations. First, it retrieves the image samples most similar to the unknown case from an available database of verified benign moles and malignant melanoma cases. Second, it provides an automated estimate of the nature of the unknown sample based on the majority of the most similar images retrieved. The clinical material comprised 75 melanoma and 75 benign plain photography images collected from publicly available dermatological atlases. The ensemble scheme outperformed all other methods tested in terms of accuracy, reaching 94.9±1.5% under an external cross-validation evaluation methodology. The proposed scheme may benefit patients, by providing a second-opinion consultation during self-skin examination, and physicians, by providing a second-opinion estimate of the nature of suspicious moles that may assist decision making in ambiguous cases, safeguarding in this way against potential diagnostic misinterpretations.
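The retrieve-then-vote operation can be sketched as a nearest-neighbour majority vote on image feature vectors. The Euclidean similarity, the synthetic features, and the function names are assumptions: the study combines mutual information, cross-correlation and feature clustering rather than a plain k-NN.

```python
import numpy as np

def knn_majority(query, feats, labels, k=5):
    """Retrieve the k database images most similar to the query
    (Euclidean distance on feature vectors) and label the query
    by majority vote among them."""
    d = np.linalg.norm(feats - query, axis=1)
    nearest = np.argsort(d)[:k]
    votes = labels[nearest]
    return int(np.bincount(votes).argmax())

# synthetic database mirroring the study's 75 benign / 75 melanoma split
rng = np.random.default_rng(4)
benign = rng.normal(0.0, 1.0, size=(75, 16))
melanoma = rng.normal(3.0, 1.0, size=(75, 16))
feats = np.vstack([benign, melanoma])
labels = np.array([0] * 75 + [1] * 75)   # 0 = benign, 1 = melanoma

q = rng.normal(3.0, 1.0, size=16)        # unknown case, melanoma-like
print(knn_majority(q, feats, labels))    # 1
```

The vote over the retrieved set is what turns a similarity search into the "second opinion" estimate described above.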

  8. Prototype Web-based continuing medical education using FlashPix images.

    Science.gov (United States)

    Landman, A; Yagi, Y; Gilbertson, J; Dawson, R; Marchevsky, A; Becich, M J

    2000-01-01

    Continuing Medical Education (CME) is a requirement among practicing physicians to promote continuous enhancement of clinical knowledge to reflect new developments in medical care. Previous research has harnessed the Web to disseminate complete pathology CME case studies including history, images, diagnoses, and discussions to the medical community. Users submit real-time diagnoses and receive instantaneous feedback, eliminating the need for hard copies of case material and case evaluation forms. This project extends the Web-based CME paradigm with the incorporation of multi-resolution FlashPix images and an intuitive, interactive user interface. The FlashPix file format combines a high-resolution version of an image with a hierarchy of several lower resolution copies, providing real-time magnification via a single image file. The Web interface was designed specifically to simulate microscopic analysis, using the latest Javascript, Java and Common Gateway Interface tools. As the project progresses to the evaluation stage, it is hoped that this active learning format will provide a practical and efficacious environment for continuing medical education with additional application potential in classroom demonstrations, proficiency testing, and telepathology. Using Microsoft Internet Explorer 4.0 and above, the working prototype Web-based CME environment is accessible at http://telepathology.upmc.edu/WebInterface/NewInterface/welcome.html.

  9. Retrieval of the ocean wave spectrum in open and thin ice covered ocean waters from ERS Synthetic Aperture Radar images

    Energy Technology Data Exchange (ETDEWEB)

    De Carolis, G. [Consiglio Nazionale delle Ricerche, Istituto di Tecnologia Informatica Spaziale, Centro di Geodesia Spaziale G. Colombo, Terlecchia, MT (Italy)

    2001-02-01

    This paper concerns the task of retrieving ocean wave spectra from imagery provided by space-borne SAR systems such as that on board the ERS satellite. SAR imagery of surface wave fields travelling in the open ocean and in thin sea-ice covers composed of frazil and pancake ice fields is considered. The major purpose is to gain insight into how the spectral changes can be related to sea-ice properties of geophysical interest, such as thickness. Starting from SAR image cross-spectra computed from Single Look Complex (SLC) SAR images, the ocean wave spectrum is retrieved using an inversion procedure based on the gradient descent algorithm, and the capability of this method when applied to satellite SAR sensors is investigated. Interest in exploiting the SAR image cross-spectrum is twofold: first, the directional properties of the ocean wave spectra are retained; second, the external wave information needed to initialize the inversion procedure can be greatly reduced by using only information contained in the cross-spectrum itself. The main drawback is that the wind-wave spectrum may be partly lost and its spectral peak wave number underestimated. An ERS SAR SLC image acquired on April 10, 1993 over the Greenland Sea was selected as the test image, and a pair of windows including open sea only and sea-ice cover, respectively, were selected. The inversions were carried out using different guess wave spectra taken from SAR image cross-spectra. Moreover, care was taken to properly handle negative values occurring during the inversion runs; this requires a modification of the gradient descent technique if a non-negative solution for the wave spectrum is sought. Results are discussed in view of the possibility of using SAR data to detect ocean wave dispersion as a means for the retrieval of ice thickness.

  10. A statistical retrieval of cloud parameters for the millimeter wave Ice Cloud Imager on board MetOp-SG

    Science.gov (United States)

    Prigent, Catherine; Wang, Die; Aires, Filipe; Jimenez, Carlos

    2017-04-01

    The meteorological observations from satellites in the microwave domain are currently limited to below 190 GHz. However, the next generation of the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Polar System, the Second Generation (EPS-SG), will carry an instrument, the Ice Cloud Imager (ICI), with frequencies up to 664 GHz, to improve the characterization of the cloud frozen phase. In this paper, a statistical retrieval of cloud parameters for ICI is developed, trained on a synthetic database derived from the coupling of a mesoscale cloud model and radiative transfer calculations. The hydrometeor profiles simulated with the Weather Research and Forecasting model (WRF) for twelve diverse European mid-latitude situations are used to simulate the brightness temperatures with the Atmospheric Radiative Transfer Simulator (ARTS) to prepare the retrieval database. The WRF+ARTS simulations have been compared to the Special Sensor Microwave Imager/Sounder (SSMIS) observations up to 190 GHz: this successful evaluation gives us confidence in the simulations at the ICI channels from 183 to 664 GHz. Statistical analyses have been performed on this simulated retrieval database, showing that it is not only physically realistic but also statistically satisfactory for retrieval purposes. A first Neural Network (NN) classifier is used to detect the cloud presence. A second NN is developed to retrieve the liquid and ice integrated cloud quantities over sea and land separately. The detection and retrieval of the hydrometeor quantities (i.e., ice, snow, graupel, rain, and liquid cloud) are performed with ICI-only, and with ICI combined with observations from the MicroWave Imager (MWI, with frequencies from 19 to 190 GHz, also on board MetOp-SG). The ICI channels have been optimized for the detection and quantification of the cloud frozen phases: adding the MWI channels improves the performance of the vertically integrated hydrometeor contents, especially for

  11. The Landsat Image Mosaic of the Antarctica Web Portal

    Directory of Open Access Journals (Sweden)

    Christopher J Rusanowski

    2007-06-01

    Full Text Available People believe what they can see. The Poles exist as a frozen dream to most people. The International Polar Year wants to break the ice (so to speak), open up the Poles to the general public, support current polar research, and encourage new research projects. The IPY officially begins in March 2007. As part of this effort, the U.S. Geological Survey (USGS) and the British Antarctic Survey (BAS), with funding from the National Science Foundation (NSF), are developing three Landsat mosaics of Antarctica and an Antarctic Web Portal with a Community site and an online map viewer. When scientists are able to view the entire scope of polar research, they will be better able to collaborate and locate the resources they need. When the general public more readily sees what is happening in the polar environments, they will understand how changes to the polar areas affect everyone.

  12. A novel multi-temporal approach to wet snow retrieval with Sentinel-1 images (Conference Presentation)

    Science.gov (United States)

    Marin, Carlo; Callegari, Mattia; Notarnicola, Claudia

    2016-10-01

    by training the proposed method with examples extracted by [1] and refining this information by deriving additional training samples for the complex cases where the state-of-the-art algorithm fails. In addition, the multi-temporal information is fully exploited by modelling it as a series of statistical moments. Indeed, with a proper time sampling, statistical moments can describe the shape of the probability density function (pdf) of the backscattering time series [3-4]. Given the description of the shape of the multi-temporal VV and VH backscattering pdfs, it is not necessary to explicitly identify which time instants in the time series are to be assigned to the reference image, as done in the bi-temporal approach; this information is implicit in the shape of the pdf and is used in the training procedure for solving the wet-snow detection problem from the available training samples. The proposed approach is designed to work in an alpine environment and is validated against ground-truth measurements provided by automatic weather stations that record snow depth and snow temperature at 10 sites deployed in the South Tyrol region of northern Italy. References: [1] Nagler, T.; Rott, H., "Retrieval of wet snow by means of multitemporal SAR data," IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 2, pp. 754-765, Mar 2000. [2] Storvold, R., Malnes, E., and Lauknes, I., "Using ENVISAT ASAR wideswath data to retrieve snow covered area in mountainous regions," EARSeL eProceedings 4, 2/2006. [3] Inglada, J. and Mercier, G., "A New Statistical Similarity Measure for Change Detection in Multitemporal SAR Images and Its Extension to Multiscale Change Analysis," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 5, pp. 1432-1445, May 2007. [4] Bujor, F., Trouve, E., Valet, L., Nicolas, J. M., and Rudant, J. P., "Application of log-cumulants to the detection of spatiotemporal discontinuities in multitemporal SAR images," in IEEE Transactions on
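A minimal sketch of the moment features described above, assuming the first four standardised moments per pixel as the pdf-shape description (the actual moment set and preprocessing used by the authors may differ):

```python
import numpy as np

def moment_features(ts):
    """First four standardised moments (mean, std, skewness, kurtosis)
    of a backscatter time series: a compact description of the shape
    of its probability density function."""
    ts = np.asarray(ts, dtype=float)
    mu = ts.mean()
    sigma = ts.std()
    z = (ts - mu) / sigma
    return np.array([mu, sigma, (z ** 3).mean(), (z ** 4).mean()])

# per-pixel features from a small synthetic VV backscatter stack (dB)
rng = np.random.default_rng(2)
stack = rng.normal(-12.0, 2.0, size=(30, 4, 4))   # (time, rows, cols)
feats = np.apply_along_axis(moment_features, 0, stack)
print(feats.shape)  # (4, 4, 4): 4 moments per pixel
```

Because the moments summarise the whole time series, no single acquisition has to be singled out as the snow-free reference image, which is the point made in the abstract.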

  13. Differences of perceived image generated through the Web site: Empirical Evidence Obtained in Spanish Destinations

    Directory of Open Access Journals (Sweden)

    Juan Jose Blazquez-Resino

    2016-11-01

    Full Text Available In this paper, a study of the perceived destination image created by promotional Web pages is presented, in an attempt to identify their differences as generators of destination image in consumers' minds. Specifically, it analyses whether the web sites of different Spanish regions improve the image that consumers have of the destination, identifying its main dimensions and analysing their effect on satisfaction and on the future behavioural intentions of potential visitors. To achieve these objectives and verify the hypotheses, a laboratory experiment was performed that determined what changes are produced in tourists' prior image after browsing the tourist webs of three different regions, and analysed differences in the effect of the perceived image on satisfaction and on potential visitors' future behavioural intentions. The results identify differences in the composition of the perceived image according to the destination, while confirming the significant effect of the different perceived-image dimensions on satisfaction. They give managers a better understanding of the effectiveness of their sites from a consumer perspective, along with suggestions for achieving greater efficiency in their communication actions and thereby strengthening visitors' motivation to travel to the destination.

  14. RayPlus: a Web-Based Platform for Medical Image Processing.

    Science.gov (United States)

    Yuan, Rong; Luo, Ming; Sun, Zhi; Shi, Shuyue; Xiao, Peng; Xie, Qingguo

    2017-04-01

Medical images can provide valuable information for preclinical research, clinical diagnosis, and treatment. With the widespread use of digital medical imaging, many researchers are developing medical image processing algorithms and systems that deliver better results to the clinical community, such as accurate clinical parameters or processed versions of the original images. In this paper, we propose a web-based platform to present and process medical images. Using the Internet and novel database technologies, authorized users can easily access medical images and run their processing workflows on powerful server-side computing resources, without any local installation. We implement a series of image processing and visualization algorithms in the initial version of RayPlus. The integrated system allows great flexibility and convenience for both the research and clinical communities.

  15. Visualizing color term differences based on images from the web

    Directory of Open Access Journals (Sweden)

    Nobuyuki Umezu

    2017-01-01

Color terms are used to express light spectrum characteristics captured by human vision, and color naming across languages partitions color space differently. Such partition differences have been surveyed through several empirical experiments that employ Munsell color chips. We propose a novel visualization method for color terms based on thousands of images collected from query results provided by an image search engine such as Google. A series of experiments was conducted using eight basic color terms in seven languages. Pixel values in the images are counted to form color histograms according to the color palette used in the World Color Survey. The visualization results can be summarized as follows: (1) Japanese and Korean color terms have wider distributions in the color space than terms in other languages do, and (2) color visualizations for the color terms pink and brown are affected by their links to proper nouns.
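The histogram construction described in this abstract can be sketched as below. This is a hypothetical illustration: a coarse uniform RGB palette stands in for the World Color Survey chart, and the tiny synthetic "search result" images replace real downloads.

```python
import numpy as np

def quantize(rgb_image, levels=4):
    """Map each pixel to a coarse palette bin (levels**3 bins), a stand-in
    for the World Color Survey palette used in the paper."""
    img = np.asarray(rgb_image, dtype=np.uint16)
    step = 256 // levels
    bins = img // step                       # per-channel level 0..levels-1
    return bins[..., 0] * levels**2 + bins[..., 1] * levels + bins[..., 2]

def color_histogram(images, levels=4):
    """Accumulate a normalized bin-count histogram over many images."""
    hist = np.zeros(levels**3)
    for im in images:
        idx = quantize(im, levels).ravel().astype(np.intp)
        hist += np.bincount(idx, minlength=levels**3)
    return hist / hist.sum()

# Two tiny synthetic images standing in for search results for "red":
# both quantize into the same reddish palette bin.
imgs = [np.full((2, 2, 3), [230, 20, 20]), np.full((2, 2, 3), [200, 40, 30])]
h = color_histogram(imgs)
```

Comparing such per-term histograms across languages is what yields the distribution-width observations reported for Japanese and Korean.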

  16. Development and evaluation of an interactive Web-based breast imaging game for medical students.

    Science.gov (United States)

    Roubidoux, Marilyn A; Chapman, Chris M; Piontek, Mary E

    2002-10-01

    The purpose of this study was to develop and evaluate by student survey an interactive computer tool for teaching breast imaging to 4th-year medical students. An interactive computer game was designed for competitive play between two students or between one student and one of two cyber players. Content was determined from a survey of faculty members in breast imaging, and the survey results were grouped into 10 learning objectives. Pre-existing knowledge of these objectives in 16 4th-year medical students was tested by a quiz. On the basis of the learning objectives, case scenarios and questions were incorporated into the game, which was programmed in JavaScript and available on a Web site. Preliminary and final versions of the game were used for teaching 55 4th-year medical students. A subgroup of 42 students received an informational handout. Student surveys were performed. Mean quiz score for pre-existing knowledge of the learning objectives was 45% (range, 13%-67%). Survey results showed that images contributed to educational value (92%), the Web site was more interesting to students than the handout (93.6%), and the Web site provided additional reinforcement of learning beyond that of the handout or lecture (88.8%). Students liked the Web site accessibility (96%), and more than 70% agreed the Web site was also appropriate for other medical specialties. An Internet search identified no other Web-based computer games for medical students. Students surveyed found the Web site to be worthwhile, convenient, and applicable to other specialties.

  17. MIIP: a web-based platform for medical image interpretation training and evaluation focusing on ultrasound

    Science.gov (United States)

    Lindseth, Frank; Nordrik Hallan, Marte; Schiller Tønnessen, Martin; Smistad, Erik; Vâpenstad, Cecilie

    2017-03-01

    Introduction: Medical imaging technology has revolutionized health care over the past 30 years. This is especially true for ultrasound, a modality that an increasing amount of medical personal is starting to use. Purpose: The purpose of this study was to develop and evaluate a platform for improving medical image interpretation skills regardless of time and space and without the need for expensive imaging equipment or a patient to scan. Methods, results and conclusions: A stable web application with the needed functionality for image interpretation training and evaluation has been implemented. The system has been extensively tested internally and used during an international course in ultrasound-guided neurosurgery. The web application was well received and got very good System Usability Scale (SUS) scores.

  18. Information Retrieval Models

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Göker, Ayse; Davies, John

    2009-01-01

    Many applications that handle information on the internet would be completely inadequate without the support of information retrieval technology. How would we find information on the world wide web if there were no web search engines? How would we manage our email without spam filtering? Much of the

  19. Atmospheric Profile Retrieval Algorithm for Next Generation Geostationary Satellite of Korea and Its Application to the Advanced Himawari Imager

    Directory of Open Access Journals (Sweden)

    Su Jeong Lee

    2017-12-01

In preparation for the 2nd geostationary multi-purpose satellite of Korea with its 16-channel Advanced Meteorological Imager, an algorithm has been developed to retrieve clear-sky vertical profiles of temperature (T) and humidity (Q) based on a nonlinear optimal estimation method. The performance and characteristics of the algorithm have been evaluated using measured data from the Advanced Himawari Imager (AHI) on board Himawari-8 of Japan, launched in 2014. Constraints for the optimal estimation solution are provided by the forecast T and Q profiles from a global numerical weather prediction model and their error covariance. Although the information content for temperature is quite low due to the limited number of channels used in the retrieval, the study reveals that useful moisture information (2-3 degrees of freedom for signal) is provided by the three water vapor channels, contributing to an increase in moisture retrieval accuracy over the model forecast. The improvements are consistent throughout the tropospheric atmosphere, with almost zero mean bias and a root mean square error of 9% (relative humidity) between 100 and 1000 hPa when compared with quality-controlled radiosonde data from August 2016.
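A single Gauss-Newton step of a background-constrained optimal estimation retrieval of this kind can be illustrated on a toy linear problem. This is a hedged sketch of the general Rodgers-style update, not the AMI algorithm itself; the two-element state, Jacobian, and covariances below are all invented for demonstration.

```python
import numpy as np

def oe_step(x, xa, y, forward, K, Sa, Se):
    """One Gauss-Newton step of an optimal-estimation retrieval: the
    background xa with covariance Sa constrains the state, while the
    observations y with covariance Se pull it toward the data."""
    Sa_inv, Se_inv = np.linalg.inv(Sa), np.linalg.inv(Se)
    A = K.T @ Se_inv @ K + Sa_inv
    b = K.T @ Se_inv @ (y - forward(x) + K @ (x - xa))
    return xa + np.linalg.solve(A, b)

# Toy two-level "T/Q" state observed through a linear forward model
# (matrices are illustrative, not a real radiative transfer model).
K = np.array([[1.0, 0.2], [0.1, 1.0], [0.5, 0.5]])   # Jacobian
forward = lambda x: K @ x
xa = np.array([0.0, 0.0])                            # NWP background
x_true = np.array([1.0, -0.5])
y = forward(x_true)                                  # noise-free obs
Sa = np.eye(2) * 100.0                               # weak background constraint
Se = np.eye(3) * 1e-4                                # accurate observations
x_hat = oe_step(xa, xa, y, forward, K, Sa, Se)
```

With accurate observations and a weak background, the step recovers the true state almost exactly; in the real retrieval the balance of Sa and Se determines how much the solution departs from the forecast.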

  20. Time-series MODIS image-based retrieval and distribution analysis of total suspended matter concentrations in Lake Taihu (China).

    Science.gov (United States)

    Zhang, Yuchao; Lin, Shan; Liu, Jianping; Qian, Xin; Ge, Yi

    2010-09-01

Although there has been considerable effort to use remotely sensed images to provide synoptic maps of total suspended matter (TSM), there are limited studies on universal TSM retrieval models. In this paper, we have developed a TSM retrieval model for Lake Taihu using TSM concentrations measured in situ and a time series of quasi-synchronous MODIS 250 m images from 2005. After simple geometric and atmospheric correction, we found a significant relationship (R = 0.8736, N = 166) between in situ measured TSM concentrations and the MODIS normalized difference of band 3 and band 1. From this, we retrieved TSM concentrations in eight regions of Lake Taihu in 2007 and analyzed the characteristic distribution and variation of TSM. Synoptic maps of model-estimated TSM for 2007 showed clear geographical and seasonal variations. TSM in Central Lake and Southern Lakeshore was consistently higher than in other regions, while TSM in East Taihu was generally the lowest among the regions throughout the year. Furthermore, a wide range of TSM concentrations appeared from winter to summer: TSM in winter could be several times that in summer.
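The band-ratio retrieval described here can be sketched as follows. This is a hypothetical reconstruction: the abstract reports the correlation (R = 0.8736, N = 166) but not the exact regression form or coefficients, so a simple linear fit on invented calibration points stands in.

```python
import numpy as np

def band_normalized_difference(b3, b1):
    """Normalized difference of MODIS band 3 and band 1 reflectance."""
    b3, b1 = np.asarray(b3, float), np.asarray(b1, float)
    return (b3 - b1) / (b3 + b1)

# Illustrative calibration against in-situ TSM (mg/L); values are invented.
nd = np.array([-0.30, -0.10, 0.05, 0.20, 0.35])
tsm_in_situ = np.array([15.0, 32.0, 48.0, 61.0, 77.0])
slope, intercept = np.polyfit(nd, tsm_in_situ, 1)

def retrieve_tsm(b3, b1):
    """Apply the fitted band-ratio model to new reflectance pairs."""
    return slope * band_normalized_difference(b3, b1) + intercept
```

Applying `retrieve_tsm` pixel-by-pixel to corrected MODIS scenes is what produces the synoptic TSM maps discussed in the abstract.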

  1. Impacts of Cross-Platform Vicarious Calibration on the Deep Blue Aerosol Retrievals for Moderate Resolution Imaging Spectroradiometer Aboard Terra

    Science.gov (United States)

    Jeong, Myeong-Jae; Hsu, N. Christina; Kwiatkowska, Ewa J.; Franz, Bryan A.; Meister, Gerhard; Salustro, Clare E.

    2012-01-01

The retrieval of aerosol properties from spaceborne sensors requires highly accurate and precise radiometric measurements, thus placing stringent requirements on sensor calibration and characterization. For the Terra Moderate Resolution Imaging Spectroradiometer (MODIS), the characteristics of the detectors of certain bands, particularly band 8 [(B8); 412 nm], have changed significantly over time, leading to increased calibration uncertainty. In this paper, we explore the possibility of utilizing a cross-calibration method, developed for characterizing the Terra/MODIS detectors in the ocean bands by the National Aeronautics and Space Administration Ocean Biology Processing Group, to improve aerosol retrieval over bright land surfaces. We found that the Terra/MODIS B8 reflectance corrected using the cross-calibration method resulted in significant improvements in the retrieved aerosol optical thickness when compared with that from the Multi-angle Imaging Spectroradiometer, Aqua/MODIS, and the Aerosol Robotic Network. The method reported in this paper is implemented in the operational processing of the Terra/MODIS Deep Blue aerosol products.

  3. A midas plugin to enable construction of reproducible web-based image processing pipelines.

    Science.gov (United States)

    Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A; Oguz, Ipek

    2013-01-01

    Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based User Interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.

  4. Retrieval of cirrus optical thickness and assessment of ice crystal shape from ground-based imaging spectrometry

    Directory of Open Access Journals (Sweden)

    M. Schäfer

    2013-08-01

A ground-based hyperspectral imaging spectrometer (AisaEAGLE, manufactured by Specim Ltd., Finland) is applied to measure downward spectral radiance fields with high spatial (1024 spatial pixels within a 36.7° field of view), spectral (488 spectral pixels, 400–970 nm, 1.25 nm full width at half maximum), and temporal (4–30 Hz) resolution. The calibration, measurement and data evaluation procedures are introduced. A new method is presented to retrieve the cirrus optical thickness (τci) using the spectral radiance data collected by AisaEAGLE. The data were collected during the Cloud Aerosol Radiation and tuRbulence of trade wInd cumuli over BArbados (CARRIBA) project in 2011. The spatial inhomogeneity of the investigated cirrus is characterised by the standard deviation of the retrieved τci as well as the width of its frequency distribution. By comparing measured and simulated downward solar spectral radiance as a function of scattering angle, some evidence of the prevailing cirrus ice crystal shape can be obtained and subsequently used to substantiate the retrieval of τci. The sensitivity of the retrieval method with respect to surface albedo, effective radius (reff), cloud height and ice crystal shape is quantified. An enhanced sensitivity of the retrieved τci is found with respect to the surface albedo (up to 30%) and ice crystal shape (up to 90%). The sensitivity with regard to the effective ice crystal radius (≤ 5%) and the cloud height (≤ 0.5%) is rather small and can be neglected.
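Retrievals of this kind typically invert optical thickness by matching a measured radiance against a precomputed radiative-transfer lookup table. The sketch below assumes a monotonic, one-dimensional table; the grid and radiance values are invented, and the real retrieval also depends on surface albedo, crystal shape, and viewing geometry as the sensitivity analysis stresses.

```python
import numpy as np

def retrieve_tau(measured_radiance, tau_grid, simulated_radiance):
    """Invert cirrus optical thickness by interpolating a measured
    downward radiance within a precomputed lookup table. Assumes the
    simulated radiance varies monotonically with tau over the grid."""
    order = np.argsort(simulated_radiance)
    return np.interp(measured_radiance,
                     np.asarray(simulated_radiance, float)[order],
                     np.asarray(tau_grid, float)[order])

# Illustrative LUT in which radiance grows with optical thickness.
tau_grid = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
lut_radiance = np.array([0.02, 0.05, 0.09, 0.15, 0.22])
tau_ci = retrieve_tau(0.12, tau_grid, lut_radiance)   # falls between grid points
```

Repeating the inversion per spatial pixel yields the τci field whose standard deviation characterizes the cloud's spatial inhomogeneity.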

  5. THE IMAGE OF INVESTMENT AND FINANCIAL SERVICES COMPANIES IN WWW LANDSCAPE (WORLD WIDE WEB

    Directory of Open Access Journals (Sweden)

    Iancu Ioana Ancuta

    2011-07-01

In a world where the internet and its image are becoming more and more important, this study addresses the importance of the web sites of Investment and Financial Services Companies. Market competition creates the need for studies focused on assessing and analyzing the websites of companies active in this sector. Our study aims to answer several questions related to Romanian Investment and Financial Services Companies' web sites through four dimensions: content, layout, handling and interactivity. Which web sites are best, and from what point of view? Where should financial services companies direct their investments to differentiate themselves and their sites? In short, we want to rank the 58 Investment and Financial Services Companies' web sites based on 127 criteria. There are numerous methods for evaluating web pages; they are similar from the structural point of view, and the most popular are Serqual, Sitequal, Webqual/Equal, EtailQ, Ewam, e-Serqual and WebQEM (Badulescu, 2008:58). In the paper "Assessment of Romanian Banks E-Image: A Marketing Perspective" (Catana, Catana and Constantinescu, 2006:4), the authors point out that there are at least four complex variables: accessibility, functionality, performance and usability. Each of these can be decomposed into simple ones. We used the same method and examined, from the utility point of view, 58 web sites of Investment and Financial Services Companies against 127 criteria, following a procedure developed by Institut fur ProfNet Internet Marketing, Munster (Germany). The data collection period was 1-30 September 2010. The results show that there are very large differences between corporate sites; their creators concentrate on the information required by law and on aesthetics, neglecting other aspects such as communication and online service. In the future we want to extend this study at international level, by applying the same methods of research in 5 countries from

  6. ImageMiner: a software system for comparative analysis of tissue microarrays using content-based image retrieval, high-performance computing, and grid technology.

    Science.gov (United States)

    Foran, David J; Yang, Lin; Chen, Wenjin; Hu, Jun; Goodell, Lauri A; Reiss, Michael; Wang, Fusheng; Kurc, Tahsin; Pan, Tony; Sharma, Ashish; Saltz, Joel H

    2011-01-01

    The design and implementation of ImageMiner, a software platform for performing comparative analysis of expression patterns in imaged microscopy specimens such as tissue microarrays (TMAs), is described. ImageMiner is a federated system of services that provides a reliable set of analytical and data management capabilities for investigative research applications in pathology. It provides a library of image processing methods, including automated registration, segmentation, feature extraction, and classification, all of which have been tailored, in these studies, to support TMA analysis. The system is designed to leverage high-performance computing machines so that investigators can rapidly analyze large ensembles of imaged TMA specimens. To support deployment in collaborative, multi-institutional projects, ImageMiner features grid-enabled, service-based components so that multiple instances of ImageMiner can be accessed remotely and federated. The experimental evaluation shows that: (1) ImageMiner is able to support reliable detection and feature extraction of tumor regions within imaged tissues; (2) images and analysis results managed in ImageMiner can be searched for and retrieved on the basis of image-based features, classification information, and any correlated clinical data, including any metadata that have been generated to describe the specified tissue and TMA; and (3) the system is able to reduce computation time of analyses by exploiting computing clusters, which facilitates analysis of larger sets of tissue samples.

  7. Fast segmentation of satellite images using SLIC, WebGL and Google Earth Engine

    Science.gov (United States)

    Donchyts, Gennadii; Baart, Fedor; Gorelick, Noel; Eisemann, Elmar; van de Giesen, Nick

    2017-04-01

Google Earth Engine (GEE) is a parallel geospatial processing platform which harmonizes access to petabytes of freely available satellite images. It provides a very rich API, allowing development of dedicated algorithms to extract useful geospatial information from these images. At the same time, modern GPUs provide thousands of computing cores, which are mostly not utilized in this context. In recent years, WebGL has become a popular and well-supported API that allows fast image processing directly in web browsers. In this work, we will evaluate the applicability of WebGL to enable fast segmentation of satellite images. A new implementation of the Simple Linear Iterative Clustering (SLIC) algorithm using GPU shaders will be presented. SLIC is a simple and efficient method to decompose an image into visually homogeneous regions; it adapts a k-means clustering approach to generate superpixels efficiently. While this approach will be hard to scale, owing to the significant amount of data that must be transferred to the client, it should significantly improve exploratory possibilities and simplify the development of dedicated algorithms for geoscience applications. Our prototype implementation will be used to improve surface water detection of reservoirs using multispectral satellite imagery.
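The superpixel idea behind SLIC can be sketched with plain k-means in a joint (intensity, x, y) feature space. This is a simplified CPU stand-in for illustration, not the authors' shader implementation: real SLIC restricts each cluster's search to a local 2S x 2S window, which is precisely what makes it amenable to GPU fragment shaders.

```python
import numpy as np

def slic_superpixels(gray, n_segments=4, compactness=0.1, n_iter=5):
    """Simplified SLIC: k-means over joint (intensity, x, y) features.
    The compactness factor trades color similarity against spatial
    proximity, as in the original algorithm."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([gray.ravel(),
                      compactness * xx.ravel() / w,
                      compactness * yy.ravel() / h], axis=1)
    # Initialize cluster centers on a regular grid of flattened indices.
    idx = np.linspace(0, h * w - 1, n_segments).astype(int)
    centers = feats[idx]
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_segments):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(0)
    return labels.reshape(h, w)

# A synthetic two-band image should split cleanly along the bands.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labels = slic_superpixels(img, n_segments=2)
```

In the WebGL version, the distance computation and assignment steps map naturally onto per-pixel shader invocations, with the cluster update done in a reduction pass.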

  8. Smart Images in a Web 2.0 World: The Virtual Astronomy Multimedia Project (VAMP)

    Science.gov (United States)

    Hurt, R. L.; Christensen, L. L.; Gauthier, A.; Wyatt, R.

    2008-06-01

High quality astronomical images, accompanied by rich caption and background information, abound on the web and yet are notoriously difficult to locate efficiently using common search engines. "Flat" searches can return dozens of hits for a single popular image but miss equally important related images from other observatories. The Virtual Astronomy Multimedia Project (VAMP) is developing the architecture for an online index of astronomical imagery and video that will simplify access and provide a service around which innovative applications can be developed (e.g. digital planetariums). Current progress includes design prototyping around existing Astronomy Visualization Metadata (AVM) standards. Growing VAMP partnerships include a cross-section of observatories, data centers, and planetariums.

  9. An efficient image processing method based on web services for mobile devices

    Science.gov (United States)

    Senthilkumar, K.; Vivek, N. K.; Vijayan, E.

    2017-11-01

The traditional image processing system, based on a centralized computing model, suffers from limited resources. This can hinder the smooth execution of such systems on mobile devices. We propose a new system that, unlike the traditional one, uses a distributed model, achieved by adopting a web-service-based image processing architecture. Our system is component-oriented, with low coupling and high encapsulation. It can therefore solve the main problem of the traditional image processing system and avoid resource bottlenecks.

  10. Complex dark-field contrast and its retrieval in x-ray phase contrast imaging implemented with Talbot interferometry.

    Science.gov (United States)

    Yang, Yi; Tang, Xiangyang

    2014-10-01

    Under the existing theoretical framework of x-ray phase contrast imaging methods implemented with Talbot interferometry, the dark-field contrast refers to the reduction in interference fringe visibility due to small-angle x-ray scattering of the subpixel microstructures of an object to be imaged. This study investigates how an object's subpixel microstructures can also affect the phase of the intensity oscillations. Instead of assuming that the object's subpixel microstructures distribute in space randomly, the authors' theoretical derivation starts by assuming that an object's attenuation projection and phase shift vary at a characteristic size that is not smaller than the period of analyzer grating G₂ and a characteristic length dc. Based on the paraxial Fresnel-Kirchhoff theory, the analytic formulae to characterize the zeroth- and first-order Fourier coefficients of the x-ray irradiance recorded at each detector cell are derived. Then the concept of complex dark-field contrast is introduced to quantify the influence of the object's microstructures on both the interference fringe visibility and the phase of intensity oscillations. A method based on the phase-attenuation duality that holds for soft tissues and high x-ray energies is proposed to retrieve the imaginary part of the complex dark-field contrast for imaging. Through computer simulation study with a specially designed numerical phantom, they evaluate and validate the derived analytic formulae and the proposed retrieval method. Both theoretical analysis and computer simulation study show that the effect of an object's subpixel microstructures on x-ray phase contrast imaging method implemented with Talbot interferometry can be fully characterized by a complex dark-field contrast. The imaginary part of complex dark-field contrast quantifies the influence of the object's subpixel microstructures on the phase of intensity oscillations. Furthermore, at relatively high energies, for soft tissues it can be
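The zeroth- and first-order Fourier coefficients that the abstract refers to are conventionally extracted from the phase-stepping curve recorded at each detector cell. The sketch below shows that standard extraction (not the authors' specific retrieval of the complex dark-field contrast); the step count and signal values are illustrative.

```python
import numpy as np

def phase_stepping_coefficients(intensities):
    """Zeroth/first-order Fourier coefficients of a Talbot phase-stepping
    curve I_n = a0 + a1*cos(2*pi*n/N + phi). The fringe visibility a1/a0
    is reduced by small-angle scattering (the dark-field signal), while
    phi carries the differential phase contrast."""
    I = np.asarray(intensities, dtype=float)
    c = np.fft.rfft(I) / len(I)
    a0 = c[0].real          # mean intensity (attenuation channel)
    a1 = 2 * np.abs(c[1])   # fundamental amplitude
    phi = np.angle(c[1])    # phase of the intensity oscillation
    return a0, a1, phi

# Synthetic 8-step curve with known parameters.
N = 8
n = np.arange(N)
I = 10.0 + 3.0 * np.cos(2 * np.pi * n / N + 0.7)
a0, a1, phi = phase_stepping_coefficients(I)
visibility = a1 / a0
```

The paper's point is that subpixel microstructures perturb not only `visibility` but also `phi`, which motivates treating the dark-field contrast as a complex quantity.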

  11. Enhancing Web applications in radiology with Java: estimating MR imaging relaxation times.

    Science.gov (United States)

    Dagher, A P; Fitzpatrick, M; Flanders, A E; Eng, J

    1998-01-01

    Java is a relatively new programming language that has been used to develop a World Wide Web-based tool for estimating magnetic resonance (MR) imaging relaxation times, thereby demonstrating how Java may be used for Web-based radiology applications beyond improving the user interface of teaching files. A standard processing algorithm coded with Java is downloaded along with the hypertext markup language (HTML) document. The user (client) selects the desired pulse sequence and inputs data obtained from a region of interest on the MR images. The algorithm is used to modify selected MR imaging parameters in an equation that models the phenomenon being evaluated. MR imaging relaxation times are estimated, and confidence intervals and a P value expressing the accuracy of the final results are calculated. Design features such as simplicity, object-oriented programming, and security restrictions allow Java to expand the capabilities of HTML by offering a more versatile user interface that includes dynamic annotations and graphics. Java also allows the client to perform more sophisticated information processing and computation than is usually associated with Web applications. Java is likely to become a standard programming option, and the development of stand-alone Java applications may become more common as Java is integrated into future versions of computer operating systems.
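The kind of per-ROI computation the applet performs can be sketched as below. The original tool is Java; this Python sketch illustrates only the underlying estimation, here a log-linear fit of the mono-exponential spin-echo decay model (one of several pulse-sequence models the applet supports; the TE values and signals are invented).

```python
import numpy as np

def estimate_t2(te_ms, signal):
    """Estimate T2 and S0 from the spin-echo model S = S0 * exp(-TE/T2)
    via linear regression on log(S)."""
    te = np.asarray(te_ms, dtype=float)
    slope, log_s0 = np.polyfit(te, np.log(signal), 1)
    return -1.0 / slope, np.exp(log_s0)

# Synthetic ROI means at four echo times, generated from known parameters.
te = [20.0, 40.0, 60.0, 80.0]
s0_true, t2_true = 1000.0, 90.0
signal = s0_true * np.exp(-np.array(te) / t2_true)
t2, s0 = estimate_t2(te, signal)
```

In the applet, this fit runs client-side after the user enters ROI values, with confidence intervals computed from the regression residuals.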

  12. A New Content-Based Image Retrieval Using the Multidimensional Generalization of Wald-Wolfowitz Runs Test

    Science.gov (United States)

    Leauhatong, Thurdsak; Hamamoto, Kazuhiko; Atsuta, Kiyoaki; Kondo, Shozo

    This paper proposes two new similarity measures for the content-based image retrieval (CBIR) systems. The similarity measures are based on the k-means clustering algorithm and the multidimensional generalization of the Wald-Wolfowitz (MWW) runs test. The performance comparisons between the proposed similarity measures and a current CBIR similarity measure based on the MWW runs test were performed, and it can be seen that the proposed similarity measures outperform the current similarity measure with respect to the precision and the computational time.
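The multidimensional Wald-Wolfowitz (Friedman-Rafsky) statistic underlying these similarity measures can be sketched as follows: build the Euclidean minimum spanning tree over the pooled feature points of two images and count edges that join points from different samples. This is an illustrative reconstruction of the classical statistic, not the paper's proposed k-means-accelerated variants; the sample clouds are synthetic.

```python
import numpy as np

def mww_similarity(X, Y):
    """Friedman-Rafsky cross-edge count: more MST edges linking a point
    of X to a point of Y means the two feature clouds are better mixed,
    i.e. the underlying distributions (images) are more similar."""
    P = np.vstack([X, Y])
    label = np.array([0] * len(X) + [1] * len(Y))
    n = len(P)
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()                 # cheapest edge into the tree
    parent = np.zeros(n, dtype=int)
    cross = 0
    for _ in range(n - 1):             # Prim's algorithm, O(n^2)
        j = np.where(in_tree, np.inf, best).argmin()
        cross += int(label[j] != label[parent[j]])
        in_tree[j] = True
        upd = d[j] < best
        best[upd] = d[j][upd]
        parent[upd] = j
    return cross

# Well-separated clouds share a single bridging MST edge; nearly
# identical clouds interleave and share many.
rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.1, size=(10, 2))
B = rng.normal(5.0, 0.1, size=(10, 2))
```

The paper's contribution is to make this comparison cheap enough for retrieval by clustering the point sets first; the statistic itself is unchanged.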

  13. A Novel Relevance Feedback Approach Based on Similarity Measure Modification in an X-Ray Image Retrieval System Based on Fuzzy Representation Using Fuzzy Attributed Relational Graph

    Directory of Open Access Journals (Sweden)

    Hossien Pourghassem

    2011-04-01

Relevance feedback approaches are used to improve the performance of content-based image retrieval systems. In this paper, a novel relevance feedback approach based on similarity measure modification in an X-ray image retrieval system with fuzzy representation using a fuzzy attributed relational graph (FARG) is presented. In this approach, the optimum weight of each feature in the feature vector is calculated using the similarity rate between the query image and the relevant and irrelevant images in the user feedback. The calculated weight is used to tune the fuzzy graph matching algorithm as a modifier parameter in the similarity measure. The standard deviation of the retrieved image features is applied to calculate the optimum weight. The proposed image retrieval system uses a FARG for the representation of images, a fuzzy graph matching algorithm as the similarity measure, and a semantic classifier based on a merging scheme to determine the search space in the image database. To evaluate the relevance feedback approach in the proposed system, a standard X-ray image database consisting of 10000 images in 57 classes is used. The improvement of the evaluation parameters shows the proficiency and efficiency of the proposed system.
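The standard-deviation weighting mentioned in the abstract is a common relevance-feedback heuristic and can be sketched as below. This is a generic illustration, not the FARG matcher itself: the feature vectors are invented, and the weighted distance is a simple stand-in for the fuzzy graph matching similarity that the weights actually tune.

```python
import numpy as np

def feedback_weights(relevant_feats):
    """Standard-deviation weighting: a feature that varies little across
    the images the user marked relevant is considered discriminative for
    this query and receives a large weight."""
    sd = np.asarray(relevant_feats, dtype=float).std(axis=0)
    w = 1.0 / (sd + 1e-6)    # small epsilon guards zero-variance features
    return w / w.sum()

def weighted_similarity(q, x, w):
    """Similarity from a weighted Euclidean distance (illustrative)."""
    return 1.0 / (1.0 + np.sqrt((w * (q - x) ** 2).sum()))

# Features of three images the user marked relevant: features 0 and 2
# are stable (discriminative), feature 1 fluctuates.
relevant = np.array([[0.9, 0.2, 0.5],
                     [0.9, 0.8, 0.5],
                     [0.9, 0.4, 0.5]])
w = feedback_weights(relevant)
```

Each feedback round recomputes `w` and re-ranks the database with the reweighted measure, which is the loop the paper evaluates on the 57-class X-ray database.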

  14. Efficient 3D rendering for web-based medical imaging software: a proof of concept

    Science.gov (United States)

    Cantor-Rivera, Diego; Bartha, Robert; Peters, Terry

    2011-03-01

Medical Imaging Software (MIS) found in research and in clinical practice, such as in Picture Archiving and Communication Systems (PACS) and Radiology Information Systems (RIS), has not been able to take full advantage of the Internet as a deployment platform. MIS is usually tightly coupled to algorithms that have substantial hardware and software requirements. Consequently, MIS is deployed on thick clients, which usually leads project managers to allocate more resources during the deployment phase of the application than would be allocated if the application were deployed through a web interface. To minimize the costs associated with this scenario, many software providers use or develop plug-ins to give the delivery platform (the internet browser) the features to load, interact with and analyze medical images. Nevertheless, no standard means of achieving this goal has been successful so far. This paper presents a study of WebGL as an alternative to plug-in development for efficient rendering of 3D medical models and DICOM images. WebGL is a technology that gives the internet browser native access to the local graphics hardware. Because it is based on OpenGL, a widely accepted graphics industry standard, WebGL is being implemented in most of the major commercial browsers. After a discussion of the details of the technology, a series of experiments is presented to determine the operational boundaries within which WebGL is adequate for MIS. A comparison with current alternatives is also addressed. Finally, conclusions and future work are discussed.

  15. Web tools for large-scale 3D biological images and atlases

    Directory of Open Access Journals (Sweden)

    Husz Zsolt L

    2012-06-01

Background: Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data in the context of a standard web browser. The system provides an interactive visualisation for grey-level and colour 3D images, including multiple image layers and spatial-data overlay. Results: The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol, together with a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions: Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole-image download or client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume.
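The server-side operation behind such a tiled-section protocol can be sketched as follows: cut a 2D section from the volume, then split it into tiles so a browser client downloads only what is in view. This is a minimal illustration (axis-aligned sections only, no compression), not the IIP3D implementation, which also supports arbitrarily oriented planes.

```python
import numpy as np

def axis_section(volume, axis, index):
    """Extract a 2D section from a 3D volume along one axis: the basic
    server operation before a section is cut into compressed tiles."""
    return np.take(volume, index, axis=axis)

def tiles(section, tile=256):
    """Yield (row, col, tile_array) pieces of a section; a browser client
    then requests only the tiles currently in its viewport."""
    h, w = section.shape[:2]
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            yield r, c, section[r:r + tile, c:c + tile]

# Tiny synthetic volume: one section, cut into four (partial) tiles.
vol = np.arange(4 * 6 * 6).reshape(4, 6, 6)
sec = axis_section(vol, axis=0, index=2)
parts = list(tiles(sec, tile=4))
```

Because each tile is addressed by (section, row, col), responses can be cached and delivered with latency independent of the total volume size, which is what makes the 135GB test volumes browsable.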

  16. Web tools for large-scale 3D biological images and atlases

    Science.gov (United States)

    2012-01-01

    Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data delivering compressed tiled images enabling users to browse through very-large volume data in the context of a standard web-browser. The system provides an interactive visualisation for grey-level and colour 3D images including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume. PMID:22676296
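The tiled delivery described above follows the request/response pattern of the Internet Imaging Protocol. A minimal sketch of how a client might compose such a tile request, assuming FIF and JTL follow the classic IIP convention while the section-plane parameters (PLN, DST) are illustrative inventions, since the abstract does not spell out the IIP3D syntax:

```python
def iip3d_tile_request(base_url, volume, plane, distance, zoom, tile_index):
    """Build an IIP-style tile request for a 2D section through a 3D volume.

    FIF and JTL are standard IIP commands; PLN and DST are hypothetical
    stand-ins for the IIP3D sectioning extension, whose exact syntax the
    abstract does not give.
    """
    params = [
        ("FIF", volume),                  # image/volume identifier (standard IIP)
        ("PLN", plane),                   # hypothetical: section-plane orientation
        ("DST", str(distance)),           # hypothetical: offset along the plane normal
        ("JTL", f"{zoom},{tile_index}"),  # JPEG tile at (resolution level, tile index)
    ]
    return base_url + "?" + "&".join(f"{k}={v}" for k, v in params)

url = iip3d_tile_request("http://example.org/fcgi-bin/iipsrv.fcgi",
                         "embryo.img", "xy", 120, 3, 42)
print(url)
```

Because each request names a single tile at a single resolution, the client only ever downloads the small portion of the 135GB volume currently in view.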

  17. A Web Service for File-Level Access to Disk Images

    Directory of Open Access Journals (Sweden)

    Sunitha Misra

    2014-07-01

    Full Text Available Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches in creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries including The Sleuth Kit and libewf along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.
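File-level access to a disk image begins with locating the file systems inside it. The project itself relies on The Sleuth Kit and libewf for this; as a self-contained illustration of the very first step, the sketch below parses the partition table of a classic MBR using only the standard library (a synthetic one-partition image is built in-code):

```python
import struct

def parse_mbr_partitions(mbr: bytes):
    """Parse the four primary partition entries of a classic MBR.

    A simplified stand-in for what The Sleuth Kit does when it opens a
    disk image; real code would use pytsk3/libewf rather than raw struct
    reads, and would also handle GPT, extended partitions, etc.
    """
    assert len(mbr) == 512 and mbr[510:512] == b"\x55\xaa", "bad MBR"
    parts = []
    for i in range(4):
        entry = mbr[446 + 16 * i: 446 + 16 * (i + 1)]
        # layout: status(1), CHS start(3), type(1), CHS end(3), LBA start(4), sectors(4)
        status, ptype, lba_start, sectors = struct.unpack("<B3xB3xII", entry)
        if ptype:  # partition type 0x00 means the slot is unused
            parts.append({"type": ptype, "start_lba": lba_start,
                          "sectors": sectors})
    return parts

# Build a synthetic MBR with one partition: type 0x83 (Linux), LBA 2048, 4096 sectors
entry = struct.pack("<B3xB3xII", 0x80, 0x83, 2048, 4096)
mbr = b"\x00" * 446 + entry + b"\x00" * (16 * 3) + b"\x55\xaa"
print(parse_mbr_partitions(mbr))  # [{'type': 131, 'start_lba': 2048, 'sectors': 4096}]
```

From the start LBA, a tool can then open the file system at that offset and enumerate files without ever mounting the device.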

  18. Towards a semantic PACS: Using Semantic Web technology to represent imaging data.

    Science.gov (United States)

    Van Soest, Johan; Lustberg, Tim; Grittner, Detlef; Marshall, M Scott; Persoon, Lucas; Nijsten, Bas; Feltens, Peter; Dekker, Andre

    2014-01-01

    The DICOM standard is ubiquitous within medicine. However, improved DICOM semantics would significantly enhance search operations. Furthermore, databases of current PACS systems are not flexible enough for the demands within image analysis research. In this paper, we investigated whether we can use Semantic Web technology to store and represent metadata of DICOM image files, as well as to link additional computational results to image metadata. Therefore, we developed a proof of concept containing two applications: one to store commonly used DICOM metadata in an RDF repository, and one to calculate imaging biomarkers based on DICOM images, and store the biomarker values in an RDF repository. This enabled us to search for all patients with a gross tumor volume calculated to be larger than 50 cc. We have shown that we can successfully store the DICOM metadata in an RDF repository and are refining our proof of concept with regards to volume naming, value representation, and the applications themselves.
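The gross-tumor-volume query in the abstract is the kind of join-and-filter a SPARQL engine performs over RDF triples. A toy in-memory version in plain Python, where the predicate names are invented for illustration (the paper's actual RDF vocabulary is not given in the abstract):

```python
# Triples: (subject, predicate, object); URIs shortened to readable strings.
# Predicate names are hypothetical, chosen only to illustrate the pattern.
triples = [
    ("patient1", "hasStructure", "gtv1"),
    ("gtv1", "volumeCc", 62.0),
    ("patient2", "hasStructure", "gtv2"),
    ("gtv2", "volumeCc", 35.5),
]

def patients_with_gtv_over(triples, threshold_cc):
    """Answer 'all patients with a GTV larger than threshold' the way a
    SPARQL query would: join hasStructure with volumeCc, then filter."""
    volumes = {s: o for s, p, o in triples if p == "volumeCc"}
    return [s for s, p, o in triples
            if p == "hasStructure" and volumes.get(o, 0) > threshold_cc]

print(patients_with_gtv_over(triples, 50))  # ['patient1']
```

The point of the RDF representation is precisely that such queries work across metadata and computed biomarkers stored by different applications, since both write to the same triple store.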

  19. Body-wave retrieval and imaging from ambient seismic fields with very dense arrays

    Science.gov (United States)

    Nakata, N.; Boué, P.; Beroza, G. C.

    2015-12-01

    Correlation-based analyses of ambient seismic wavefields are a powerful tool for retrieving subsurface information such as stiffness, anisotropy, and heterogeneity at a variety of scales. These analyses can be considered to be data-driven wavefield modeling. Studies of ambient-field tomography have mostly focused on surface waves, especially fundamental-mode Rayleigh waves. Although surface-wave tomography is useful for modeling 3D velocities, its spatial resolution is limited due to the extended depth sensitivity of the surface-wave measurements. Moreover, to represent elastic media, we need at least two stiffness parameters (e.g., shear and bulk moduli). We develop a technique to retrieve P diving waves from the ambient field observed by a dense geophone network (~2500 receivers with 100-m spacing) at Long Beach, California. With two-step filtering, we improve the signal-to-noise ratio of body waves to extract P wave observations that we use for tomography to estimate 3D P-wave velocity structure. The small scale-length heterogeneity of the velocity model follows a power law with ellipsoidal anisotropy. We also discuss possibilities to retrieve reflected waves from the ambient field and show other applications of the body-wave extraction at different locations and scales. Note that reflected waves penetrate deeper than diving waves and have the potential to provide much higher spatial resolution.

  20. A web server for interactive and zoomable Chaos Game Representation images.

    Science.gov (United States)

    Arakawa, Kazuharu; Oshita, Kazuki; Tomita, Masaru

    2009-09-17

    Chaos Game Representation (CGR) is a generalized scale-independent Markov transition table, which is useful for the visualization and comparative study of genomic signature, or for the study of characteristic sequence motifs. However, in order to fully utilize the scale-independent properties of CGR, it should be accessible through a scale-independent user interface instead of static images. Here we describe a web server and Perl library for generating zoomable CGR images utilizing the Google Maps API, which is also easily searchable for specific motifs. The web server is freely accessible at http://www.g-language.org/wiki/cgr/, and the Perl library as well as the source code is distributed with the G-language Genome Analysis Environment under the GNU General Public License.
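The CGR construction itself is compact enough to sketch: each nucleotide pulls the current point halfway toward its assigned corner of the unit square, so every subsequence maps to a distinct sub-square. The corner assignment below is one common convention, not necessarily the one this server uses:

```python
def cgr_points(seq):
    """Chaos Game Representation of a DNA sequence: starting from the
    centre of the unit square, move halfway toward the corner assigned
    to each successive nucleotide and record the visited points."""
    corners = {"A": (0.0, 0.0), "C": (0.0, 1.0),
               "G": (1.0, 1.0), "T": (1.0, 0.0)}
    x, y = 0.5, 0.5          # start at the centre of the square
    pts = []
    for base in seq:
        cx, cy = corners[base]
        x, y = (x + cx) / 2, (y + cy) / 2
        pts.append((x, y))
    return pts

pts = cgr_points("ACGT")
# After 'A' the point moves halfway from (0.5, 0.5) toward (0, 0) -> (0.25, 0.25)
```

Because each additional base halves the scale, zooming into the resulting image (as the Google Maps interface allows) reveals the frequency structure of progressively longer motifs.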

  1. A web server for interactive and zoomable Chaos Game Representation images

    Directory of Open Access Journals (Sweden)

    Oshita Kazuki

    2009-09-01

    Full Text Available Abstract Chaos Game Representation (CGR) is a generalized scale-independent Markov transition table, which is useful for the visualization and comparative study of genomic signature, or for the study of characteristic sequence motifs. However, in order to fully utilize the scale-independent properties of CGR, it should be accessible through a scale-independent user interface instead of static images. Here we describe a web server and Perl library for generating zoomable CGR images utilizing the Google Maps API, which is also easily searchable for specific motifs. The web server is freely accessible at http://www.g-language.org/wiki/cgr/, and the Perl library as well as the source code is distributed with the G-language Genome Analysis Environment under the GNU General Public License.

  2. J-Plus Web Portal

    Science.gov (United States)

    Civera Lorenzo, Tamara

    2017-10-01

    Brief presentation about the J-PLUS EDR data access web portal (http://archive.cefca.es/catalogues/jplus-edr), where the different services available to retrieve images and catalogue data are presented. The J-PLUS Early Data Release (EDR) archive includes two types of data: images, and dual and single catalogue data which include parameters measured from images. The J-PLUS web portal offers catalogue data and images through several online data access tools or services, each suited to a particular need: coverage map, sky navigator, object visualization, image search, cone search, object list search, and Virtual Observatory services (Simple Cone Search, Simple Image Access Protocol, Simple Spectral Access Protocol, and Table Access Protocol).
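The Simple Cone Search service mentioned above is an IVOA standard whose request is just a base URL plus RA, DEC and SR (search radius) parameters, all in decimal degrees. A sketch of composing such a request (the endpoint URL is a placeholder, not the real J-PLUS service address):

```python
from urllib.parse import urlencode

def cone_search_url(base, ra_deg, dec_deg, radius_deg):
    """Build a Virtual Observatory Simple Cone Search request.

    RA, DEC and SR are the parameter names fixed by the IVOA Simple
    Cone Search standard; the base URL below is an assumption for
    illustration only.
    """
    return base + "?" + urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})

url = cone_search_url("http://example.org/scs", 180.0, 2.5, 0.1)
print(url)  # http://example.org/scs?RA=180.0&DEC=2.5&SR=0.1
```

The service answers with a VOTable of catalogue rows inside the cone, which is what makes the same archive queryable from any VO-aware client, not just the portal's own web pages.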

  3. Early burst detection for memory-efficient image retrieval – Extended version

    OpenAIRE

    Shi, Miaojing; Avrithis, Yannis; Jégou, Hervé

    2015-01-01

    Recent works show that image comparison based on local descriptors is corrupted by visual bursts, which tend to dominate the image similarity. The existing strategies, like power-law normalization, improve the results by discounting the contribution of visual bursts to the image similarity. In this paper, we propose to explicitly detect the visual bursts in an image at an early stage. We compare several detection strategies jointly taking into account feature similarity and geometrical quanti...

  4. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks, where we show that the exploitation of visual content yields improvements in accuracy for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
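The reranking step can be pictured as late fusion of text and visual scores over the candidate set. A deliberately simple linear-fusion sketch (the paper learns its combination from data; the fixed weight alpha here is an illustrative stand-in):

```python
def rerank(text_results, visual_sim, alpha=0.7):
    """Rerank text-retrieved documents with visual evidence.

    text_results: list of (doc_id, text_score) from the text engine.
    visual_sim: dict doc_id -> similarity of the page's images to the
    query concept. The linear fusion with a fixed alpha is a toy stand-in
    for the learned combination used in the paper.
    """
    fused = [(doc, alpha * ts + (1 - alpha) * visual_sim.get(doc, 0.0))
             for doc, ts in text_results]
    return sorted(fused, key=lambda p: p[1], reverse=True)

text_hits = [("a", 0.9), ("b", 0.8), ("c", 0.7)]
vis = {"b": 0.95, "c": 0.2}          # page b's images match the query well
print([d for d, _ in rerank(text_hits, vis)])  # ['b', 'a', 'c']
```

Only the short candidate list is rescored, which is why the approach keeps the efficiency of the underlying text engine: the visual features are consulted for tens of documents per query, not the whole index.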

  5. A real-time framework for fast data retrieval in an image database of volcano activity scenarios

    Science.gov (United States)

    Aliotta, Marco Antonio; Cannata, Andrea; Cassisi, Carmelo; Ciancitto, Francesco; Montalto, Placido; Prestifilippo, Michele

    2015-04-01

    Explosive activity at Stromboli Volcano (Aeolian Islands) is continuously monitored by INGV-OE in order to analyze its eruptive dynamics and specific scenarios. In particular, the images acquired from thermal cameras represent a big collection of data. In order to extract useful information from thermal image sequences, we need an efficient way to explore and retrieve information from a huge amount of data. In this work, a novel framework capable of fast data retrieval, based on the "metric space" concept, is presented. Building on it, we implemented an indexing algorithm related to similarity laws. The focal point is finding objects of a set that are "close" to a given query, according to a similarity criterion. To perform this task, we applied morphological image processing techniques to each video frame in order to map the shape area of each explosion into a closed curve representing the explosion contour itself. To constitute a metric space, we chose a certain number of features obtained from parameters related to this closed curve and used them as objects of the metric space, where similarity can be evaluated using an appropriate "metric" function to calculate the distances. Unfortunately, this approach has to deal with an intrinsic issue involving the complexity and the number of distance functions to be calculated on a large amount of data. To overcome this drawback, we used a novel abstract data structure called "K-Pole Tree", having the property of minimizing the number of distances to be calculated among objects. Our method allows for fast retrieval of similar objects using a Euclidean distance function among the features of the metric space. Thus, we can cluster explosions related to different kinds of volcanic activity, using "pivot" items. For example, given a known image sequence related to a particular type of explosion, it is possible to quickly and easily find all the image sequences that contain only similar
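A metric-space range query over such contour features reduces to distance comparisons against the query object. The linear scan below is the naive baseline that the paper's K-Pole Tree is designed to beat by pruning distance computations; the 2-D feature vectors are placeholders for the real contour descriptors:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def range_query(database, query, radius):
    """Metric-space similarity search: return every stored feature
    vector within `radius` of the query. A linear scan computes one
    distance per object; index structures such as the K-Pole Tree
    exist precisely to avoid most of these computations."""
    return [obj for obj in database if euclidean(obj, query) <= radius]

db = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
print(range_query(db, (0.5, 0.5), 1.0))  # [(0.0, 0.0), (1.0, 1.0)]
```

The only property the search relies on is that the distance function is a metric, which is what lets a tree index discard whole subtrees by the triangle inequality instead of measuring every object.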

  6. Gender differences in autobiographical memory for everyday events: retrieval elicited by SenseCam images versus verbal cues.

    Science.gov (United States)

    St Jacques, Peggy L; Conway, Martin A; Cabeza, Roberto

    2011-10-01

    Gender differences are frequently observed in autobiographical memory (AM). However, few studies have investigated the neural basis of potential gender differences in AM. In the present functional MRI (fMRI) study we investigated gender differences in AMs elicited using dynamic visual images vs verbal cues. We used a novel technology called a SenseCam, a wearable device that automatically takes thousands of photographs. SenseCam differs considerably from other prospective methods of generating retrieval cues because it does not disrupt the ongoing experience. This allowed us to control for potential gender differences in emotional processing and elaborative rehearsal, while manipulating how the AMs were elicited. We predicted that males would retrieve more richly experienced AMs elicited by the SenseCam images vs the verbal cues, whereas females would show equal sensitivity to both cues. The behavioural results indicated that there were no gender differences in subjective ratings of reliving, importance, vividness, emotion, and uniqueness, suggesting that gender differences in brain activity were not due to differences in these measures of phenomenological experience. Consistent with our predictions, the fMRI results revealed that males showed a greater difference in functional activity associated with the rich experience of SenseCam vs verbal cues, than did females.

  7. Probabilistic person identification in TV news programs using image web database

    Science.gov (United States)

    Battisti, F.; Carli, M.; Leo, M.; Neri, A.

    2014-02-01

    The automatic labeling of faces in TV broadcasting is still a challenging problem. The high variability in view points, facial expressions, general appearance, and lighting conditions, as well as occlusions, rapid shot changes, and camera motions, produce significant variations in image appearance. The application of automatic tools for face recognition is not yet fully established and human intervention is needed. In this paper, we deal with automatic face recognition in TV broadcasting programs. The target of the proposed method is to identify the presence of a specific person in a video by means of a set of images downloaded from the Web using a specific search key.

  8. Retrieval of the vacuolar H+-ATPase from phagosomes revealed by live cell imaging.

    Directory of Open Access Journals (Sweden)

    Margaret Clarke

    2010-01-01

    Full Text Available The vacuolar H+-ATPase, or V-ATPase, is a highly-conserved multi-subunit enzyme that transports protons across membranes at the expense of ATP. The resulting proton gradient serves many essential functions, among them energizing transport of small molecules such as neurotransmitters, and acidifying organelles such as endosomes. The enzyme is not present in the plasma membrane from which a phagosome is formed, but is rapidly delivered by fusion with endosomes that already bear the V-ATPase in their membranes. Similarly, the enzyme is thought to be retrieved from phagosome membranes prior to exocytosis of indigestible material, although that process has not been directly visualized. To monitor trafficking of the V-ATPase in the phagocytic pathway of Dictyostelium discoideum, we fed the cells yeast, large particles that maintain their shape during trafficking. To track pH changes, we conjugated the yeast with fluorescein isothiocyanate. Cells were labeled with VatM-GFP, a fluorescently-tagged transmembrane subunit of the V-ATPase, in parallel with stage-specific endosomal markers or in combination with mRFP-tagged cytoskeletal proteins. We find that the V-ATPase is commonly retrieved from the phagosome membrane by vesiculation shortly before exocytosis. However, if the cells are kept in confined spaces, a bulky phagosome may be exocytosed prematurely. In this event, a large V-ATPase-rich vacuole coated with actin typically separates from the acidic phagosome shortly before exocytosis. This vacuole is propelled by an actin tail and soon acquires the properties of an early endosome, revealing an unexpected mechanism for rapid recycling of the V-ATPase. Any V-ATPase that reaches the plasma membrane is also promptly retrieved. Thus, live cell microscopy has revealed both a usual route and alternative means of recycling the V-ATPase in the endocytic pathway.

  9. A single-sided homogeneous Green's function representation for holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval

    NARCIS (Netherlands)

    Wapenaar, C.P.A.; Thorbecke, J.W.; van der Neut, J.R.

    2016-01-01

    Green's theorem plays a fundamental role in a diverse range of wavefield imaging applications, such as holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval. In many of those applications, the homogeneous Green's function (i.e. the Green's

  10. Bit-Scalable Deep Hashing With Regularized Similarity Learning for Image Retrieval and Person Re-Identification.

    Science.gov (United States)

    Zhang, Ruimao; Lin, Liang; Zhang, Rui; Zuo, Wangmeng; Zhang, Lei

    2015-12-01

    Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Meanwhile, the bit lengths of output hashing codes are preset in most previous methods, neglecting the significance level of different bits and restricting their practical flexibility. To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. We pose hashing learning as a problem of regularized similarity learning. In particular, we organize the training images into a batch of triplet samples, each sample containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between the matched pairs and the mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce the adjacency consistency, i.e., images of similar appearances should have similar codes. A deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code lengths by truncating the insignificant bits. Our framework outperforms state-of-the-art methods on public benchmarks of similar image search and also achieves promising results in the application of person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes well preserve the discriminative powers with shorter code lengths.
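The bit-scalable idea, weighting each hash bit and truncating the least significant ones, can be sketched directly. The weights below are toy values, not ones learned by the paper's network:

```python
def weighted_hamming(code_a, code_b, weights):
    """Weighted Hamming distance between two binary codes (lists of 0/1):
    sum the weights of the positions where the codes disagree."""
    return sum(w for a, b, w in zip(code_a, code_b, weights) if a != b)

def truncate(code, weights, k):
    """Bit-scalable shortening: keep only the k bits with the largest
    learned weights, preserving their original order. The weights here
    are toy values standing in for the ones the network would learn."""
    keep = sorted(range(len(weights)), key=lambda i: -weights[i])[:k]
    keep.sort()                       # preserve original bit order
    return [code[i] for i in keep], [weights[i] for i in keep]

a, b = [1, 0, 1, 1], [1, 1, 0, 1]
w = [0.9, 0.1, 0.7, 0.3]
print(weighted_hamming(a, b, w))      # bits 1 and 2 differ -> 0.1 + 0.7
a2, w2 = truncate(a, w, 2)            # keeps bits 0 and 2, the heaviest
```

Because the least-weighted bits contribute least to the distance, dropping them shortens the code while changing rankings as little as possible, which is the property the abstract refers to as preserving discriminative power at shorter code lengths.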

  11. Introduction to information retrieval

    CERN Document Server

    Manning, Christopher D; Schütze, Hinrich

    2008-01-01

    Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced un

  12. Mining big data sets of plankton images: a zero-shot learning approach to retrieve labels without training data

    Science.gov (United States)

    Orenstein, E. C.; Morgado, P. M.; Peacock, E.; Sosik, H. M.; Jaffe, J. S.

    2016-02-01

    Technological advances in instrumentation and computing have allowed oceanographers to develop imaging systems capable of collecting extremely large data sets. With the advent of in situ plankton imaging systems, scientists must now commonly deal with "big data" sets containing tens of millions of samples spanning hundreds of classes, making manual classification untenable. Automated annotation methods are now considered to be the bottleneck between collection and interpretation. Typically, such classifiers learn to approximate a function that predicts a predefined set of classes for which a considerable amount of labeled training data is available. The requirement that the training data span all the classes of concern is problematic for plankton imaging systems since they sample such diverse, rapidly changing populations. These data sets may contain relatively rare, sparsely distributed, taxa that will not have associated training data; a classifier trained on a limited set of classes will miss these samples. The computer vision community, leveraging advances in Convolutional Neural Networks (CNNs), has recently attempted to tackle such problems using "zero-shot" object categorization methods. Under a zero-shot framework, a classifier is trained to map samples onto a set of attributes rather than a class label. These attributes can include visual and non-visual information such as what an organism is made out of, where it is distributed globally, or how it reproduces. A second stage classifier is then used to extrapolate a class. In this work, we demonstrate a zero-shot classifier, implemented with a CNN, to retrieve out-of-training-set labels from images. This method is applied to data from two continuously imaging, moored instruments: the Scripps Plankton Camera System (SPCS) and the Imaging FlowCytobot (IFCB). Results from simulated deployment scenarios indicate zero-shot classifiers could be successful at recovering samples of rare taxa in image sets. This
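The two-stage zero-shot pipeline (image to attributes, then attributes to class) can be sketched as follows. The attribute predictor is stubbed out, and the attribute vocabulary and class signatures are invented for illustration; in the real system a CNN produces the attribute scores:

```python
def predict_attributes(image):
    """Stage 1 stub: stand-in for the CNN attribute predictor. Here it
    simply returns precomputed attribute scores attached to the image."""
    return image["attributes"]

def zero_shot_classify(image, class_signatures):
    """Stage 2: assign the class whose attribute signature best matches
    the predicted attributes. The class needs no training images, only
    a signature, which is what makes the scheme 'zero-shot'."""
    attrs = predict_attributes(image)
    def score(sig):
        return sum(a * s for a, s in zip(attrs, sig))
    return max(class_signatures, key=lambda c: score(class_signatures[c]))

# Attribute order (illustrative): [has_flagellum, forms_chains, is_transparent]
signatures = {"diatom_chain": [0.0, 1.0, 1.0],
              "dinoflagellate": [1.0, 0.0, 0.5]}
img = {"attributes": [0.9, 0.1, 0.4]}
print(zero_shot_classify(img, signatures))  # dinoflagellate
```

Rare taxa that never appear in the training set can still be recovered, provided someone can write down their attribute signature.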

  13. Activity Detection and Retrieval for Image and Video Data with Limited Training

    Science.gov (United States)

    2015-06-10

    ...geometric snakes to segment the image into constant-intensity regions. The Chan-Vese framework proposes to partition the image f(x), x ∈ Ω ⊆ ℝ

  14. Data ontology and an information system realization for web-based management of image measurements.

    Science.gov (United States)

    Prodanov, Dimiter

    2011-01-01

    Image acquisition, processing, and quantification of objects (morphometry) require the integration of data inputs and outputs originating from heterogeneous sources. Management of the data exchange along this workflow in a systematic manner poses several challenges, notably the description of the heterogeneous meta-data and the interoperability between the software used. The use of integrated software solutions for morphometry and management of imaging data in combination with ontologies can reduce meta-data loss and greatly facilitate subsequent data analysis. This paper presents an integrated information system, called LabIS. The system has the objectives to automate (i) the process of storage, annotation, and querying of image measurements and (ii) to provide means for data sharing with third party applications consuming measurement data using open standard communication protocols. LabIS implements a 3-tier architecture with a relational database back-end and an application logic middle tier realizing a web-based user interface for reporting and annotation and a web-service communication layer. The image processing and morphometry functionality is backed by interoperability with ImageJ, a public domain image processing software, via integrated clients. Instrumental for the latter feat was the construction of a data ontology representing the common measurement data model. LabIS supports user profiling and can store arbitrary types of measurements, regions of interest, calibrations, and ImageJ settings. Interpretation of the stored measurements is facilitated by atlas mapping and ontology-based markup. The system can be used as an experimental workflow management tool allowing for description and reporting of the performed experiments. LabIS can be also used as a measurements repository that can be transparently accessed by computational environments, such as Matlab. Finally, the system can be used as a data sharing tool.

  15. Data ontology and an information system realization for web-based management of image measurements

    Directory of Open Access Journals (Sweden)

    Dimiter eProdanov

    2011-11-01

    Full Text Available Image acquisition, processing and quantification of objects (morphometry) require the integration of data inputs and outputs originating from heterogeneous sources. Management of the data exchange along this workflow in a systematic manner poses several challenges, notably the description of the heterogeneous metadata and the interoperability between the software used. The use of integrated software solutions for morphometry and management of imaging data in combination with ontologies can reduce metadata loss and greatly facilitate subsequent data analysis. This paper presents an integrated information system, called LabIS. The system has the objectives to automate (i) the process of storage, annotation and querying of image measurements and (ii) to provide means for data sharing with 3rd party applications consuming measurement data using open standard communication protocols. LabIS implements a 3-tier architecture with a relational database back-end and an application logic middle tier realizing a web-based user interface for reporting and annotation and a web-service communication layer. The image processing and morphometry functionality is backed by interoperability with ImageJ, a public domain image processing software, via integrated clients. Instrumental for the latter was the construction of a data ontology representing the common measurement data model. LabIS supports user profiling and can store arbitrary types of measurements, regions of interest, calibrations and ImageJ settings. Interpretation of the stored measurements is facilitated by atlas mapping and ontology-based markup. The system can be used as an experimental workflow management tool allowing for description and reporting of the performed experiments. LabIS can be also used as a measurements repository that can be transparently accessed by computational environments, such as Matlab. Finally, the system can be used as a data sharing tool.

  16. Automatic segmentation of subfigure image panels for multimodal biomedical document retrieval

    Science.gov (United States)

    Cheng, Beibei; Antani, Sameer; Stanley, R. Joe; Thoma, George R.

    2011-01-01

    Biomedical images are often referenced for clinical decision support (CDS), educational purposes, and research. The task of automatically finding the images in a scientific article that are most useful for the purpose of determining relevance to a clinical situation is traditionally done using text and is quite challenging. We propose to improve this by associating image features from the entire image and from relevant regions of interest with biomedical concepts described in the figure caption or discussion in the article. However, images used in scientific article figures are often composed of multiple panels where each sub-figure (panel) is referenced in the caption using alphanumeric labels, e.g. Figure 1(a), 2(c), etc. It is necessary to separate individual panels from a multi-panel figure as a first step toward automatic annotation of images. In this work we present methods that make our previously reported efforts more robust. Specifically, we address the limitation in segmenting figures that do not exhibit explicit inter-panel boundaries, e.g. illustrations, graphs, and charts. We present a novel hybrid clustering algorithm based on particle swarm optimization (PSO) with a fuzzy logic controller (FLC) to locate related figure components in such images. Results from our evaluation are very promising, with 93.64% panel detection accuracy for regular (non-illustration) figure images and 92.1% accuracy for illustration images. A computational complexity analysis also shows that PSO is an optimal approach with relatively low computation time. The accuracy of separating these two types of images is 98.11% and is achieved using a decision tree.
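For figures that do have explicit inter-panel gaps, panel separation reduces to finding all-background columns; that is the simple case which the paper extends with PSO/FLC clustering for illustrations lacking such boundaries. A sketch of that baseline on a toy pixel grid:

```python
def split_columns(img, background=0):
    """Split a figure into panels at columns that are entirely background.

    `img` is a 2D list of pixel values (rows of columns). This uniform-gap
    heuristic handles only figures with explicit inter-panel boundaries;
    the same idea applied to rows would yield the vertical cuts.
    """
    width = len(img[0])
    blank = [all(row[x] == background for row in img) for x in range(width)]
    panels, start = [], None
    for x in range(width):
        if not blank[x] and start is None:
            start = x                                   # panel begins
        elif blank[x] and start is not None:
            panels.append((start, x - 1)); start = None  # panel ends at a gap
    if start is not None:
        panels.append((start, width - 1))
    return panels

# Two 2-pixel-wide panels separated by one blank column
img = [[1, 1, 0, 1, 1],
       [1, 1, 0, 1, 1]]
print(split_columns(img))  # [(0, 1), (3, 4)]
```

Illustrations, graphs and charts often have no such blank gutters, which is exactly why the paper turns to clustering of figure components instead.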

  17. Panoramic-image-based rendering solutions for visualizing remote locations via the web

    Science.gov (United States)

    Obeysekare, Upul R.; Egts, David; Bethmann, John

    2000-05-01

    With advances in panoramic image-based rendering techniques and the rapid expansion of web advertising, new techniques are emerging for visualizing remote locations on the WWW. Success of these techniques depends on how easy and inexpensive it is to develop a new type of web content that provides pseudo 3D visualization at home, 24-hours a day. Furthermore, the acceptance of this new visualization medium depends on the effectiveness of the familiarization tools by a segment of the population that was never exposed to this type of visualization. This paper addresses various hardware and software solutions available to collect, produce, and view panoramic content. While cost and effectiveness of building the content is being addressed using a few commercial hardware solutions, effectiveness of familiarization tools is evaluated using a few sample data sets.

  18. ALDF Data Retrieval Algorithms for Validating the Optical Transient Detector (OTD) and the Lightning Imaging Sensor (LIS)

    Science.gov (United States)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    1997-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions, and solutions for the plane (i.e., no Earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated data sets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS). We also introduce a quadratic planar solution that is useful when only three arrival time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated data sets and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 degrees.
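The bearing-only part of the linear planar retrieval can be sketched as a small least-squares problem: each station's bearing defines a line through the source, and the stacked normal equations are solved for the source position. Arrival-time terms and the north-referenced convention of real ALDF bearings are omitted here for brevity, so this is a simplified illustration rather than the paper's full solution:

```python
import math

def locate_from_bearings(sensors):
    """Least-squares planar source location from bearing lines.

    sensors: list of (x, y, bearing_rad), with the bearing measured from
    the +x axis (a simplifying assumption; ALDF bearings are referenced
    to north). Each bearing defines the line
        sin(t)*(X - x) - cos(t)*(Y - y) = 0,
    and we solve the stacked system A [X, Y]^T = b via the 2x2 normal
    equations.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for x, y, t in sensors:
        s, c = math.sin(t), math.cos(t)
        r = s * x - c * y                 # right-hand side for this row
        a11 += s * s; a12 += -s * c; a22 += c * c
        b1 += s * r;  b2 += -c * r
    det = a11 * a22 - a12 * a12           # assumes non-degenerate geometry
    X = (b1 * a22 - a12 * b2) / det
    Y = (a11 * b2 - a12 * b1) / det
    return X, Y

# Source at (3, 4) seen from two stations with exact (noise-free) bearings
src = (3.0, 4.0)
stations = [(0.0, 0.0), (10.0, 0.0)]
obs = [(x, y, math.atan2(src[1] - y, src[0] - x)) for x, y in stations]
X, Y = locate_from_bearings(obs)
print(round(X, 6), round(Y, 6))  # 3.0 4.0
```

With noisy bearings the same normal equations return the least-squares intersection of the bearing lines, which is the sense in which the retrieval is "linear algebraic".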

  19. Classification and assessment of retrieved electron density maps in coherent X-ray diffraction imaging using multivariate analysis.

    Science.gov (United States)

    Sekiguchi, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi

    2016-01-01

    Coherent X-ray diffraction imaging (CXDI) is one of the techniques used to visualize structures of non-crystalline particles of micrometer to submicrometer size in materials and biological science. In the structural analysis of CXDI, the electron density map of a sample particle can theoretically be reconstructed from a diffraction pattern by using phase-retrieval (PR) algorithms. However, in practice, the reconstruction is difficult because diffraction patterns are affected by Poisson noise and missing data in small-angle regions due to the beam stop and the saturation of detector pixels. In contrast to X-ray protein crystallography, in which the phases of diffracted waves are experimentally estimated, phase retrieval in CXDI relies entirely on the computational procedure driven by the PR algorithms. Thus, objective criteria and methods to assess the accuracy of retrieved electron density maps are necessary in addition to conventional parameters monitoring the convergence of PR calculations. Here, a data analysis scheme, named ASURA, is proposed that selects the most probable electron density maps from a set of maps retrieved from 1000 different random seeds for a diffraction pattern. Each electron density map composed of J pixels is expressed as a point in a J-dimensional space. Principal component analysis is applied to describe characteristics in the distribution of the maps in the J-dimensional space. When the distribution is characterized by a small number of principal components, the distribution is classified using the k-means clustering method. The classified maps are evaluated by several parameters to assess the quality of the maps. Using the proposed scheme, structure analysis of a diffraction pattern from a non-crystalline particle is conducted in two stages: estimation of the overall shape and determination of the fine structure inside the support shape. In each stage, the most accurate and probable density maps are objectively selected.
The validity
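
    The selection step described above, expressing each map as a point in J-dimensional pixel space, reducing the set with principal component analysis and clustering with k-means, can be sketched as below. The synthetic "maps" and cluster count are illustrative only; the real scheme runs on maps retrieved from 1000 random seeds and adds quality parameters.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Principal-component projection of the rows of X (each row: one map, J pixels)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k=2, iters=50):
    """Plain k-means with farthest-point initialization for determinism."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(0)
# Two synthetic families of retrieved maps (J = 64 pixels each).
maps = np.vstack([rng.normal(0.0, 0.1, (20, 64)),
                  rng.normal(1.0, 0.1, (20, 64))])
coords = pca_project(maps)      # low-dimensional description of the distribution
labels = kmeans(coords, k=2)    # classify the maps
```

    When the first few principal components capture most of the variance, as assumed here, the clusters in the projected space correspond to distinct families of reconstructions.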

  20. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. Lesions and their annotations are of key interest to radiologists, since patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based feature vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
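
    The retrieval core, a histogram feature vector computed over the segmented liver plus an importance-weighted distance ranking, can be sketched as follows. The bin range, weights and L1 distance are illustrative assumptions, not the authors' learned values.

```python
import numpy as np

def histogram_features(volume, liver_mask, bins=32, value_range=(-200, 400)):
    """Normalized intensity histogram of the segmented liver region
    (the HU range here is an assumption)."""
    h, _ = np.histogram(volume[liver_mask], bins=bins, range=value_range, density=True)
    return h

def rank_database(query_feat, db_feats, weights=None):
    """Order database scans by weighted L1 distance to the query features."""
    w = np.ones_like(query_feat) if weights is None else weights
    dists = np.abs(db_feats - query_feat) @ w
    return np.argsort(dists)
```

    The weights play the role of the learned relative-importance measure: bins that discriminate between lesion types would receive larger weights than bins dominated by healthy parenchyma.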

  1. Imaging on the surfaces of an uneven thickness medium based on hybrid phase retrieval with the assistance of off-axis digital holography

    Science.gov (United States)

    Wang, Fengpeng; Wang, Dayong; Panezai, Spozmai; Rong, Lu; Wang, Yunxin; Zhao, Jie

    2017-10-01

    A hybrid phase retrieval method with the assistance of off-axis digital holography is proposed for imaging objects on the surfaces of a transparent medium with uneven thickness. The approximate phase distribution is obtained by a constrained optimization approach from the off-axis hologram, and it is used in an iterative procedure for retrieving the complex field of the object from the Gabor hologram. Furthermore, principal component analysis is introduced to compensate for phase aberrations caused by the medium. Numerical simulations and optical experiments were carried out to validate the proposed method. The quality of the reconstructed image is improved remarkably compared to off-axis digital holography alone.
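
    The iterative core of such methods can be illustrated with the generic Fienup error-reduction loop, which alternates an object-domain support constraint with the measured Fourier amplitude. This is only a sketch of that generic loop: the hybrid method above seeds the iteration with the off-axis phase estimate rather than a random guess and adds PCA aberration compensation, none of which is reproduced here.

```python
import numpy as np

def error_reduction(meas_amp, support, g0, iters=200):
    """Alternate object-domain support and Fourier-amplitude constraints.

    meas_amp: measured Fourier amplitudes (e.g. from a Gabor hologram)
    support:  boolean mask where the object may be non-zero
    g0:       initial complex field estimate (the hybrid method would use
              the off-axis phase estimate here instead of a random guess)
    """
    g = g0
    for _ in range(iters):
        g = g * support                            # support constraint
        G = np.fft.fft(g)
        G = meas_amp * np.exp(1j * np.angle(G))    # impose measured amplitude
        g = np.fft.ifft(G)
    return g
```

    Error reduction is guaranteed not to increase the Fourier-domain error from iteration to iteration, which is why a good starting phase, as supplied by the off-axis hologram, matters for avoiding stagnation.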

  2. A New Era of Search Engines: Not Just Web Pages Anymore.

    Science.gov (United States)

    Hock, Ran

    2002-01-01

    Discusses various types of information that can be retrieved from the Web via search engines. Highlights include Web pages; time frames, including historical coverage and currentness; text pages in formats other than HTML; directory sites; news articles; discussion groups; images; and audio and video. (LRW)

  3. Computer-Aided Diagnosis in Mammography Using Content-Based Image Retrieval Approaches: Current Status and Future Perspectives

    Directory of Open Access Journals (Sweden)

    Bin Zheng

    2009-06-01

    With the rapid advance of digital imaging technologies, content-based image retrieval (CBIR) has become one of the most active research areas in computer vision. In the last several years, developing computer-aided detection and/or diagnosis (CAD) schemes that use CBIR to search for clinically relevant and visually similar medical images (or regions) depicting suspicious lesions has also been attracting research interest. CBIR-based CAD schemes have the potential to provide radiologists with “visual aid” and increase their confidence in accepting CAD-cued results in decision making. The CAD performance and reliability depend on a number of factors, including the optimization of lesion segmentation, feature selection, reference database size, computational efficiency, and the relationship between the clinical relevance and visual similarity of the CAD results. By presenting and comparing a number of approaches commonly used in previous studies, this article identifies and discusses the optimal approaches in developing CBIR-based CAD schemes and assessing their performance. Although preliminary studies have suggested that using CBIR-based CAD schemes might improve radiologists’ performance and/or increase their confidence in decision making, this technology is still at an early development stage. Much research work is needed before CBIR-based CAD schemes can be accepted in clinical practice.

  4. Green's Function Retrieval and Marchenko Imaging in a Dissipative Acoustic Medium.

    Science.gov (United States)

    Slob, Evert

    2016-04-22

    Single-sided Marchenko equations for Green's function construction and imaging relate the measured reflection response of a lossless heterogeneous medium to an acoustic wave field inside this medium. I derive two sets of single-sided Marchenko equations for the same purpose, each in a heterogeneous medium, with one medium being dissipative and the other a corresponding medium with negative dissipation. Double-sided scattering data of the dissipative medium are required as input to compute the surface reflection response in the corresponding medium with negative dissipation. I show that each set of single-sided Marchenko equations leads to Green's functions with a virtual receiver inside the medium: one exists inside the dissipative medium and one in the medium with negative dissipation. This forms the basis of imaging inside a dissipative heterogeneous medium. I relate the Green's functions to the reflection response inside each medium, from which the image can be constructed. I illustrate the method with a one-dimensional example that shows the image quality. The method has a potentially wide range of imaging applications where the material under test is accessible from two sides.

  5. Lateral heterogeneity imaged by small-aperture ScS retrieval from the ambient seismic field

    Science.gov (United States)

    Spica, Zack; Perton, Mathieu; Beroza, Gregory C.

    2017-08-01

    Interpreting core-related body wave traveltimes is challenging for seismologists because Earth's heterogeneities are averaged over thousands of kilometers and the sparsity of earthquake measurements makes these heterogeneities difficult to localize. We show how the ambient seismic wave field can be used to overcome these limitations. We present a regional-scale analysis of core-mantle boundary reflections (ScS) under Mexico. We show that body wave arrivals (i.e., P and ScS) are retrieved from higher-order cross correlations (C3), a technique that provides a more uniform and controlled source distribution using the scattered waves of the coda of classical ambient field cross correlations (C1). Then, we extract ScS traveltimes along a dense linear array in Mexico and find that lithospheric lateral heterogeneity, such as the subducting Cocos slab beneath Mexico, may have a strong impact on ScS traveltimes. In parallel, we show that lateral heterogeneity such as a possible ultralow velocity zone (ULVZ) near the core-mantle boundary might also affect, although to a lesser extent, the traveltime anomalies. Our results and interpretation are supported through numerical simulations that account for slab and ULVZ properties.

  6. Development of a 3D WebGIS System for Retrieving and Visualizing CityGML Data Based on their Geometric and Semantic Characteristics by Using Free and Open Source Technology

    Science.gov (United States)

    Pispidikis, I.; Dimopoulou, E.

    2016-10-01

    CityGML is considered an optimal standard for representing 3D city models. However, international experience has shown that visualization of the latter is quite difficult to implement on the web, due to the large size of the data and the complexity of CityGML. As a result, in the context of this paper, a 3D WebGIS application is developed in order to successfully retrieve and visualize CityGML data in accordance with their respective geometric and semantic characteristics. Furthermore, the available web technologies and the architecture of WebGIS systems are investigated, as provided by international experience, in order to be utilized in the most appropriate way for the purposes of this paper. Specifically, a PostgreSQL/PostGIS database is used, in compliance with the 3DCityDB schema. At the Server tier, Apache HTTP Server and GeoServer are utilized, with PHP as the server-side programming language. At the Client tier, which implements the interface of the application, the following technologies are used: JQuery, AJAX, JavaScript, HTML5, WebGL and Ol3-Cesium. Finally, it is worth mentioning that the application's primary objectives are a user-friendly interface and a fully open-source development.

  7. Spectral embedding-based multiview features fusion for content-based image retrieval

    Science.gov (United States)

    Feng, Lin; Yu, Laihang; Zhu, Hai

    2017-09-01

    In many computer vision applications, an object can be described by multiple features from different views. For instance, to characterize an image well, a variety of visual features is exploited to represent color, texture, and shape information, and each feature is encoded into a vector. Recently, we have witnessed a surge of interest in combining multiview features for image recognition and classification. However, these features are always located in different high-dimensional spaces, which challenges feature fusion, and many conventional methods fail to integrate compatible and complementary information from multiple views. To address the above issues, a multifeature fusion framework is proposed that utilizes multiview spectral embedding and a unified distance metric to integrate features; an alternating optimization is constructed by learning the complementarity between different views. This method exploits the complementary properties of the different views and obtains a low-dimensional embedding from the differently sized feature spaces. Various experiments on several benchmark datasets have verified the excellent performance of the proposed method.
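
    A far simpler baseline than the spectral-embedding optimization above, but one that exposes the core fusion problem, is to normalize per-view distances so that views on different scales become comparable and then combine them with view weights. All names, dimensions and weights below are hypothetical.

```python
import numpy as np

def fused_ranking(query_views, db_views, weights):
    """Rank database items by a weighted sum of per-view distances.

    query_views: list of 1-D feature vectors (one per view: color, texture, ...)
    db_views:    list of 2-D arrays, each (n_items, dim_of_that_view)
    weights:     relative importance of each view
    """
    total = np.zeros(len(db_views[0]))
    for q, db, w in zip(query_views, db_views, weights):
        d = np.linalg.norm(db - q, axis=1)
        total += w * d / (d.max() + 1e-12)   # normalize: views live on different scales
    return np.argsort(total)
```

    The normalization step is the naive stand-in for what the multiview spectral embedding achieves in a principled way: placing heterogeneous feature spaces into a common, comparable representation before measuring similarity.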

  8. An Algorithm for Surface Current Retrieval from X-band Marine Radar Images

    Directory of Open Access Journals (Sweden)

    Chengxi Shen

    2015-06-01

    In this paper, a novel current inversion algorithm from X-band marine radar images is proposed. The routine, for which deep water is assumed, begins with a 3-D FFT of the radar image sequence, followed by the extraction of the dispersion shell from the 3-D image spectrum. Next, the dispersion shell is converted to a polar current shell (PCS) using a polar coordinate transformation. After removing outliers along each radial direction of the PCS, a robust sinusoidal curve fitting is applied to the data points along each circumferential direction of the PCS. The angle corresponding to the maximum of the estimated sinusoid function is determined to be the current direction, and the amplitude of this sinusoidal function is the current speed. For validation, the algorithm is tested against both simulated radar images and field data collected by a vertically-polarized X-band system and ground-truthed with measurements from an acoustic Doppler current profiler (ADCP). From the field data, it is observed that when the current speed is less than 0.5 m/s, the root mean square differences between the radar-derived and the ADCP-measured current speed and direction are 7.3 cm/s and 32.7°, respectively. The results indicate that the proposed procedure, unlike most existing current inversion schemes, is not susceptible to high current speeds and circumvents the need to consider aliasing. Meanwhile, the relatively low computational cost makes it an excellent choice in practical marine applications.
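
    The circumferential sinusoid fit at the heart of this kind of scheme reduces to linear least squares: writing v(θ) ≈ a·cosθ + b·sinθ + c, the amplitude √(a² + b²) gives the current speed and atan2(b, a) the current direction. A minimal sketch for one circumference of the polar current shell (synthetic data, not the paper's robust fitting):

```python
import numpy as np

def fit_current(theta, values):
    """Fit v(θ) ≈ a·cosθ + b·sinθ + c by least squares.

    Returns (speed, direction): the amplitude and phase of the fitted
    sinusoid, interpreted as current speed and direction.
    """
    A = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
    (a, b, c), *_ = np.linalg.lstsq(A, values, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)
```

    The paper's robust variant additionally rejects outliers along each radial direction first, so isolated spectral artifacts do not bias the fitted amplitude.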

  9. Multimedia human brain database system for surgical candidacy determination in temporal lobe epilepsy with content-based image retrieval

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-01-01

    This paper presents the development of a human brain multimedia database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data includes T1- and T2-weighted MRI, FLAIR MRI, and ictal and interictal SPECT modalities with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between the attribute X of the entity Y and the outcome of a temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y such as volume or average curvature. The outcome of the surgery can be any surgery assessment such as memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for the cases with a relatively small hippocampus and a high signal intensity average on FLAIR images within the hippocampus. This indication is largely consistent with the surgeons' expectations and observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may reveal partially hidden correlations between the contents of different data modalities and the outcome of the surgery.

  10. Comparing image quality of print-on-demand books and photobooks from web-based vendors

    Science.gov (United States)

    Phillips, Jonathan; Bajorski, Peter; Burns, Peter; Fredericks, Erin; Rosen, Mitchell

    2010-01-01

    Because of the emergence of e-commerce and developments in print engines designed for economical output of very short runs, there are increased business opportunities and consumer options for print-on-demand books and photobooks. The current state of these printing modes allows for direct uploading of book files via the web, printing on nonoffset printers, and distributing by standard parcel or mail delivery services. The goal of this research is to assess the image quality of print-on-demand books and photobooks produced by various Web-based vendors and to identify correlations between psychophysical results and objective metrics. Six vendors were identified for one-off (single-copy) print-on-demand books, and seven vendors were identified for photobooks. Participants rank ordered overall quality of a subset of individual pages from each book, where the pages included text, photographs, or a combination of the two. Observers also reported overall quality ratings and price estimates for the bound books. Objective metrics of color gamut, color accuracy, accuracy of International Color Consortium profile usage, eye-weighted root mean square L*, and cascaded modulation transfer acutance were obtained and compared to the observer responses. We introduce some new methods for normalizing data as well as for strengthening the statistical significance of the results. Our approach includes the use of latent mixed-effect models. We found statistically significant correlation with overall image quality and some of the spatial metrics, but correlations between psychophysical results and other objective metrics were weak or nonexistent. Strong correlation was found between psychophysical results of overall quality assessment and estimated price associated with quality. The photobook set of vendors reached higher image-quality ratings than the set of print-on-demand vendors. However, the photobook set had higher image-quality variability.

  11. The Keck Cosmic Web Imager (KCWI): A Powerful New Integral Field Spectrograph for the Keck Observatory

    Science.gov (United States)

    Morrissey, Patrick; KCWI Team

    2013-01-01

    The Keck Cosmic Web Imager (KCWI) is a new facility instrument being developed for the W. M. Keck Observatory and funded for construction by the Telescope System Instrumentation Program (TSIP) of the National Science Foundation (NSF). KCWI is a bench-mounted spectrograph for the Keck II right Nasmyth focal station, providing integral field spectroscopy over a seeing-limited field up to 20"x33" in extent. Selectable Volume Phase Holographic (VPH) gratings provide high efficiency and spectral resolution in the range of 1000 to 20000. The dual-beam design of KCWI passed a Preliminary Design Review in summer 2011. The detailed design of the KCWI blue channel (350 to 700 nm) is now nearly complete, with the red channel (530 to 1050 nm) planned for a phased implementation contingent upon additional funding. KCWI builds on the experience of the Caltech team in implementing the Cosmic Web Imager (CWI), in operation since 2009 at Palomar Observatory. KCWI adds considerable flexibility to the CWI design, and will take full advantage of the excellent seeing and dark sky above Mauna Kea with a selectable nod-and-shuffle observing mode. The KCWI team is led by Caltech (project management, design and implementation) in partnership with the University of California at Santa Cruz (camera optical and mechanical design) and the W. M. Keck Observatory (program oversight and observatory interfaces).

  12. Workflow management of content-based image retrieval for CAD support in PACS environments based on IHE.

    Science.gov (United States)

    Welter, Petra; Hocken, Christian; Deserno, Thomas M; Grouls, Christoph; Günther, Rolf W

    2010-07-01

    Content-based image retrieval (CBIR) bears great potential for computer-aided diagnosis (CAD). However, current CBIR systems generally are not able to integrate with clinical workflow and PACS. One essential factor in this setting is scheduling. Long applied and proven for modalities and image acquisition, scheduling is now established for CBIR. Our workflow is based on the IHE integration profile 'Post-Processing Workflow' (PPW) and the use of a DICOM worklist. We configured the dcm4chee PACS and its included IHE actors for the application of CBIR. In order to achieve a convenient interface for integrating arbitrary CBIR systems, we realized an adapter between the CBIR system and the PACS. Our system architecture constitutes modular components communicating over standard protocols. The proposed workflow management system offers the possibility to embed CBIR conveniently into PACS environments. We achieve a chain of references that fills the information gap between acquisition and post-processing. Our approach takes into account the tight and solid organization of scheduled and performed tasks in clinical settings.

  13. Satellite image simulations for model-supervised, dynamic retrieval of crop type and land use intensity

    Science.gov (United States)

    Bach, H.; Klug, P.; Ruf, T.; Migdall, S.; Schlenz, F.; Hank, T.; Mauser, W.

    2015-04-01

    To support food security, information products about the actual cropping area per crop type, the current status of agricultural production and estimated yields, as well as the sustainability of the agricultural management are necessary. Based on this information, well-targeted land management decisions can be made. Remote sensing is in a unique position to contribute to this task, as it is globally available and provides a plethora of information about current crop status. M4Land is a comprehensive system in which a crop growth model (PROMET) and a reflectance model (SLC) are coupled in order to provide these information products by analyzing multi-temporal satellite images. SLC uses modelled surface state parameters from PROMET, such as leaf area index or phenology of different crops, to simulate spatially distributed surface reflectance spectra. This is the basis for generating artificial satellite images considering sensor-specific configurations (spectral bands, solar and observation geometries). Ensembles of model runs are used to represent different crop types, fertilization status, soil colour and soil moisture. By multi-temporal comparison of simulated and real satellite images, the land cover/crop type can be classified in a dynamic, model-supervised way without in-situ training data. The method is demonstrated at an agricultural test site in Bavaria. Its transferability is studied by analysing PROMET model results for the rest of Germany. Especially the simulated phenological development can be verified on this scale in order to understand whether PROMET is able to adequately simulate spatial as well as temporal (intra- and inter-season) crop growth conditions, a prerequisite for the model-supervised approach. This sophisticated new technology allows monitoring of management decisions on the field level using high-resolution optical data (presently RapidEye and Landsat).
The M4Land analysis system is designed to integrate multi-mission data and is

  14. Large Scale Hierarchical K-Means Based Image Retrieval With MapReduce

    Science.gov (United States)

    2014-03-27

    Hadoop is the open-source implementation of MapReduce and HDFS; the version of Hadoop current at the time of this research was used. OpenCV [10] is a C++ library for computer vision. Boost.Python [1] allows wrapping C++ code to create Python modules, exposing the full C++ OpenCV libraries as well as native speeds for computation-intensive tasks. Figure 3.3 (MapReduce feature extraction algorithm) sketches a map function that yields ('0', (ImageName, feature)) pairs, implemented in C++ using the OpenCV library.

  15. CometCIEF: A Web-based Image Enhancement Facility to digitally enhance images of cometary comae

    Science.gov (United States)

    Samarasinha, N.; Martin, P.; Larson, S.

    2014-07-01

    The detailed analysis of cometary comae provides an observational basis to investigate both the nucleus as well as the coma of comets. The structures in the coma are indicative of the anisotropic emission of gas and dust from the nucleus. Therefore, accurate identification and measurement of spatial information related to coma structures are needed for realistic quantitative interpretation of coma observations. In many instances, the coma features are only a few percent above the ambient background coma, and enhancement of such features is required to unambiguously identify them, to make measurements on them, and to carry out subsequent detailed analyses. A number of image enhancement techniques are used by cometary scientists. Despite this, the wider applicability of many advanced enhancement techniques is limited because the relevant software is not available as open source. To alleviate this, we are making available a number of such techniques through a user-friendly web interface. In this image enhancement facility, available at http://www.psi.edu/research/cometimen one can upload a FITS-format image of a cometary coma and digitally enhance it using an image enhancement technique of the user's choice. The user can then easily download the enhanced image, as well as any associated images generated during the enhancement, as FITS files for detailed analysis later at the user's institution. The available image enhancement techniques at the facility are: (a) division by azimuthal average; (b) division by azimuthal median; (c) azimuthal renormalization; (d) division by a 1/ρ profile, where ρ is the sky-plane projected distance from the nucleus; and (e) radially variable spatial filtering. The site provides documentation describing the above enhancement techniques as well as a tutorial showing the application of the enhancement techniques to actual cometary images and how the results may vary with different input parameters. In addition, the source codes as well as
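
    Division by the azimuthal median, technique (b) above, can be sketched in a few lines: bin pixels by projected distance ρ from the nucleus, take the median brightness per ρ bin, and divide the image by that radial profile so features a few percent above the ambient coma stand out. The bin count and image are illustrative only.

```python
import numpy as np

def divide_by_azimuthal_median(img, xc, yc, nbins=100):
    """Enhance coma structure: divide each pixel by the median brightness
    of all pixels at (approximately) the same nucleus distance rho."""
    y, x = np.indices(img.shape)
    rho = np.hypot(x - xc, y - yc)
    edges = np.linspace(0, rho.max() + 1e-9, nbins + 1)
    idx = np.digitize(rho, edges) - 1
    profile = np.array([np.median(img[idx == i]) if np.any(idx == i) else 1.0
                        for i in range(nbins)])
    return img / np.maximum(profile[idx], 1e-12)
```

    On a perfectly symmetric coma the result is flat (≈1 everywhere), while localized jets and fans survive the division; the median, unlike the azimuthal average, is robust to a few bright pixels such as background stars.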

  16. HIDDEN WEB EXTRACTOR DYNAMIC WAY TO UNCOVER THE DEEP WEB

    OpenAIRE

    DR. ANURADHA; BABITA AHUJA

    2012-01-01

    In this era of digital tsunami of information on the web, everyone is completely dependent on the WWW for information retrieval. This has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. The web databases are hidden behind the query interfaces. In this paper, we propose a Hidden Web Extractor (HWE) that can automatically discover and download data from the Hidden Web databases. ...

  17. An On-Demand Retrieval Method Based on Hybrid NoSQL for Multi-Layer Image Tiles in Disaster Reduction Visualization

    Directory of Open Access Journals (Sweden)

    Linyao Qiu

    2017-01-01

    Monitoring, response, mitigation and damage assessment of disasters place a wide variety of demands on the spatial and temporal resolutions of remote sensing images. Images are divided into tile pyramids by data source or resolution and published as independent image services for visualization. A disaster-affected area is commonly covered by multiple image layers expressing hierarchical surface information, which generates a large number of same-named tiles from different layers that overlay the same location. The traditional tile retrieval method for visualization cannot distinguish between distinct layers and traverses all image datasets for each tile query. This process produces redundant queries and invalid accesses that can seriously affect the visualization performance of clients, servers and network transmission. This paper proposes an on-demand retrieval method for multi-layer images and defines semantic annotations to enrich the description of each dataset. By matching visualization demands with the semantic information of the datasets, this method automatically filters inappropriate layers and finds the most suitable layer for the final tile query. The design and implementation are based on a two-layer NoSQL database architecture that provides scheduling optimization and concurrent processing capability. The experimental results reflect the effectiveness and stability of the approach for multi-layer retrieval in disaster reduction visualization.
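
    The idea of matching a visualization demand against per-layer semantic annotations, instead of traversing every image dataset for each tile query, can be sketched with a toy annotation set. All layer names, fields and values below are hypothetical, not the paper's annotation schema.

```python
# Hypothetical semantic annotations for three image layers covering one area.
layers = [
    {"name": "pre_event_mosaic",   "resolution_m": 30, "acquired": "2016-07-01"},
    {"name": "post_event_optical", "resolution_m": 10, "acquired": "2016-08-12"},
    {"name": "post_event_sar",     "resolution_m": 3,  "acquired": "2016-08-13"},
]

def select_layer(layers, max_resolution_m, after):
    """Pick the finest layer meeting the demand, instead of querying every layer."""
    ok = [l for l in layers
          if l["resolution_m"] <= max_resolution_m and l["acquired"] >= after]
    return min(ok, key=lambda l: l["resolution_m"]) if ok else None
```

    Only the selected layer is then consulted for the actual tile query, which is what removes the redundant queries and invalid accesses described above.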

  18. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol

    Directory of Open Access Journals (Sweden)

    Hui-Qun Wu

    2013-12-01

    AIM: To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication over the internet. METHODS: Firstly, a telemedicine-based eye care workflow was established based on the integrating the healthcare enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent object (WADO) protocol, which contains three tiers. RESULTS: From any client system with a web browser installed, clinicians can log in to the eye-PACS to observe fundus images and reports. A multipurpose internet mail extensions (MIME) type structured report, saved as pdf/html with a reference link to the relevant fundus image using the WADO syntax, can provide enough information for clinicians. Functions provided by the open-source Oviyam can be used to query, zoom, move, measure and view DICOM fundus images. CONCLUSION: Such a web eye-PACS in compliance with the WADO protocol can be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.

  19. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol.

    Science.gov (United States)

    Wu, Hui-Qun; Lv, Zheng-Min; Geng, Xing-Yun; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng

    2013-01-01

    To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication over the internet. Firstly, a telemedicine-based eye care workflow was established based on the integrating the healthcare enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent object (WADO) protocol, which contains three tiers. From any client system with a web browser installed, clinicians can log in to the eye-PACS to observe fundus images and reports. A multipurpose internet mail extensions (MIME) type structured report, saved as pdf/html with a reference link to the relevant fundus image using the WADO syntax, can provide enough information for clinicians. Functions provided by the open-source Oviyam can be used to query, zoom, move, measure and view DICOM fundus images. Such a web eye-PACS in compliance with the WADO protocol can be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
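
    A WADO-URI retrieval of the kind such a web eye-PACS serves is just an HTTP GET whose query string names the study, series and object UIDs. A minimal sketch of building such a request; the base URL and UIDs are placeholders, not values from the paper.

```python
from urllib.parse import urlencode

def wado_uri(base_url, study_uid, series_uid, object_uid,
             content_type="application/dicom"):
    """Build a WADO-URI GET request for one DICOM object (e.g. a fundus image)."""
    params = {
        "requestType": "WADO",          # fixed value required by WADO-URI
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type,    # application/dicom, or e.g. image/jpeg
    }
    return base_url + "?" + urlencode(params)
```

    A browser-based client can request a rendered `image/jpeg` for display while the structured report's reference link uses the same syntax to point at the original DICOM object.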

  20. Stress distribution retrieval in granular materials: A multi-scale model and digital image correlation measurements

    Science.gov (United States)

    Bruno, Luigi; Decuzzi, Paolo; Gentile, Francesco

    2016-01-01

    The promise of nanotechnology lies in the possibility of engineering matter on the nanoscale and creating technological interfaces that, because of their small scales, may directly interact with biological objects, creating new strategies for the treatment of pathologies that are otherwise beyond the reach of conventional medicine. Nanotechnology is inherently a multiscale, multiphenomena challenge. Fundamental understanding and highly accurate predictive methods are critical to successful manufacturing of nanostructured materials, bio/mechanical devices and systems. In biomedical engineering, and in the mechanical analysis of biological tissues, classical continuum approaches are routinely utilized, even though they disregard the discrete nature of tissues, which are an interpenetrating network of a matrix (the extracellular matrix, ECM) and a generally large but finite number of cells with sizes falling in the micrometer range. Here, we introduce a nano-mechanical theory that accounts for the non-continuum nature of biological systems and other discrete systems. This discrete field theory, doublet mechanics (DM), is a technique to model the mechanical behavior of materials over multiple scales, ranging from some millimeters down to a few nanometers. In the paper, we use this theory to predict the response of a granular material to an externally applied load. Such a representation is extremely attractive in modeling biological tissues, which may be considered as a spatial set of a large number of particulates (cells) dispersed in an extracellular matrix. Perhaps more importantly, using digital image correlation (DIC) optical methods, we provide an experimental verification of the model.
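
    On the measurement side, digital image correlation locates each subset of the deformed image relative to the reference image by cross-correlation; the correlation peak gives the integer-pixel displacement. This sketch shows only that core step (real DIC adds subpixel refinement and subset shape functions, which are omitted here):

```python
import numpy as np

def dic_shift(ref, deformed):
    """Integer-pixel displacement of `deformed` relative to `ref` via
    FFT-based cross-correlation (the core step of digital image correlation)."""
    R = np.fft.fft2(ref)
    D = np.fft.fft2(deformed)
    corr = np.fft.ifft2(R.conj() * D).real      # cross-correlation surface
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array([iy, ix])
    return np.where(shift > shape // 2, shift - shape, shift)  # wrap to signed
```

    Applied subset by subset over a speckle pattern, the resulting displacement field is what the stress-retrieval model is compared against.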