WorldWideScience

Sample records for web image retrieval

  1. An Effective Combined Feature For Web Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    H.M.R.B Herath

    2015-08-01

    Full Text Available Technology advances, the emergence of large-scale multimedia applications, and the revolution of the World Wide Web have changed the world into a digital age. Anybody can use a mobile phone to take a photo at any time, anywhere, and upload that image to ever-growing image databases. Developing effective techniques for visual and multimedia retrieval systems is one of the most challenging and important directions of future research. This paper proposes an effective combined feature for web-based image retrieval. Frequently used colour and texture features are explored in order to develop a combined feature for this purpose. Three widely used colour features (colour moments, colour coherence vector, and colour correlogram) and three texture features (grey-level co-occurrence matrix, Tamura features, and Gabor filter) were analyzed for their performance. Precision and recall were used to evaluate the performance of each of these techniques. By comparing precision and recall values, the best-performing methods were combined to form a hybrid feature. The combined feature was evaluated by developing a web-based CBIR system. A web crawler was first used to crawl web sites; images found on those sites were downloaded, and the combined feature representation technique was used to extract image features. The test results indicated that this web system can be used to index web images with the combined feature representation schema and to find similar images. Random image retrievals using the web system show that the combined feature can be used to retrieve images belonging to the general image domain. Retrieval accuracy was notably high for natural images such as outdoor scenes, images of flowers, etc. Images with a similar colour and texture distribution were also retrieved as similar even though they belonged to different semantic categories.
This can be ideal for an artist who wants
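Among the colour features compared above are colour moments. As an illustrative sketch (not the authors' code), the first three moments of each RGB channel — mean, standard deviation, and skewness — can be computed from raw pixels as follows:

```python
import math

def colour_moments(pixels):
    """Compute mean, standard deviation and skewness for each RGB channel.

    pixels: list of (r, g, b) tuples with values in 0..255.
    Returns a 9-element feature vector (3 moments x 3 channels).
    """
    n = len(pixels)
    features = []
    for c in range(3):
        values = [p[c] for p in pixels]
        mean = sum(values) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
        # signed cube root of the third central moment
        m3 = sum((v - mean) ** 3 for v in values) / n
        skew = math.copysign(abs(m3) ** (1 / 3), m3)
        features.extend([mean, std, skew])
    return features

# Tiny 2x2 "image" for demonstration
img = [(255, 0, 0), (255, 0, 0), (0, 0, 255), (0, 0, 0)]
vec = colour_moments(img)
print(len(vec))  # 9: one (mean, std, skew) triple per channel
```

In a CBIR system such a vector would be concatenated with texture descriptors and compared between images with a distance measure.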

  2. Web tools for effective retrieval, visualization, and evaluation of cardiology medical images and records

    Science.gov (United States)

    Masseroli, Marco; Pinciroli, Francesco

    2000-12-01

    To provide easy retrieval, integration and evaluation of multimodal cardiology images and data in a web browser environment, distributed application technologies and Java programming were used to implement a client-server architecture based on software agents. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. The client side is a Java applet running in a web browser that provides a friendly medical user interface to perform queries on patient and medical test data and to properly integrate and visualize the various query results. A set of tools based on the Java Advanced Imaging API makes it possible to process and analyze the retrieved cardiology images and quantify their features in different regions of interest. The platform-independent Java technology makes the developed prototype easy to manage centrally and to provide at any site with an intranet or Internet connection. By giving healthcare providers effective tools for comprehensively querying, visualizing and evaluating cardiology medical images and records in all locations where they may need them (i.e., emergency rooms, operating theaters, wards, or even outpatient clinics), the developed prototype represents an important aid in providing more efficient diagnoses and medical treatments.

  3. Image retrieval

    DEFF Research Database (Denmark)

    Ørnager, Susanne

    1997-01-01

    The paper touches upon indexing and retrieval for effective searches of digitized images. Different conceptions of what subject indexing means are described as a basis for defining an operational subject indexing strategy for images. The methodology is based on the art historian Erwin Panofsky......), special knowledge about image codes, and special knowledge about history of ideas. The semiologist Roland Barthes has established a semiology for pictorial expressions based on advertising photos. Barthes uses the concepts denotation/connotation where denotations can be explained as the sober expression...

  4. Design of a web portal for interdisciplinary image retrieval from multiple online image resources.

    Science.gov (United States)

    Kammerer, F J; Frankewitsch, T; Prokosch, H-U

    2009-01-01

    Images play an important role in medicine. Finding the desired images within the multitude of online image databases is a time-consuming and frustrating process. Existing websites do not meet all the requirements for an ideal learning environment for medical students. This work intends to establish a new web portal providing a centralized access point to a selected number of online image databases. A back-end system locates images on given websites and extracts relevant metadata. The images are indexed using UMLS and the MetaMap system provided by the US National Library of Medicine. Specially developed functions allow the creation of individual navigation structures. The front-end system suits the specific needs of medical students. A navigation structure consisting of several medical fields, university curricula and the ICD-10 was created. The images may be accessed via the given navigation structure or using different search functions. Cross-references are provided by the semantic relations of the UMLS. Over 25,000 images were identified and indexed. A pilot evaluation among medical students showed good first results concerning the acceptance of the developed navigation structures and search features. The integration of images from different sources into the UMLS semantic network offers a quick and easy-to-use learning environment.

  5. Mobile medical image retrieval

    Science.gov (United States)

    Duc, Samuel; Depeursinge, Adrien; Eggel, Ivan; Müller, Henning

    2011-03-01

    Images are an integral part of medical practice for diagnosis, treatment planning and teaching. Image retrieval has gained in importance, mainly as a research domain, over the past 20 years. Both textual and visual retrieval of images are essential. As mobile devices have become reliable and their functionality has come to equal that of former desktop clients, mobile computing has gained ground and many applications have been explored. This creates a new field of mobile information search and access, and in this context images can play an important role as they often allow complex scenarios to be understood much more quickly and easily than free text. Mobile information retrieval in general has skyrocketed over the past year, with many new applications and tools being developed and all sorts of interfaces being adapted to mobile clients. This article describes the constraints of an information retrieval system that includes visual and textual retrieval from the medical literature of BioMedCentral and of the RSNA journals Radiology and Radiographics. Solutions for mobile data access, with an example on an iPhone in a web-based environment, are presented, as iPhones are frequently used and the operating system is bound to become the most frequent smartphone operating system in 2011. A web-based scenario was chosen to allow for use by other smartphone platforms such as Android as well. Constraints of small screens and navigation with touch screens are taken into account in the development of the application. A hybrid approach had to be taken to allow pictures to be taken with the phone camera and uploaded for visual similarity search, as most smartphone producers block this functionality for web applications. Mobile information access, and in particular access to images, can be surprisingly efficient and effective on smaller screens.
Images can be read on screen much faster and relevance of documents can be identified quickly through the use of images contained in

  6. An Image Retrieval and Processing Expert System for the World Wide Web

    Science.gov (United States)

    Rodriguez, Ricardo; Rondon, Angelica; Bruno, Maria I.; Vasquez, Ramon

    1998-01-01

    This paper presents a system that is being developed in the Laboratory of Applied Remote Sensing and Image Processing at the University of P.R. at Mayaguez. It describes the components that constitute its architecture. The main elements are: a Data Warehouse, an Image Processing Engine, and an Expert System. Together, they provide a complete solution to researchers from different fields that make use of images in their investigations. Also, since it is available to the World Wide Web, it provides remote access and processing of images.

  7. Retrieve An Image

    Indian Academy of Sciences (India)

    Retrieve An Image. “A building”. “Box-shaped”. “Brown Color”. “Foreshortened view”. OR. Why not specify a similar-looking picture? -- Main Motivation!

  8. Emergent web intelligence advanced information retrieval

    CERN Document Server

    Badr, Youakim; Abraham, Ajith; Hassanien, Aboul-Ella

    2010-01-01

    Web Intelligence explores the impact of artificial intelligence and advanced information technologies representing the next generation of Web-based systems, services, and environments, and designing hybrid web systems that serve wired and wireless users more efficiently. Multimedia and XML-based data are produced regularly and increasingly in our daily digital activities, and their retrieval must be explored and studied in this emergent web-based era. 'Emergent Web Intelligence: Advanced Information Retrieval' provides reviews of the related cutting-edge technologies and insights. It is v

  9. The Wikipedia Image Retrieval Task

    NARCIS (Netherlands)

    T. Tsikrika (Theodora); J. Kludas

    2010-01-01

    The Wikipedia image retrieval task at ImageCLEF provides a testbed for the system-oriented evaluation of visual information retrieval from a collection of Wikipedia images. The aim is to investigate the effectiveness of retrieval approaches that exploit textual and visual evidence in the

  10. Web information retrieval for health professionals.

    Science.gov (United States)

    Ting, S L; See-To, Eric W K; Tse, Y K

    2013-06-01

    This paper presents a Web Information Retrieval System (WebIRS), which is designed to assist healthcare professionals in obtaining up-to-date medical knowledge and information via the World Wide Web (WWW). The system leverages document classification and text summarization techniques to deliver highly correlated medical information to physicians. The system architecture of the proposed WebIRS is first discussed, and then a case study on an application of the proposed system in a Hong Kong medical organization is presented to illustrate the adoption process; a questionnaire was administered to collect feedback on the operation and performance of WebIRS in comparison with conventional information retrieval on the WWW. A prototype system has been constructed and implemented on a trial basis in a medical organization. It has proven beneficial to healthcare professionals through its automatic classification and summarization of the medical information that physicians need and are interested in. The results of the case study show that with the proposed WebIRS, a significant reduction in search time and effort can be attained, together with retrieval of highly relevant materials.
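The paper does not disclose WebIRS's exact summarization algorithm; the simplest form of the extractive technique it refers to scores sentences by the frequency of the terms they contain. A minimal, hypothetical sketch:

```python
from collections import Counter

def summarize(text, n_sentences=1):
    """Naive extractive summarizer: score each sentence by the corpus
    frequency of its words and return the top scorers in original order."""
    sentences = [s.strip() for s in text.split('.') if s.strip()]
    freq = Counter(w.strip('.,').lower() for w in text.split())
    scored = [(sum(freq[w.strip('.,')] for w in s.lower().split()), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:n_sentences]
    # restore document order before joining
    return '. '.join(s for _, _, s in sorted(top, key=lambda t: t[1]))

doc = "The aspirin study measured aspirin dosage. Weather was sunny."
print(summarize(doc))  # The aspirin study measured aspirin dosage
```

Real systems use far stronger signals (position, TF-IDF, semantic similarity), but the principle of ranking and extracting sentences is the same.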

  11. Geospatial metadata retrieval from web services

    Directory of Open Access Journals (Sweden)

    Ivanildo Barbosa

    Full Text Available Nowadays, producers of geospatial data in either raster or vector formats are able to make them available on the World Wide Web by deploying web services that enable users to access and query those contents even without specific software for geoprocessing. Several providers around the world have deployed instances of WMS (Web Map Service), WFS (Web Feature Service) and WCS (Web Coverage Service), all of them specified by the Open Geospatial Consortium (OGC). In consequence, metadata about the available contents can be retrieved and compared with similar offline datasets from other sources. This paper presents a brief summary and describes the matching process between the specifications for OGC web services (WMS, WFS and WCS) and the metadata specifications required by ISO 19115, adopted as the reference for several national metadata profiles, including the Brazilian one. This process focuses on retrieving metadata from the identification and data quality packages, and also indicates directions for retrieving metadata related to other packages. Therefore, users are able to assess whether the provided contents fit their purposes.
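The metadata the paper matches against ISO 19115 comes from capabilities documents. As a hedged sketch, identification metadata (service title, abstract, layer titles) can be pulled out of a WMS GetCapabilities response with stdlib XML parsing; the fragment below is a simplified, hypothetical response (real WMS 1.3.0 documents are namespaced and fetched from a live endpoint):

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical GetCapabilities fragment; a real one is
# fetched via <server>?SERVICE=WMS&REQUEST=GetCapabilities
capabilities = """<WMS_Capabilities version="1.3.0">
  <Service>
    <Name>WMS</Name>
    <Title>Sample Map Server</Title>
    <Abstract>Demonstration WMS instance.</Abstract>
  </Service>
  <Capability>
    <Layer><Title>Roads</Title></Layer>
    <Layer><Title>Rivers</Title></Layer>
  </Capability>
</WMS_Capabilities>"""

root = ET.fromstring(capabilities)
# These fields map onto the ISO 19115 identification package
service_title = root.findtext('Service/Title')
service_abstract = root.findtext('Service/Abstract')
layer_titles = [t.text for t in root.findall('Capability/Layer/Title')]

print(service_title)  # Sample Map Server
print(layer_titles)   # ['Roads', 'Rivers']
```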

  12. Web information retrieval based on ontology

    Science.gov (United States)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant for a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest to the users, so a lot of irrelevant information is returned, burdening users with picking useful answers out of these irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms through the use of ontologies. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed in our paper. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
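One standard way an ontology enhances keyword search is query expansion over an is-a hierarchy. The following toy example (a hypothetical dictionary ontology, not a real Semantic Web format such as OWL/RDF) shows how a query for a broad term also retrieves documents mentioning only its narrower terms:

```python
# Toy is-a hierarchy: term -> narrower terms (illustrative only)
ontology = {
    "vehicle": ["car", "bicycle"],
    "car": ["sedan", "suv"],
}

def expand(query_terms, ontology):
    """Expand each query term with its narrower terms, transitively."""
    expanded = set()
    stack = list(query_terms)
    while stack:
        term = stack.pop()
        if term not in expanded:
            expanded.add(term)
            stack.extend(ontology.get(term, []))
    return expanded

docs = {
    1: "a red sedan parked outside",
    2: "mountain bicycle trail",
    3: "stock market report",
}

query = expand(["vehicle"], ontology)
hits = [d for d, text in docs.items() if query & set(text.split())]
print(sorted(hits))  # [1, 2]: matched via narrower terms; doc 3 excluded
```

A plain keyword match on "vehicle" would have returned nothing, which is exactly the irrelevance/recall problem the abstract describes.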

  13. IMAGE DESCRIPTIONS FOR SKETCH BASED IMAGE RETRIEVAL

    OpenAIRE

    SAAVEDRA RONDO, JOSE MANUEL; SAAVEDRA RONDO, JOSE MANUEL

    2008-01-01

    Due to the massive use of the Internet together with the proliferation of media devices, content-based image retrieval has become an active discipline in computer science. A common content-based image retrieval approach requires that the user give a regular image (e.g., a photo) as a query. However, having a regular image as the query may be a serious problem. Indeed, people commonly use an image retrieval system precisely because they do not have the desired image at hand. An easy alternative way t...

  14. Interactive Exploration for Image Retrieval

    Directory of Open Access Journals (Sweden)

    Jérôme Fournier

    2005-08-01

    Full Text Available We present a new version of our content-based image retrieval system RETIN. It is based on adaptive quantization of the color space, together with new features aiming at representing the spatial relationship between colors. Color analysis is also extended to texture. Using these powerful indexes, an original interactive retrieval strategy is introduced. The process is based on two steps for handling the retrieval of very large image categories. First, a controlled exploration method of the database is presented. Second, a relevance feedback method based on statistical learning is proposed. All the steps are evaluated by experiments on a generalist database.
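RETIN's adaptive colour-space quantization is specific to the paper; a simplified fixed-grid version conveys the indexing idea: quantize each RGB channel into a few bins and build a normalized joint histogram as the image signature. A minimal sketch (hypothetical, uniform bins rather than the paper's adaptive ones):

```python
def colour_histogram(pixels, bins_per_channel=4):
    """Quantize each RGB channel into equal-width bins and build a
    normalized joint histogram with bins_per_channel**3 entries."""
    step = 256 // bins_per_channel
    hist = [0.0] * bins_per_channel ** 3
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel + (g // step)) \
              * bins_per_channel + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

# Tiny 4-pixel "image": two bluish pixels, one white, one black
img = [(10, 10, 200), (12, 8, 205), (250, 250, 250), (0, 0, 0)]
hist = colour_histogram(img)
print(sum(hist))                       # 1.0 (normalized)
print(sum(1 for h in hist if h > 0))   # 3 occupied bins
```

Two such histograms can then be compared (e.g. by L1 distance) to rank database images against a query, which is the retrieval loop the relevance-feedback step refines.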

  15. Blueprint of a Cross-Lingual Web Retrieval Collection

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.; van Zwol, R.

    2005-01-01

    The world wide web is a natural setting for cross-lingual information retrieval; web content is essentially multilingual, and web searchers are often polyglots. Even though English has emerged as the lingua franca of the web, planning for a business trip or holiday usually involves digesting pages

  16. Intelligent image retrieval based on radiology reports

    Energy Technology Data Exchange (ETDEWEB)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar [University Medical Center Freiburg, Department of Diagnostic Radiology, Freiburg (Germany); Daumke, Philipp; Simon, Kai [Averbis GmbH, Freiburg (Germany)

    2012-12-15

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)

  17. Intelligent image retrieval based on radiology reports

    International Nuclear Information System (INIS)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar; Daumke, Philipp; Simon, Kai

    2012-01-01

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)

  18. The Use of QBIC Content-Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Ching-Yi Wu

    2004-03-01

    Full Text Available The fast increase in digital images has drawn increasing attention to the development of image retrieval technologies. Content-based image retrieval (CBIR) has become an important approach to retrieving image data from a large collection. This article reports our results on the use and user study of a CBIR system. Thirty-eight students majoring in art and design were invited to use IBM’s QBIC (Query by Image Content) system through the Internet. Data on their information needs, behaviors, and retrieval strategies were collected through in-depth interviews, observation, and a self-described think-aloud process. Important conclusions are: (1) There are four types of information needs for image data: implicit, inspirational, ever-changing, and purposive. The types of needs may change during the retrieval process. (2) CBIR is suitable for example-type queries, text retrieval is suitable for scenario-type queries, and image browsing is suitable for symbolic queries. (3) Unlike text retrieval, a detailed description of the query condition may more easily lead to retrieval failure. (4) CBIR is suitable for domain-specific image collections, not for images on the World Wide Web. [Article content in Chinese]

  19. Quantifying retrieval bias in Web archive search

    NARCIS (Netherlands)

    Samar, Thaer; Traub, Myriam C.; van Ossenbruggen, Jacco; Hardman, Lynda; de Vries, Arjen P.

    2018-01-01

    A Web archive usually contains multiple versions of documents crawled from the Web at different points in time. One possible way for users to access a Web archive is through full-text search systems. However, previous studies have shown that these systems can induce a bias, known as the

  20. Document image retrieval through word shape coding.

    Science.gov (United States)

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
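The word-shape-coding idea can be illustrated by annotating each letter with a coarse vertical-extent class. This is a deliberate simplification of the paper's scheme (its full code also uses character holes and water reservoirs, omitted here, and operates on word images rather than strings):

```python
ASCENDERS = set("bdfhklt")   # letters extending above x-height
DESCENDERS = set("gjpqy")    # letters extending below the baseline

def word_shape_code(word):
    """Map each letter to A (ascender), D (descender) or x (x-height).
    Words with the same silhouette share a code, so a query word can be
    matched against word images without OCR-level recognition."""
    code = []
    for ch in word.lower():
        if ch in ASCENDERS:
            code.append('A')
        elif ch in DESCENDERS:
            code.append('D')
        else:
            code.append('x')
    return ''.join(code)

print(word_shape_code("ligand"))     # AxDxxA
print(word_shape_code("retrieval"))
```

Retrieval then reduces to matching these short code strings instead of recognizing characters, which is why the technique is fast and tolerant of document degradation.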

  1. Towards an Intelligent Possibilistic Web Information Retrieval Using Multiagent System

    Science.gov (United States)

    Elayeb, Bilel; Evrard, Fabrice; Zaghdoud, Montaceur; Ahmed, Mohamed Ben

    2009-01-01

    Purpose: The purpose of this paper is to make a scientific contribution to web information retrieval (IR). Design/methodology/approach: A multiagent system for web IR is proposed based on new technologies: Hierarchical Small-Worlds (HSW) and Possibilistic Networks (PN). This system is based on a possibilistic qualitative approach which extends the…

  2. Transformation invariant image indexing and retrieval for image databases

    NARCIS (Netherlands)

    Gevers, Th.; Smeulders, A.W.M.

    1994-01-01

    This paper presents a novel design of an image database system which supports storage, indexing and retrieval of images by content. The image retrieval methodology is based on the observation that images can be discriminated by the presence of image objects and their spatial relations. Images in the

  3. Network and User-Perceived Performance of Web Page Retrievals

    Science.gov (United States)

    Kruse, Hans; Allman, Mark; Mallasch, Paul

    1998-01-01

    The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.
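The perceived-performance argument can be made concrete with a toy timing model (hypothetical units, not the paper's simulation): total transfer time over one connection is the same either way, but interleaving delivers the first chunk of every element early, so all elements can start rendering progressively.

```python
def first_chunk_times(sizes, chunk=1, interleave=False):
    """Return, per page element, the time its first chunk arrives over a
    single link that delivers one chunk per time unit."""
    times = {}
    t = 0
    if interleave:
        remaining = dict(enumerate(sizes))
        while remaining:
            # round-robin: one chunk per still-incomplete element
            for i in sorted(remaining):
                t += chunk
                times.setdefault(i, t)
                remaining[i] -= chunk
                if remaining[i] <= 0:
                    del remaining[i]
    else:
        # sequential: each element fully delivered before the next starts
        for i, size in enumerate(sizes):
            times[i] = t + chunk
            t += size
    return [times[i] for i in range(len(sizes))]

sizes = [8, 8, 8]  # three equally sized page elements (arbitrary units)
print(first_chunk_times(sizes))                   # [1, 9, 17]
print(first_chunk_times(sizes, interleave=True))  # [1, 2, 3]
```

Under sequential delivery the last element shows nothing until time 17; under interleaving every element is on screen by time 3, which is the perceived-speed gain the proposed HTTP extension targets.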

  4. MR imaging of carotid webs

    International Nuclear Information System (INIS)

    Boesen, Mari E.; Eswaradass, Prasanna Venkatesan; Singh, Dilip; Mitha, Alim P.; Menon, Bijoy K.; Goyal, Mayank; Frayne, Richard

    2017-01-01

    We propose a magnetic resonance (MR) imaging protocol for the characterization of carotid web morphology, composition, and vessel wall dynamics. The purpose of this case series was to determine the feasibility of imaging carotid webs with MR imaging. Five patients diagnosed with carotid web on CT angiography were recruited to undergo a 30-min MR imaging session. MR angiography (MRA) images of the carotid artery bifurcation were acquired. Multi-contrast fast spin echo (FSE) images were acquired axially about the level of the carotid web. Two types of cardiac phase resolved sequences (cineFSE and cine phase contrast) were acquired to visualize the elasticity of the vessel wall affected by the web. Carotid webs were identified on MRA in 5/5 (100%) patients. Multi-contrast FSE revealed vessel wall thickening and cineFSE demonstrated regional changes in distensibility surrounding the webs in these patients. Our MR imaging protocol enables an in-depth evaluation of patients with carotid webs: morphology (by MRA), composition (by multi-contrast FSE), and wall dynamics (by cineFSE). (orig.)

  5. MR imaging of carotid webs

    Energy Technology Data Exchange (ETDEWEB)

    Boesen, Mari E. [University of Calgary, Department of Biomedical Engineering, Calgary (Canada); Foothills Medical Centre, Seaman Family MR Research Centre, Calgary (Canada); Eswaradass, Prasanna Venkatesan; Singh, Dilip; Mitha, Alim P.; Menon, Bijoy K. [University of Calgary, Department of Clinical Neurosciences, Calgary (Canada); Foothills Medical Centre, Calgary Stroke Program, Calgary (Canada); Goyal, Mayank [Foothills Medical Centre, Calgary Stroke Program, Calgary (Canada); University of Calgary, Department of Radiology, Calgary (Canada); Frayne, Richard [Foothills Medical Centre, Seaman Family MR Research Centre, Calgary (Canada); University of Calgary, Hotchkiss Brain Institute, Calgary (Canada)

    2017-04-15

    We propose a magnetic resonance (MR) imaging protocol for the characterization of carotid web morphology, composition, and vessel wall dynamics. The purpose of this case series was to determine the feasibility of imaging carotid webs with MR imaging. Five patients diagnosed with carotid web on CT angiography were recruited to undergo a 30-min MR imaging session. MR angiography (MRA) images of the carotid artery bifurcation were acquired. Multi-contrast fast spin echo (FSE) images were acquired axially about the level of the carotid web. Two types of cardiac phase resolved sequences (cineFSE and cine phase contrast) were acquired to visualize the elasticity of the vessel wall affected by the web. Carotid webs were identified on MRA in 5/5 (100%) patients. Multi-contrast FSE revealed vessel wall thickening and cineFSE demonstrated regional changes in distensibility surrounding the webs in these patients. Our MR imaging protocol enables an in-depth evaluation of patients with carotid webs: morphology (by MRA), composition (by multi-contrast FSE), and wall dynamics (by cineFSE). (orig.)

  6. Feature hashing for fast image retrieval

    Science.gov (United States)

    Yan, Lingyu; Fu, Jiarun; Zhang, Hongxin; Yuan, Lu; Xu, Hui

    2018-03-01

    Currently, research on content-based image retrieval mainly focuses on robust feature extraction. However, due to the exponential growth of online images, it is necessary to consider searching among large-scale image collections, which is very time-consuming and unscalable. Hence, we need to pay much attention to the efficiency of image retrieval. In this paper, we propose a feature hashing method for image retrieval which not only generates a compact fingerprint for image representation, but also prevents huge semantic loss during the process of hashing. To generate the fingerprint, an objective function of semantic loss is constructed and minimized, which combines the influence of both the neighborhood structure of the feature data and the mapping error. Since machine-learning-based hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes make image representation low-complexity, efficient, and scalable to large-scale databases. Experimental results show good performance of our approach.
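The paper learns its hash function; a simpler, data-independent baseline for the same compact-fingerprint idea is random-projection locality-sensitive hashing, where each bit is the sign of a dot product with a random hyperplane. A hedged sketch (illustrative baseline, not the paper's learned method):

```python
import random

def make_hasher(dim, n_bits, seed=0):
    """Random-projection LSH: each output bit is the sign of the dot
    product between the feature vector and a random Gaussian hyperplane."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

    def hash_vec(v):
        bits = 0
        for plane in planes:
            dot = sum(p * x for p, x in zip(plane, v))
            bits = (bits << 1) | (1 if dot >= 0 else 0)
        return bits

    return hash_vec

hasher = make_hasher(dim=4, n_bits=16)
a = hasher([0.9, 0.1, 0.0, 0.2])
b = hasher([0.9, 0.1, 0.0, 0.2])  # identical vector -> identical code
print(a == b)  # True
```

Nearby feature vectors tend to fall on the same side of most hyperplanes, so their codes differ in few bits; learned hashes improve on this by adapting the planes to the data, which is the paper's contribution.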

  7. Dialog-based Interactive Image Retrieval

    OpenAIRE

    Guo, Xiaoxiao; Wu, Hui; Cheng, Yu; Rennie, Steven; Feris, Rogerio Schmidt

    2018-01-01

    Existing methods for interactive image retrieval have demonstrated the merit of integrating user feedback, improving retrieval results. However, most current systems rely on restricted forms of user feedback, such as binary relevance responses, or feedback based on a fixed set of relative attributes, which limits their impact. In this paper, we introduce a new approach to interactive image search that enables users to provide feedback via natural language, allowing for more natural and effect...

  8. Web User Profile Using XUL and Information Retrieval Techniques

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2008-12-01

    Full Text Available This paper presents the importance of user profile in information retrieval, information filtering and recommender systems using explicit and implicit feedback. A Firefox extension (based on XUL used for gathering data needed to infer a web user profile and an example file with collected data are presented. Also an algorithm for creating and updating the user profile and keeping track of a fixed number k of subjects of interest is presented.

  9. Improving life sciences information retrieval using semantic web technology.

    Science.gov (United States)

    Quan, Dennis

    2007-05-01

    The ability to retrieve relevant information is at the heart of every aspect of research and development in the life sciences industry. Information is often distributed across multiple systems and recorded in a way that makes it difficult to piece together the complete picture. Differences in data formats, naming schemes and network protocols amongst information sources, both public and private, must be overcome, and user interfaces not only need to be able to tap into these diverse information sources but must also assist users in filtering out extraneous information and highlighting the key relationships hidden within an aggregated set of information. The Semantic Web community has made great strides in proposing solutions to these problems, and many efforts are underway to apply Semantic Web techniques to the problem of information retrieval in the life sciences space. This article gives an overview of the principles underlying a Semantic Web-enabled information retrieval system: creating a unified abstraction for knowledge using the RDF semantic network model; designing semantic lenses that extract contextually relevant subsets of information; and assembling semantic lenses into powerful information displays. Furthermore, concrete examples of how these principles can be applied to life science problems including a scenario involving a drug discovery dashboard prototype called BioDash are provided.

  10. Simultaneous binary hash and features learning for image retrieval

    Science.gov (United States)

    Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.

    2016-05-01

    Content-based image retrieval systems have many applications in the modern world, the most important being image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task; the main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for the simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to a hash-value space while preserving as much of the semantic image content as possible. We use a deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework for data-dependent image hashing presented in the paper is based on two kinds of neural networks: convolutional neural networks for image description and an autoencoder for the feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared with other state-of-the-art methods.
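
    The paper's deep networks are not reproduced here, but the retrieval mechanics they feed can be sketched with a much simpler stand-in: random-hyperplane hashing, which also maps real-valued descriptors to binary codes that approximately preserve similarity, with ranking by Hamming distance. All names and vectors below are illustrative.

```python
# Sketch (not the authors' network): random-hyperplane hashing that maps
# real-valued image descriptors to compact binary codes, then ranks database
# images by Hamming distance to the query's code.
import random

def make_hasher(dim, n_bits, seed=0):
    rng = random.Random(seed)
    # One random hyperplane per output bit.
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def hash_code(vec):
        # Each bit records which side of a hyperplane the vector falls on.
        return tuple(1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
                     for plane in planes)
    return hash_code

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_vec, database, hash_code, k=3):
    q = hash_code(query_vec)
    ranked = sorted(database.items(), key=lambda kv: hamming(q, hash_code(kv[1])))
    return [name for name, _ in ranked[:k]]

hash_code = make_hasher(dim=4, n_bits=64)
db = {"cat1": [1.0, 0.9, 0.0, 0.1],
      "cat2": [0.9, 1.0, 0.1, 0.0],
      "car1": [0.0, 0.1, 1.0, 0.9]}
top = retrieve([1.0, 1.0, 0.0, 0.0], db, hash_code, k=2)
```

    The paper's contribution is, in effect, learning the projection (and the features underneath it) rather than drawing it at random.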

  11. Toward privacy-preserving JPEG image retrieval

    Science.gov (United States)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

    This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
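
    Leaving the encryption machinery aside, the feature side of such a scheme can be sketched as follows (an assumed simplification: plaintext blockwise variance with an L1-based similarity; the images and names are invented):

```python
# Sketch of the feature side only (no cipher): blockwise local variance as an
# image signature, compared between images with a negative-L1 similarity.
def block_variances(img, block=2):
    h, w = len(img), len(img[0])
    feats = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [img[y][x] for y in range(by, min(by + block, h))
                              for x in range(bx, min(bx + block, w))]
            m = sum(vals) / len(vals)
            feats.append(sum((v - m) ** 2 for v in vals) / len(vals))
    return feats

def similarity(f1, f2):
    # Higher is more similar; 0.0 means identical variance signatures.
    return -sum(abs(a - b) for a, b in zip(f1, f2))

a = [[10, 12, 200, 202], [11, 13, 201, 199]]   # smooth left, smooth right
b = [[10, 11, 198, 200], [12, 14, 203, 201]]   # similar structure to `a`
c = [[0, 255, 0, 255], [255, 0, 255, 0]]       # high-variance checkerboard
fa, fb, fc = (block_variances(x) for x in (a, b, c))
```

    The point of the actual scheme is that the server computes comparable variance-based features directly on the ciphertext, without ever seeing pixel values like these.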

  12. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available Consumer behavior has been observed to be largely influenced by image data as familiarity with smart phones and the World Wide Web increases. The traditional technique of browsing through product varieties on the Internet with text keywords has gradually been replaced by easily accessible image data. The importance of image data has shown steady growth in business applications with the advent of different image-capturing devices and social media. This paper describes a methodology of feature extraction by an image binarization technique for enhancing the identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in total. It outclassed state-of-the-art techniques in performance measures and showed statistical significance.

  13. Region-Based Color Image Indexing and Retrieval

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper a region-based color image indexing and retrieval algorithm is presented. As a basis for the indexing, a novel K-Means segmentation algorithm is used, modified so as to take into account the coherence of the regions. A new color distance is also defined for this algorithm. Based on ....... Experimental results demonstrate the performance of the algorithm. The development of an intelligent image content-based search engine for the World Wide Web is also presented, as a direct application of the presented algorithm....
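
    As a rough illustration of the segmentation basis (plain K-means on RGB values; the paper's modified variant adds region-coherence terms and a custom colour distance not shown here):

```python
# Plain K-means on RGB pixel values -- a sketch of the clustering step that
# region-based segmentation builds on.
def kmeans_colors(pixels, k=2, iters=10):
    centers = pixels[:k]  # naive init: first k pixels
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)  # assign to nearest center
        centers = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Two red-ish and two blue-ish pixels should yield one red and one blue center.
pixels = [(250, 10, 10), (245, 5, 12), (10, 10, 250), (8, 12, 248)]
centers = kmeans_colors(pixels, k=2)
```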

  14. A Specialized Framework for Data Retrieval Web Applications

    Directory of Open Access Journals (Sweden)

    Jerzy Nogiec

    2005-06-01

    Full Text Available Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.

  15. A specialized framework for data retrieval Web applications

    International Nuclear Information System (INIS)

    Jerzy Nogiec; Kelley Trombly-Freytag; Dana Walbridge

    2004-01-01

    Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system

  16. Secure image retrieval with multiple keys

    Science.gov (United States)

    Liang, Haihua; Zhang, Xinpeng; Wei, Qiuhan; Cheng, Hang

    2018-03-01

    This article proposes a secure image retrieval scheme under a multiuser scenario. In this scheme, the owner first encrypts and uploads images and their corresponding features to the cloud; then, the user submits the encrypted feature of the query image to the cloud; next, the cloud compares the encrypted features and returns encrypted images with similar content to the user. To find the nearest neighbor in the encrypted features, an encryption with multiple keys is proposed, in which the query feature of each user is encrypted by his/her own key. To improve the key security and space utilization, global optimization and Gaussian distribution are, respectively, employed to generate multiple keys. The experiments show that the proposed encryption can provide effective and secure image retrieval for each user and ensure confidentiality of the query feature of each user.

  17. Image Information Retrieval: An Overview of Current Research

    OpenAIRE

    Abby A. Goodrum

    2000-01-01

    This paper provides an overview of current research in image information retrieval and provides an outline of areas for future research. The approach is broad and interdisciplinary and focuses on three aspects of image research (IR): text-based retrieval, content-based retrieval, and user interactions with image information retrieval systems. The review concludes with a call for image retrieval evaluation studies similar to TREC.

  18. Robust histogram-based image retrieval

    Czech Academy of Sciences Publication Activity Database

    Höschl, Cyril; Flusser, Jan

    2016-01-01

    Roč. 69, č. 1 (2016), s. 72-81 ISSN 0167-8655 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : Image retrieval * Noisy image * Histogram * Convolution * Moments * Invariants Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.995, year: 2016 http://library.utia.cas.cz/separaty/2015/ZOI/hoschl-0452147.pdf

  19. Contextual Distance Refining for Image Retrieval

    KAUST Repository

    Islam, Almasri

    2014-01-01

    Recently, a number of methods have been proposed to improve image retrieval accuracy by capturing context information. These methods try to compensate for the fact that a visually less similar image might be more relevant because it depicts the same object. We propose a new, fast method for refining any pairwise distance metric: it works by iteratively discovering the object in the image from the most similar images and then refining the distance metric accordingly. Tests show that our technique improves over the state of the art in accuracy on the MPEG-7 dataset.

  1. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text; as the adage says, "one picture is worth a thousand words." Compared with text, which consists of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less constrained structure of images presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when a limited number of learning examples and little background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans gain knowledge: people can exchange knowledge with others by discussing and contributing information on the web. As a result, web pages have become a living and growing source of information, and one is tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this possibility for image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts so that machines can understand to what extent, and in what sense, an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized: the returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on their semantic closeness to the input query.
The novelty of the system is twofold: first, images are retrieved not only based on text cues but their actual contents as well; second, the grouping

  2. Active learning methods for interactive image retrieval.

    Science.gov (United States)

    Gosselin, Philippe Henri; Cord, Matthieu

    2008-07-01

    Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, they are now being extended to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented with experiments comparing several powerful classification techniques in this information retrieval context. Focusing on interactive methods, the active learning strategy is then described, and the limitations of this approach for CBIR are emphasized before our new active selection process, RETIN, is presented. First, as any active method is sensitive to the estimation of the boundary between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error used to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme for retrieving sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.
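
    The selection idea underlying such active strategies can be sketched in a few lines (generic uncertainty sampling, not the actual RETIN criterion): ask the user to label the images whose current classifier scores lie closest to the decision boundary.

```python
# Uncertainty sampling: pick the images the current relevant/irrelevant
# classifier is least sure about (signed score closest to zero).
def select_for_labeling(scores, n=2):
    # scores: {image_id: signed score from the current model}
    return sorted(scores, key=lambda i: abs(scores[i]))[:n]

scores = {"img_a": 0.9, "img_b": -0.05, "img_c": 0.1, "img_d": -0.8}
to_label = select_for_labeling(scores)  # the two most ambiguous images
```

    RETIN refines this basic loop with the boundary correction and ranking-oriented criterion described above.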

  3. Retrieval and classification of food images.

    Science.gov (United States)

    Farinella, Giovanni Maria; Allegra, Dario; Moltisanti, Marco; Stanco, Filippo; Battiato, Sebastiano

    2016-10-01

    Automatic food understanding from images is an interesting challenge with applications in different domains. In particular, food intake monitoring is becoming more and more important because of the key role it plays in health and market economies. In this paper, we address the study of food image processing from the perspective of Computer Vision. As a first contribution we present a survey of studies in the context of food image processing, from the early attempts to the current state-of-the-art methods. Since retrieval and classification engines able to work on food images are required to build automatic systems for diet monitoring (e.g., to be embedded in wearable cameras), we focus our attention on the representation of food images, because it plays a fundamental role in the understanding engines. Food retrieval and classification is a challenging task since food presents high variability and intrinsic deformability. To properly study the peculiarities of different image representations we propose the UNICT-FD1200 dataset. It is composed of 4754 food images of 1200 distinct dishes acquired during real meals. Each food plate is acquired multiple times, and the overall dataset presents both geometric and photometric variability. The images of the dataset have been manually labeled with 8 categories: Appetizer, Main Course, Second Course, Single Course, Side Dish, Dessert, Breakfast, and Fruit. We have performed tests employing different state-of-the-art representations to assess their performance on the UNICT-FD1200 dataset. Finally, we propose a new representation based on the perceptual concept of Anti-Textons, which encodes spatial information between Textons and outperforms other representations in the context of food retrieval and classification. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Web multimedia information retrieval using improved Bayesian algorithm.

    Science.gov (United States)

    Yu, Yi-Jun; Chen, Chun; Yu, Yi-Min; Lin, Huai-Zhong

    2003-01-01

    The main thrust of this paper is the application of a novel data mining approach to the log of users' feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining and then integrated into the original information space model to improve the accuracy of the new information space model. It can remove clutter and irrelevant text information and helps to eliminate the mismatch between the page author's expression and the user's understanding and expectation. The user space model was also utilized to discover the relationship between high-level and low-level features for assigning weights. The authors propose an improved Bayesian algorithm for data mining. Experiments showed that the proposed algorithm is efficient.
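
    As a hedged illustration of mining logged feedback (a plain Laplace-smoothed Naive Bayes stand-in, not the authors' improved algorithm; the click data is invented):

```python
# Estimate P(relevant | page terms) from logged user feedback, using a plain
# Laplace-smoothed Naive Bayes model as a stand-in for the paper's variant.
from collections import Counter

clicks = [  # (page terms, user judged the page relevant?)
    (["beach", "sunset"], True),
    (["beach", "hotel"], True),
    (["stock", "market"], False),
]

rel, irr = Counter(), Counter()
n_rel = sum(1 for _, r in clicks if r)
n_irr = len(clicks) - n_rel
for terms, r in clicks:
    (rel if r else irr).update(terms)

def relevance_score(terms):
    # Unnormalized class scores, then normalize to a posterior probability.
    p_rel = n_rel / len(clicks)
    p_irr = n_irr / len(clicks)
    for t in terms:  # Laplace-smoothed per-term likelihoods
        p_rel *= (rel[t] + 1) / (n_rel + 2)
        p_irr *= (irr[t] + 1) / (n_irr + 2)
    return p_rel / (p_rel + p_irr)
```

    In the paper, scores like these feed the user space model that reweights the original information space.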

  5. Retrieving high-resolution images over the Internet from an anatomical image database

    Science.gov (United States)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and to retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch mode.

  6. Storage and retrieval of large digital images

    Science.gov (United States)

    Bradley, J.N.

    1998-01-20

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T{sub ij}(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) by successively inputting the tiles T{sub ij}(x,y) in a selected sequence to a DWT routine and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T{sub ij}(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval. 6 figs.
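
    The DWT at the heart of the method can be illustrated with a toy single-level 2-D Haar transform on one tile (a sketch only; the patented method computes a seamless multi-tile DWT with out-of-core coefficient storage):

```python
# Single-level 2-D Haar DWT of one image tile: averages (approximation) and
# differences (detail) along rows, then along columns.
def haar_1d(row):
    avg = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    dif = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return avg + dif

def haar_2d(tile):
    rows = [haar_1d(r) for r in tile]            # transform each row
    cols = [haar_1d(list(c)) for c in zip(*rows)]  # then each column
    return [list(r) for r in zip(*cols)]          # transpose back

tile = [[10, 10, 20, 20],
        [10, 10, 20, 20],
        [30, 30, 40, 40],
        [30, 30, 40, 40]]
coeffs = haar_2d(tile)
# The top-left quadrant of `coeffs` is a half-resolution approximation of the
# tile -- the property that multi-resolution viewing exploits.
```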

  7. Image Retrieval Berdasarkan Fitur Warna, Bentuk, dan Tekstur

    Directory of Open Access Journals (Sweden)

    Rita Layona

    2014-12-01

    Full Text Available Along with the times, information retrieval is no longer performed only on textual data but also on visual data. The technique originally used was Text-Based Image Retrieval (TBIR), but it still has shortcomings, such as the relevance of the retrieved pictures and the additional space required to store meta-data with each image. Seeing the shortcomings of text-based techniques, another approach was developed, namely retrieval based on image content, commonly called Content-Based Image Retrieval (CBIR). In this research, CBIR based on color, shape, and texture is discussed, using color histograms, Gabor filters, and SIFT. This study aimed to compare the results of image retrieval with these techniques. The results show that by combining color, shape, and texture features, the performance of the system can be improved.
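
    Combining features as the study describes is often done by late fusion of per-feature distances; a minimal sketch, assuming min-max normalization and hand-picked weights (both assumptions, not taken from the paper):

```python
# Late fusion: normalize each feature's distances to [0, 1], then combine
# them with per-feature weights into one fused distance per image.
def fused_distance(dists, weights):
    # dists: {feature_name: {image_id: raw distance to the query}}
    total = {}
    for feat, w in weights.items():
        d = dists[feat]
        lo, hi = min(d.values()), max(d.values())
        span = (hi - lo) or 1.0
        for img, val in d.items():
            total[img] = total.get(img, 0.0) + w * (val - lo) / span
    return total

dists = {
    "color":   {"img1": 0.1, "img2": 0.8, "img3": 0.5},
    "texture": {"img1": 0.3, "img2": 0.2, "img3": 0.9},
    "shape":   {"img1": 0.2, "img2": 0.6, "img3": 0.4},
}
weights = {"color": 0.5, "texture": 0.25, "shape": 0.25}
ranking = sorted(dists["color"], key=lambda i: fused_distance(dists, weights)[i])
```

    The weights are the tuning knob: the study's finding is essentially that a fused ranking beats any single-feature ranking.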

  8. Introduction to the JASIST Special Topic Issue on Web Retrieval and Mining: A Machine Learning Perspective.

    Science.gov (United States)

    Chen, Hsinchun

    2003-01-01

    Discusses information retrieval techniques used on the World Wide Web. Topics include machine learning in information extraction; relevance feedback; information filtering and recommendation; text classification and text clustering; Web mining, based on data mining techniques; hyperlink structure; and Web size. (LRW)

  9. Enhancing Image Retrieval System Using Content Based Search ...

    African Journals Online (AJOL)

    The output shows greater efficiency in retrieval because, instead of performing the search on the entire image database, the image category option directs the retrieval engine to the specified category. Also, there is provision to update or modify the different image categories in the image database as the need arises. Keywords: ...

  10. Image Retrieval based on Integration between Color and Geometric Moment Features

    International Nuclear Information System (INIS)

    Saad, M.H.; Saleh, H.I.; Konbor, H.; Ashour, M.

    2012-01-01

    Content-based image retrieval is the retrieval of images based on visual features such as colour, texture, and shape. Current approaches to CBIR differ in terms of which image features are extracted; recent work deals with combinations of distances or scores from different, and usually independent, representations in an attempt to induce high-level semantics from the low-level descriptors of the images. Content-based image retrieval has many application areas, such as education, commerce, the military, searching, biomedicine, and Web image classification. This paper proposes a new image retrieval system which uses colour and geometric moment features to form the feature vectors. Bhattacharyya distance and histogram intersection are used to perform feature matching. This framework integrates the colour histogram, which represents the global feature, with geometric moments as a local descriptor to enhance the retrieval results. The proposed technique is suitable for precisely retrieving images even in deformation cases such as geometric deformations and noise. It is tested on a standard dataset, and the results show that a combination of our approach as a local image descriptor with other global descriptors outperforms other approaches.
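
    The two matching measures named in the abstract can be written down directly for normalized histograms (a generic sketch, not the paper's exact implementation):

```python
# Histogram intersection and Bhattacharyya distance on L1-normalized
# histograms: the two feature-matching measures used by the system.
import math

def histogram_intersection(h1, h2):
    # 1.0 means identical normalized histograms; smaller means less overlap.
    return sum(min(a, b) for a, b in zip(h1, h2))

def bhattacharyya_distance(h1, h2):
    bc = sum(math.sqrt(a * b) for a, b in zip(h1, h2))  # Bhattacharyya coefficient
    return -math.log(max(bc, 1e-12))  # ~0.0 for identical histograms

h_query = [0.5, 0.3, 0.2]
h_same  = [0.5, 0.3, 0.2]
h_other = [0.1, 0.2, 0.7]
```

    In the proposed framework, measures like these score the global colour histogram, while the geometric moments contribute the local part of the match.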

  11. Modelling of chromatic contrast for retrieval of wallpaper images

    OpenAIRE

    Gao, Xiaohong W.; Wang, Yuanlei; Qian, Yu; Gao, Alice

    2015-01-01

    Colour remains one of the key factors in representing an object and consequently has been widely applied in the retrieval of images based on their visual contents. However, colour appearance changes with the viewing surroundings, a phenomenon that has not yet received attention in colour-based image retrieval. To address this effect, in this paper a chromatic contrast model, CAMcc, is developed for the retrieval of colour-intensive images, cementing t...

  12. A Statistical Approach to Retrieving Historical Manuscript Images without Recognition

    National Research Council Canada - National Science Library

    Rath, Toni M; Lavrenko, Victor; Manmatha, R

    2003-01-01

    ...), and word spotting -- an image matching approach (computationally expensive). In this work, the authors present a novel retrieval approach for historical document collections that does not require recognition...

  13. Biased discriminant euclidean embedding for content-based image retrieval.

    Science.gov (United States)

    Bian, Wei; Tao, Dacheng

    2010-02-01

    With many potential multimedia applications, content-based image retrieval (CBIR) has recently gained more attention for image management and web search. A wide variety of relevance feedback (RF) algorithms have been developed in recent years to improve the performance of CBIR systems. These RF algorithms capture the user's preferences and bridge the semantic gap. However, there is still considerable room to improve RF performance, because popular RF algorithms ignore the manifold structure of low-level visual image features. In this paper, we propose the biased discriminative Euclidean embedding (BDEE), which parameterises samples in the original high-dimensional ambient space to discover the intrinsic coordinates of image low-level visual features. BDEE precisely models both the intraclass geometry and the interclass discrimination, and it avoids the undersampled problem. To take unlabelled samples into account, a manifold regularization-based term is introduced and combined with BDEE to form the semi-supervised BDEE, or semi-BDEE for short. To justify the effectiveness of the proposed BDEE and semi-BDEE, we compare them against conventional RF algorithms and show a significant improvement in terms of accuracy and stability on a subset of the Corel image gallery.

  14. Performance analysis of algorithms for retrieval of magnetic resonance images for interactive teleradiology

    Science.gov (United States)

    Atkins, M. Stella; Hwang, Robert; Tang, Simon

    2001-05-01

    We have implemented a prototype system consisting of a Java- based image viewer and a web server extension component for transmitting Magnetic Resonance Images (MRI) to an image viewer, to test the performance of different image retrieval techniques. We used full-resolution images, and images compressed/decompressed using the Set Partitioning in Hierarchical Trees (SPIHT) image compression algorithm. We examined the SPIHT decompression algorithm using both non- progressive and progressive transmission, focusing on the running times of the algorithm, client memory usage and garbage collection. We also compared the Java implementation with a native C++ implementation of the non- progressive SPIHT decompression variant. Our performance measurements showed that for uncompressed image retrieval using a 10Mbps Ethernet, a film of 16 MR images can be retrieved and displayed almost within interactive times. The native C++ code implementation of the client-side decoder is twice as fast as the Java decoder. If the network bandwidth is low, the high communication time for retrieving uncompressed images may be reduced by use of SPIHT-compressed images, although the image quality is then degraded. To provide diagnostic quality images, we also investigated the retrieval of up to 3 images on a MR film at full-resolution, using progressive SPIHT decompression. The Java-based implementation of progressive decompression performed badly, mainly due to the memory requirements for maintaining the image states, and the high cost of execution of the Java garbage collector. Hence, in systems where the bandwidth is high, such as found in a hospital intranet, SPIHT image compression does not provide advantages for image retrieval performance.

  15. Millennial Undergraduate Research Strategies in Web and Library Information Retrieval Systems

    Science.gov (United States)

    Porter, Brandi

    2011-01-01

    This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…

  16. Improving Web Page Retrieval using Search Context from Clicked Domain Names

    NARCIS (Netherlands)

    Li, R.

    Search context is a crucial factor that helps to understand a user’s information need in ad-hoc Web page retrieval. A query log of a search engine contains rich information on issued queries and their corresponding clicked Web pages. The clicked data implies its relevance to the query and can be

  17. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular as a means to search for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database and highlighting the visual information common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
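
    The server-side bag-of-visual-words model can be sketched with a toy vocabulary (the centroids, descriptors, and image names below are invented): local descriptors are quantized to their nearest visual word, images become word counts, and an inverted index maps words to candidate images.

```python
# Toy bag-of-visual-words backend: quantize local descriptors against a tiny
# vocabulary, then answer queries through an inverted word->images index.
from collections import Counter, defaultdict

VOCAB = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # invented visual-word centroids

def quantize(desc):
    # Index of the nearest vocabulary centroid (squared Euclidean distance).
    return min(range(len(VOCAB)),
               key=lambda w: sum((a - b) ** 2 for a, b in zip(desc, VOCAB[w])))

def bow(descriptors):
    return Counter(quantize(d) for d in descriptors)

def build_index(images):
    index = defaultdict(set)
    for name, descs in images.items():
        for word in bow(descs):
            index[word].add(name)
    return index

images = {"door":  [(0.1, 0.0), (0.0, 0.2)],
          "plant": [(0.9, 1.1), (1.0, 0.8)]}
index = build_index(images)

query = [(0.05, 0.1)]
candidates = set.union(*(index[w] for w in bow(query)))
```

    Real systems use vocabularies of tens of thousands to millions of words and rank the candidate set with tf-idf weighting; the inverted-index lookup is what makes the search scale.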

  18. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    Science.gov (United States)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general-purpose image processing software system that has been under continuous development since the late 1960s. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided roughly into two parts: a suite of application programs and an executive that serves as the interface among the applications, the operating system, and the user. There are several hundred application programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image-related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the application programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user-friendly environment. 
The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image

  19. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    Science.gov (United States)

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents a novel multiple-keyword annotation method for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses the confidence score that is assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body-relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.
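
    One way to read the confidence-score idea: each candidate keyword's classifier probability is modulated by how compatible it is with the other candidate keywords according to a body-relation graph. The sketch below illustrates such a combination rule; it is not the paper's actual formula, and the keyword names and weights are invented.

```python
def keyword_confidence(probs, compat):
    """For each candidate keyword, multiply the classifier probability by
    the average pairwise compatibility (toy body-relation graph weights)
    with the other candidates."""
    scores = {}
    for kw, p in probs.items():
        others = [compat.get(frozenset((kw, o)), 0.0) for o in probs if o != kw]
        scores[kw] = p * (sum(others) / len(others)) if others else p
    return scores

probs = {"chest": 0.9, "skull": 0.4}            # illustrative classifier outputs
compat = {frozenset(("chest", "skull")): 0.5}   # illustrative graph edge weight
scores = keyword_confidence(probs, compat)
```

    Keywords that are both probable in isolation and anatomically plausible together thus end up with the highest confidence.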

  20. Content Based Retrieval System for Magnetic Resonance Images

    International Nuclear Information System (INIS)

    Trojachanets, Katarina

    2010-01-01

    The amount of medical images is continuously increasing as a consequence of the constant growth and development of techniques for digital image acquisition. Manual annotation and description of each image is an impractical, expensive and time-consuming approach. Moreover, it is an imprecise and insufficient way of describing all the information stored in medical images. This induces the necessity of developing efficient systems for image storage, annotation and retrieval. Content-based image retrieval (CBIR) emerges as an efficient approach for digital image retrieval from large databases. It includes two phases. In the first phase, the visual content of the image is analyzed and the feature extraction process is performed. An appropriate descriptor, namely a feature vector, is then associated with each image. These descriptors are used in the second phase, i.e. the retrieval process. With the aim of improving the efficiency and precision of content-based image retrieval systems, feature extraction and automatic image annotation techniques are the subject of continuous research and development. Including classification techniques in the retrieval process enables automatic image annotation in an existing CBIR system, and contributes to more efficient and easier image organization in the system. Applying content-based retrieval in the field of magnetic resonance is a big challenge. Magnetic resonance imaging is an image-based diagnostic technique which is widely used in the medical environment, and accordingly the number of magnetic resonance images is growing enormously. Magnetic resonance images provide plentiful medical information at high resolution and are specific in nature. Thus, the capability of CBIR systems to retrieve images from large databases is of great importance for efficient analysis of this kind of image. The aim of this thesis is to propose a content-based retrieval system architecture for magnetic resonance images. To provide the system efficiency, feature
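
    The two phases described above (offline feature extraction, then online matching) reduce, at their simplest, to nearest-neighbor search over the stored feature vectors. A minimal sketch, with toy 2-D vectors standing in for real descriptors:

```python
import numpy as np

def retrieve(query_vec, db_vecs, k=3):
    """Rank database feature vectors by Euclidean distance to the query
    and return the indices of the k most similar images."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return dists.argsort(kind="stable")[:k].tolist()

# Toy "feature vectors" for three database images.
db = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
ranking = retrieve(np.array([0.9, 1.0]), db)   # → [1, 0, 2]
```

    Real systems replace the linear scan with an index structure, but the retrieval phase is still distance ranking in feature space.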

  1. Improved image retrieval based on fuzzy colour feature vector

    Science.gov (United States)

    Ben-Ahmeida, Ahlam M.; Ben Sasi, Ahmed Y.

    2013-03-01

    One popular image indexing technique is Content-Based Image Retrieval (CBIR), an efficient way of retrieving images from an image database automatically based on their visual contents such as colour, texture, and shape. This paper discusses a CBIR method based on colour feature extraction and similarity checking: the query image and all images in the database are divided into pieces, the features of each piece are extracted separately, and the corresponding portions are compared in order to increase retrieval accuracy. The proposed approach is based on fuzzy sets, to overcome the curse of dimensionality: the colour contribution of each pixel is associated with all the bins in the histogram using fuzzy-set membership functions. As a result, the Fuzzy Colour Histogram (FCH) outperformed the Conventional Colour Histogram (CCH) in image retrieval, returning results faster because images were represented as signatures that took less memory, depending on the number of divisions. The results also showed that FCH is less sensitive and more robust to brightness changes than the CCH, with better retrieval recall values.
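
    The fuzzy-membership idea can be sketched directly: instead of incrementing a single bin per pixel, each pixel's value is spread over neighbouring bins through membership functions. The triangular memberships, bin centers and values below are toy choices; the paper's actual membership functions may differ.

```python
import numpy as np

def fuzzy_histogram(values, centers):
    """Fuzzy colour histogram sketch: each pixel value contributes to all
    bins via triangular membership functions centered on `centers`
    (assumed evenly spaced), rather than being hard-assigned to one bin."""
    width = centers[1] - centers[0]
    hist = np.zeros(len(centers))
    for v in values:
        mu = np.clip(1.0 - np.abs(v - centers) / width, 0.0, 1.0)
        hist += mu / mu.sum()            # each pixel contributes 1.0 in total
    return hist / len(values)

centers = np.array([0.0, 0.5, 1.0])
fch = fuzzy_histogram([0.25], centers)   # → [0.5, 0.5, 0.0]
```

    Because a small brightness shift only slides contributions gradually between bins, the resulting histogram changes smoothly, which is the robustness property the abstract reports.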

  2. Pareto-depth for multiple-query image retrieval.

    Science.gov (United States)

    Hsiao, Ko-Jen; Calder, Jeff; Hero, Alfred O

    2015-02-01

    Most content-based image retrieval systems consider either a single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state-of-the-art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts.
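
    A first Pareto front for two queries can be computed with a direct dominance check: a database image lies on the front if no other image is at least as close to both queries. A brute-force sketch with made-up dissimilarity values (the paper pairs this idea with manifold ranking, which is not shown here):

```python
def pareto_front(points):
    """Return indices of non-dominated points, where smaller is better in
    every coordinate (here: dissimilarity to each query image)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(qk <= pk for qk, pk in zip(q, p)) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Dissimilarities of four database images to two different queries.
dissim = [(1, 4), (2, 2), (4, 1), (3, 3)]
first_front = pareto_front(dissim)   # → [0, 1, 2]; (3, 3) is dominated by (2, 2)
```

    Returning images front by front lets the middle of the first front surface results that are good compromises between both query semantics.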

  3. Retrieving top-k prestige-based relevant spatial web objects

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.

    2010-01-01

    The location-aware keyword query returns ranked objects that are near a query location and that have textual descriptions that match query keywords. This query occurs inherently in many types of mobile and traditional web services and applications, e.g., Yellow Pages and Maps services. Previous...... of prestige-based relevance to capture both the textual relevance of an object to a query and the effects of nearby objects. Based on this, a new type of query, the Location-aware top-k Prestige-based Text retrieval (LkPT) query, is proposed that retrieves the top-k spatial web objects ranked according...... to both prestige-based relevance and location proximity. We propose two algorithms that compute LkPT queries. Empirical studies with real-world spatial data demonstrate that LkPT queries are more effective in retrieving web objects than a previous approach that does not consider the effects of nearby...

  4. A novel architecture for information retrieval system based on semantic web

    Science.gov (United States)

    Zhang, Hui

    2011-12-01

    Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web now faces a new challenge of information overload. The challenge before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats suitable for presentation, but machines cannot understand the meaning of a document. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, which provides new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when an information retrieval system lacks sufficient knowledge, it returns a large number of meaningless results to users. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.

  5. 4D reconstruction of the past: the image retrieval and 3D model construction pipeline

    Science.gov (United States)

    Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro

    2014-08-01

    One of the main characteristics of the Internet era we are living in, is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories as well as strategies on how to filter the results, on two levels: a) based on their built-in metadata including geo-location information and b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet Repositories.

  6. A framework for efficient spatial web object retrieval

    DEFF Research Database (Denmark)

    Wu, Dinging; Cong, Gao; Jensen, Christian S.

    2012-01-01

    The conventional Internet is acquiring a geospatial dimension. Web documents are being geo-tagged and geo-referenced objects such as points of interest are being associated with descriptive text documents. The resulting fusion of geo-location and documents enables new kinds of queries that take...

  7. An Intelligent Web Digital Image Metadata Service Platform for Social Curation Commerce Environment

    Directory of Open Access Journals (Sweden)

    Seong-Yong Hong

    2015-01-01

    Full Text Available Information management includes multimedia data management, knowledge management, collaboration, and agents, all of which are supporting technologies for XML. XML technologies have an impact on multimedia databases as well as collaborative technologies and knowledge management. That is, e-commerce documents are encoded in XML and are gaining much popularity for business-to-business or business-to-consumer transactions. Recently, internet sites such as e-commerce and shopping mall sites have come to handle a great deal of image and multimedia information. This paper proposes an intelligent web digital image information retrieval platform, which adopts XML technology for the social curation commerce environment. To support object-based content retrieval on product catalog images containing multiple objects, we describe multilevel metadata structures representing the local features, global features, and semantics of image data. To enable semantic-based and content-based retrieval on such image data, we design an XML-Schema for the proposed metadata. We also describe how to automatically transform the retrieval results into forms suitable for various user environments, such as web browsers or mobile devices, using XSLT. The proposed scheme can be utilized to enable efficient e-catalog metadata sharing between systems, and it will contribute to the improvement of retrieval correctness and user satisfaction in semantic-based web digital image information retrieval.
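
    A multilevel metadata record of the kind described, global features plus per-object local features, might be serialized to XML along these lines. The element names here are purely illustrative; the paper defines its own XML-Schema.

```python
import xml.etree.ElementTree as ET

def make_image_metadata(image_id, global_feats, objects):
    """Build a toy multilevel metadata record (global features plus
    per-object local features) as an XML string."""
    root = ET.Element("image", id=image_id)
    g = ET.SubElement(root, "globalFeatures")
    for name, value in global_feats.items():
        ET.SubElement(g, name).text = str(value)
    for obj in objects:
        o = ET.SubElement(root, "object", label=obj["label"])
        ET.SubElement(o, "colorHistogram").text = " ".join(map(str, obj["hist"]))
    return ET.tostring(root, encoding="unicode")

xml = make_image_metadata("p001", {"dominantColor": "red"},
                          [{"label": "shoe", "hist": [3, 1, 0]}])
```

    Records in this shape are what an XSLT stylesheet would then transform into browser- or mobile-specific views.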

  8. Comparing the Scale of Web Subject Directories Precision in Technical-Engineering Information Retrieval

    Directory of Open Access Journals (Sweden)

    Mehrdokht Wazirpour Keshmiri

    2012-07-01

    Full Text Available The main purpose of this research was to compare the precision of web subject directories in retrieving technical-engineering information. Data gathering was documentary and webometric. Technical-engineering keywords on twenty different subjects were chosen from the IEEE (Institute of Electrical and Electronics Engineers) and from engineering journals hosted on the ScienceDirect site. These keywords were searched in five heavily used web subject directories: Yahoo, Google, Infomine, Intute, and Dmoz. Because the first results returned by search tools are usually the most closely connected to the search keywords, the first ten results of every search were evaluated. The assessments covered precision, error rate, and the ratio of items retrieved in technical-engineering categories to all items retrieved. The criteria used to determine precision, drawn from widely used standards in the literature, were the presence of the keywords in the title, the appearance of keywords in parts of the retrieved web pages, keyword adjacency, the page URL, the page description, and the subject categories. The data were analyzed with the Kruskal-Wallis test and Fisher's LSD. The results revealed a meaningful difference in the precision of web subject directories for retrieving technical-engineering information, confirming the hypothesis. From the standpoint of precision, the directories ranked as follows: Google, Yahoo, Intute, Dmoz, and Infomine. The error rate observed in the first results was another criterion used to compare the directories; here Yahoo had the lowest error rate and Infomine the highest. This research also compared, across the directories, the ratio of items retrieved in technical-engineering categories to all retrieved items, and the results revealed a meaningful difference between them. 

  9. Color and neighbor edge directional difference feature for image retrieval

    Institute of Scientific and Technical Information of China (English)

    Chaobing Huang; Shengsheng Yu; Jingli Zhou; Hongwei Lu

    2005-01-01

    A novel image feature, termed the neighbor edge directional difference unit histogram, is proposed, in which the neighbor edge directional difference unit is defined and computed for every pixel in the image and used to generate the histogram. This histogram and a color histogram are used as feature indexes to retrieve color images. The feature is invariant to image scaling and translation and is more descriptive of natural color images. Experimental results show that the feature achieves better retrieval performance than other color-spatial features.
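
    A loose sketch of the general idea, quantized edge directions compared between neighboring pixels and accumulated into a histogram, is given below. This is not the paper's exact unit definition; the quantization and the horizontal-neighbor choice are simplifying assumptions.

```python
import numpy as np

def direction_diff_histogram(gray, n_dirs=4):
    """Quantize each pixel's gradient direction into n_dirs codes, then
    histogram the code differences between horizontally adjacent pixels."""
    gy, gx = np.gradient(gray.astype(float))
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # direction in [0, pi)
    codes = np.minimum((ang / np.pi * n_dirs).astype(int), n_dirs - 1)
    diff = np.abs(codes[:, 1:] - codes[:, :-1])          # neighbour differences
    hist = np.bincount(diff.ravel(), minlength=n_dirs).astype(float)
    return hist / hist.sum()

ramp = np.arange(16, dtype=float).reshape(4, 4)          # uniform gradient
h_edge = direction_diff_histogram(ramp)                  # → [1.0, 0.0, 0.0, 0.0]
```

    Like the paper's feature, such a histogram depends on relative edge structure rather than absolute position, so it is unchanged by translation.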

  10. Generating region proposals for histopathological whole slide image retrieval.

    Science.gov (United States)

    Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu; Shi, Jun

    2018-06-01

    Content-based image retrieval is an effective method for histopathological image analysis. However, given a database of huge whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is significant and difficult. Moreover, histopathological images can only be annotated by pathologists, resulting in a lack of labeling information. Therefore, it is an important and challenging task to generate ROIs from WSIs and retrieve images with few labels. This paper presents a novel unsupervised region-proposing method for histopathological WSIs based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region mergence and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that for one WSI, our region-proposing method can generate 7.3 thousand contoured regions which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reach a precision of 91% with only 10% of images labeled. The unsupervised region-proposing method can generate regions as predictions of lesions in histopathological WSIs. The region proposals can also serve as training samples for machine-learning models for image retrieval. The proposed hashing method achieves fast and precise image retrieval with a small number of labels. Furthermore, the proposed methods can potentially be applied in online computer-aided-diagnosis systems. 
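
    The retrieval step over binary hashing codes comes down to ranking by Hamming distance, which is why 136 thousand images can be searched in a fraction of a second. A minimal sketch with toy 4-bit codes (real systems use longer codes produced by the learned hash functions):

```python
import numpy as np

def hamming_search(query_code, db_codes, k=2):
    """Rank database images by Hamming distance between binary hash
    codes; with short codes this stays fast even for huge databases."""
    dists = (db_codes != query_code).sum(axis=1)
    return dists.argsort(kind="stable")[:k].tolist()

codes = np.array([[0, 1, 1, 0], [1, 1, 1, 0], [0, 0, 0, 0]])
nearest = hamming_search(np.array([0, 1, 1, 0]), codes)   # → [0, 1]
```

    In practice the codes are also packed into machine words so the comparison becomes a handful of XOR-and-popcount operations per database entry.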

  11. The Nuclear Science References (NSR) database and Web Retrieval System

    International Nuclear Information System (INIS)

    Pritychenko, B.; Betak, E.; Kellett, M.A.; Singh, B.; Totans, J.

    2011-01-01

    The Nuclear Science References (NSR) database together with its associated Web interface is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  12. Density-based retrieval from high-similarity image databases

    DEFF Research Database (Denmark)

    Hansen, Michael Edberg; Carstensen, Jens Michael

    2004-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce a me...

  13. Web-based information search and retrieval: effects of strategy use and age on search success.

    Science.gov (United States)

    Stronge, Aideen J; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    The purpose of this study was to investigate the relationship between strategy use and search success on the World Wide Web (i.e., the Web) for experienced Web users. An additional goal was to extend understanding of how the age of the searcher may influence strategy use. Current investigations of information search and retrieval on the Web have provided an incomplete picture of Web strategy use because participants have not been given the opportunity to demonstrate their knowledge of Web strategies while also searching for information on the Web. Using both behavioral and knowledge-engineering methods, we investigated searching behavior and system knowledge for 16 younger adults (M = 20.88 years of age) and 16 older adults (M = 67.88 years). Older adults were less successful than younger adults in finding correct answers to the search tasks. Knowledge engineering revealed that the age-related effect resulted from ineffective search strategies and amount of Web experience rather than age per se. Our analysis led to the development of a decision-action diagram representing search behavior for both age groups. Older adults had more difficulty than younger adults when searching for information on the Web. However, this difficulty was related to the selection of inefficient search strategies, which may have been attributable to a lack of knowledge about available Web search strategies. Actual or potential applications of this research include training Web users to search more effectively and suggestions to improve the design of search engines.

  14. A Learning State-Space Model for Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lee Greg C

    2007-01-01

    Full Text Available This paper proposes an approach based on a state-space model for learning the user concepts in image retrieval. We first design a scheme of region-based image representation based on concept units, which are integrated with different types of feature spaces and with different region scales of image segmentation. The design of the concept units aims at describing similar characteristics at a certain perspective among relevant images. We present the details of our proposed approach based on a state-space model for interactive image retrieval, including likelihood and transition models, and we also describe some experiments that show the efficacy of our proposed model. This work demonstrates the feasibility of using a state-space model to estimate the user intuition in image retrieval.

  15. Effective Web and Desktop Retrieval with Enhanced Semantic Spaces

    Science.gov (United States)

    Daoud, Amjad M.

    We describe the design and implementation of the NETBOOK prototype system for collecting, structuring and efficiently creating semantic vectors for concepts, noun phrases, and documents from a corpus of free full-text ebooks available on the World Wide Web. Automatic generation of concept maps from correlated index terms and extracted noun phrases is used to build a powerful conceptual index of individual pages. To ensure scalability of our system, dimension reduction is performed using Random Projection [13]. Furthermore, we present a complete evaluation of the relative effectiveness of the NETBOOK system versus the Google Desktop [8].
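
    Random Projection, cited above for scalability, reduces dimensionality by multiplying feature vectors with a random Gaussian matrix; by the Johnson-Lindenstrauss lemma, pairwise distances are approximately preserved. A minimal sketch (the matrix shape and scaling are the standard textbook choice, not necessarily NETBOOK's):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(X, target_dim):
    """Project rows of X into target_dim dimensions with a random
    Gaussian matrix, approximately preserving pairwise distances."""
    R = rng.normal(size=(X.shape[1], target_dim)) / np.sqrt(target_dim)
    return X @ R

X = rng.normal(size=(10, 100))      # 10 "documents" in a 100-d term space
Z = random_projection(X, 8)         # reduced to 8 dimensions
```

    Unlike SVD-based reduction, the projection matrix needs no training pass over the corpus, which is what makes the method attractive at scale.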

  16. Sigma: Web Retrieval Interface for Nuclear Reaction Data

    International Nuclear Information System (INIS)

    Pritychenko, B.; Sonzogni, A.A.

    2008-01-01

    The authors present Sigma, a Web-rich application which provides user-friendly access to the processing and plotting of evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The main interface includes browsing using a periodic table and a directory tree, basic and advanced search capabilities, interactive plots of cross sections, angular distributions and spectra, comparisons between evaluated and experimental data, and computations between different cross section sets. Interactive energy-angle plots, neutron cross section uncertainty plots and visualization of covariance matrices are under development. Sigma is publicly available at the National Nuclear Data Center website at www.nndc.bnl.gov/sigma

  17. Folksonomies indexing and retrieval in web 2.0

    CERN Document Server

    Peters, Isabella

    2009-01-01

    In Web 2.0 users not only make heavy use of Collaborative Information Services in order to create, publish and share digital information resources - what is more, they index and represent these resources via their own keywords, so-called tags. The sum of this user-generated metadata of a Collaborative Information Service is also called a Folksonomy. In contrast to professionally created and highly structured metadata, e.g. subject headings, thesauri, classification systems or ontologies, which are applied in libraries, corporate information architectures or commercial databases and which were deve

  18. Information Retrieval Strategies of Millennial Undergraduate Students in Web and Library Database Searches

    Science.gov (United States)

    Porter, Brandi

    2009-01-01

    Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web based and library-based online information retrieval systems. The content, ease of use, and required search…

  19. The Role of the Medical Students’ Emotional Mood in Information Retrieval from the Web

    Directory of Open Access Journals (Sweden)

    Marzieh Yari Zanganeh

    2018-04-01

    Full Text Available Background: Online information retrieval is a process whose outcome is influenced by changes in the emotional mood of the user. It seems reasonable to include emotional aspects in developing information retrieval systems in order to optimize the user experience. Therefore, this study aimed to identify the role of positive and negative affect in the web information seeking process among students of medical sciences. Methods: Methodologically, the present study was experimental and applied research. In keeping with the experimental method, observation and a questionnaire were used. The participants were students of various fields of medical sciences. The research sample included 50 students of Shiraz University of Medical Sciences selected through purposeful sampling; they regularly used the World Wide Web and the Google engine for information retrieval in educational, research, personal, or managerial activities. In order to collect the data, search tasks were characterized by topic, sequence in the search process, difficulty level, and the searcher's interest in a task. Face and content validity of the questionnaire were confirmed by experts, and reliability was tested with Cronbach's alpha; the coefficients (PA = 0.777, NA = 0.754) showed a high rate of reliability for the PANAS questionnaire. The collected data were analyzed using SPSS, version 20.0, and the research hypotheses were tested with t-tests and paired-samples t-tests at a significance level of P<0.05. Conclusion: Information retrieval systems on the web should identify positive and negative affect in the information seeking process from a set of perceptible signs in human-computer interaction. Automatic identification of users' affect opens new dimensions for user modeling and for information retrieval systems aiming at successful retrieval from the web.

  20. Multi region based image retrieval system

    Indian Academy of Sciences (India)

    data mining, information theory, statistics and psychology. ... ground complication and independent of image size and orientation (Zhang 2007). ... Figure 2. Significant regions: (a) the input image, (b) the primary significant region, (c) the ...

  1. Learning tag relevance by neighbor voting for social image retrieval

    NARCIS (Netherlands)

    Li, X.; Snoek, C.G.M.; Worring, M.

    2008-01-01

    Social image retrieval is important for exploiting the increasing amounts of amateur-tagged multimedia such as Flickr images. Since amateur tagging is known to be uncontrolled, ambiguous, and personalized, a fundamental problem is how to reliably interpret the relevance of a tag with respect to the
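
    The neighbor-voting idea can be sketched as: count how many of an image's visual neighbors carry a tag and subtract the count expected from the tag's frequency in the whole collection, so that generically popular tags do not dominate. The tag sets below are invented toy data.

```python
def tag_relevance(tag, neighbor_tags, all_tags):
    """Neighbor-voting sketch: a tag is judged relevant to an image when
    its visually similar neighbors use it more often than the
    collection-wide average would predict."""
    votes = sum(tag in tags for tags in neighbor_tags)
    prior = sum(tag in tags for tags in all_tags) / len(all_tags)
    return votes - len(neighbor_tags) * prior

collection = [{"dog"}, {"dog", "grass"}, {"car"}, {"car"}, {"sky"}, {"car"}]
neighbors = collection[:3]                  # the image's 3 visual neighbors
score = tag_relevance("dog", neighbors, collection)
```

    A positive score suggests the tag describes the visual content rather than personal or ambiguous usage; tags can then be ranked by this score at retrieval time.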

  2. TRADEMARK IMAGE RETRIEVAL USING LOW LEVEL FEATURE EXTRACTION IN CBIR

    OpenAIRE

    Latika Pinjarkar*, Manisha Sharma, Smita Selot

    2016-01-01

    Trademarks play a significant role in industry and commerce. They are an important component of a company's industrial property, and infringement can carry severe penalties. Designing an efficient trademark retrieval system, and assessing new marks for uniqueness, is therefore becoming a very important task nowadays. In a trademark image retrieval system, a new candidate trademark is compared with already registered trademarks to check that there is no possibility of resembl...

  3. Measuring and Predicting Tag Importance for Image Retrieval.

    Science.gov (United States)

    Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay

    2017-12-01

    Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between the visual and textual modalities during MIR training. This will further lead to degraded retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model to jointly exploit visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual feature and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.

  4. Online Hashing for Scalable Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Peng Li

    2018-05-01

    Full Text Available Recently, hashing-based large-scale remote sensing (RS) image retrieval has attracted much attention. Many new hashing algorithms have been developed and successfully applied to fast RS image retrieval tasks. However, there exists an important problem rarely addressed in the research literature of RS image hashing. The RS images are practically produced in a streaming manner in many real-world applications, which means the data distribution keeps changing over time. Most existing RS image hashing methods are batch-based models whose hash functions are learned once and for all and kept fixed all the time. Therefore, the pre-trained hash functions might not fit the ever-growing new RS images. Moreover, the batch-based models have to load all the training images into memory for model learning, which consumes considerable computing and memory resources. To address the above deficiencies, we propose a new online hashing method, which learns and adapts its hashing functions with respect to the newly incoming RS images in terms of a novel online partial random learning scheme. Our hash model is updated in a sequential mode such that the representative power of the learned binary codes for RS images is improved accordingly. Moreover, benefiting from the online learning strategy, our proposed hashing approach is quite suitable for scalable real-world remote sensing image retrieval. Extensive experiments on two large-scale RS image databases under the online setting demonstrated the efficacy and effectiveness of the proposed method.

  5. Biomedical image retrieval using microscopic configuration with ...

    Indian Academy of Sciences (India)

    G DEEP

    2018-03-10

    Mar 10, 2018 ... The selection of feature descriptors affects the image .... Example of obtaining LBP for 3 × 3 neighbourhoods (adopted from Ojala et al [9]). ... Directional binary wavelet patterns for biomedical image indexing ...

  6. Dictionary Pruning with Visual Word Significance for Medical Image Retrieval.

    Science.gov (United States)

    Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei

    2016-02-12

    Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency.

  7. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

    Full Text Available A content based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used in color based retrieval for histopathological images are the color co-occurrence matrix (CCM) and the histogram with meta features. For texture based retrieval, GLCM (gray level co-occurrence matrix) and the local binary pattern (LBP) were used. For shape based retrieval, Canny edge detection and Otsu's method with multivariable threshold were used. Texture and shape based retrieval were implemented using MRI (magnetic resonance) images. The most remarkable characteristic of the article is its content based approach for each medical imaging modality. Our efforts were focused on the initial visual search. From our experiment, the histogram with meta features in color based retrieval for histopathological images shows a precision of 60 % and recall of 30 %, whereas GLCM in texture based retrieval for MRI images shows a precision of 70 % and recall of 20 %. Shape based retrieval for MRI images shows a precision of 50 % and recall of 25 %. The retrieval results show that this simple approach is successful.
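    Precision and recall figures like those reported above are computed from the retrieved set and the relevant set for each query; a minimal sketch (the id sets are hypothetical):

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query.

    retrieved -- ids returned by the retrieval system
    relevant  -- ids that are truly relevant to the query
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 10 images returned, 20 relevant in the collection, 6 correct.
p, r = precision_recall(range(10), list(range(6)) + list(range(100, 114)))
print(p, r)  # 0.6 0.3
```

    Averaging these per-query values over a query set gives the percentages quoted in the abstract.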

  8. Applying Semantic Web technologies to improve the retrieval, credibility and use of health-related web resources.

    Science.gov (United States)

    Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela

    2011-06-01

    The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.

  9. SOFTWARE FOR REGIONS OF INTEREST RETRIEVAL ON MEDICAL 3D IMAGES

    Directory of Open Access Journals (Sweden)

    G. G. Stromov

    2014-01-01

    Full Text Available Background. Implementation of software for retrieval of regions of interest (ROIs) in 3D medical images is described in this article. It has been tested against a large volume of model MRIs. Material and methods. We tested the software against normal and pathological (severe multiple sclerosis) model MRIs from the BrainWeb resource. The technological stack is based on open-source cross-platform solutions. We implemented the storage system on MariaDB (an open-source fork of MySQL) with PL/SQL extensions. Python 2.7 scripting was used to automate extract-transform-load operations. The computational core is written in Java 7 with the Spring 3 framework. MongoDB was used as a cache in the cluster of workstations. Maven 3 was chosen as the dependency manager and build system, and the project is hosted on Github. Results. As testing on SSMU's LAN has shown, the software developed quite efficiently retrieves ROIs matching the morphological substratum in pathological MRIs. Conclusion. Automation of the diagnostic process using medical imaging reduces the subjective component in decision making and increases the availability of high-tech medicine. The software shown in this article is a complete solution for ROI retrieval and segmentation on model medical images in fully automated mode. We would like to thank Robert Vincent for great help with consulting on usage of the BrainWeb resource.

  10. Multi-clues image retrieval based on improved color invariants

    Science.gov (United States)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, which mainly benefits from the utilization of text retrieval technology, such as the bag-of-features (BOF) model and the inverted-file structure. Meanwhile, because robust local feature invariants are selected to establish the BOF, the retrieval precision of BOF is enhanced, especially when it is applied to a large-scale database. However, these local feature invariants mainly consider the geometric variance of the objects in the images, and thus the color information of the objects fails to be exploited. Because of the development of information technology and the Internet, the majority of our retrieval objects are color images. Therefore, retrieval performance can be further improved through proper utilization of the color information. We propose an improved method by analyzing the flaw of the shadow-shading quasi-invariant. The response and performance of the shadow-shading quasi-invariant at object edges under lighting variance are enhanced. The color descriptors of the invariant regions are extracted and integrated into BOF based on the local feature. The robustness of the algorithm and the improvement of the performance are verified in the final experiments.

  11. Recent advances in intelligent image search and video retrieval

    CERN Document Server

    2017-01-01

    This book initially reviews the major feature representation and extraction methods and effective learning and recognition approaches, which have broad applications in the context of intelligent image search and video retrieval. It subsequently presents novel methods, such as improved soft assignment coding, Inheritable Color Space (InCS) and the Generalized InCS framework, the sparse kernel manifold learner method, the efficient Support Vector Machine (eSVM), and the Scale-Invariant Feature Transform (SIFT) features in multiple color spaces. Lastly, the book presents clothing analysis for subject identification and retrieval, and performance evaluation methods of video analytics for traffic monitoring. Digital images and videos are proliferating at an amazing speed in the fields of science, engineering and technology, media and entertainment. With the huge accumulation of such data, keyword searches and manual annotation schemes may no longer be able to meet the practical demand for retrieving relevant conte...

  12. AMARSI: Aerosol modeling and retrieval from multi-spectral imagers

    NARCIS (Netherlands)

    Leeuw, G. de; Curier, R.L.; Staroverova, A.; Kokhanovsky, A.; Hoyningen-Huene, W. van; Rozanov, V.V.; Burrows, J.P.; Hesselmans, G.; Gale, L.; Bouvet, M.

    2008-01-01

    The AMARSI project aims at the development and validation of aerosol retrieval algorithms over ocean. One algorithm will be developed for application with data from the Multi Spectral Imager (MSI) on EarthCARE. A second algorithm will be developed using the combined information from AATSR and MERIS,

  13. Content-based image retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Broek, E.L. van den; Vuurpijl, L.G.; Kisters, P. M. F.; Schmid, J.C.M. von; Moens, M.F.; Busser, R. de; Hiemstra, D.; Kraaij, W.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged as non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on 11

  14. Content-Based Image Retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Moens, Marie-Francine; van den Broek, Egon; Vuurpijl, L.G.; de Brusser, Rik; Kisters, P.M.F.; Hiemstra, Djoerd; Kraaij, Wessel; von Schmid, J.C.M.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged as non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on 11

  15. Grid-Independent Compressive Imaging and Fourier Phase Retrieval

    Science.gov (United States)

    Liao, Wenjing

    2013-01-01

    This dissertation is composed of two parts. In the first part techniques of band exclusion(BE) and local optimization(LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…

  16. Signature detection and matching for document image retrieval.

    Science.gov (United States)

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from clustered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.

  17. Image Retrieval Algorithm Based on Discrete Fractional Transforms

    Science.gov (United States)

    Jindal, Neeru; Singh, Kulbir

    2013-06-01

    Discrete fractional transforms are signal processing tools that suggest computational algorithms and solutions for various sophisticated applications. In this paper, a new technique to retrieve an encrypted and scrambled image based on discrete fractional transforms is proposed. A two-dimensional image was encrypted using discrete fractional transforms with three fractional orders and two random phase masks placed in the two intermediate planes. A significant feature of discrete fractional transforms is the extra degree of freedom provided by their fractional orders. Security strength was enhanced (1024!)^4 times by scrambling the encrypted image. In the decryption process, image retrieval is sensitive to both the correct fractional order keys and the scrambling algorithm. The proposed approach makes brute-force attacks infeasible. Mean square error and relative error are the metrics used to verify the validity of the proposed method.
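    The two verification metrics named above, mean square error and relative error between the original and the decrypted image, can be computed as follows (a generic sketch, not tied to the paper's transform; the test images are toy data):

```python
import numpy as np

def mse(original, retrieved):
    """Mean square error between original and decrypted images."""
    diff = original.astype(float) - retrieved.astype(float)
    return (diff ** 2).mean()

def relative_error(original, retrieved):
    """Relative error: ||original - retrieved|| / ||original|| (Frobenius)."""
    o = original.astype(float)
    return np.linalg.norm(o - retrieved) / np.linalg.norm(o)

a = np.full((4, 4), 10.0)
print(mse(a, a), relative_error(a, a + 1.0))  # 0.0 0.1
```

    A correct key set drives both metrics toward zero; a wrong fractional order or scrambling key leaves them large.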

  18. Applying GA for Optimizing the User Query in Image and Video Retrieval

    OpenAIRE

    Ehsan Lotfi

    2014-01-01

    In an information retrieval system, the query can be made by user sketch. The new method presented here optimizes the user sketch and applies the optimized query to retrieve the information. This optimization may be used in Content-Based Image Retrieval (CBIR) and Content-Based Video Retrieval (CBVR), which is based on trajectory extraction. To optimize the retrieval process, one stage of retrieval is performed by the user sketch. The retrieval criterion is based on the proposed distance met...

  19. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

    Science.gov (United States)

    Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo

    2010-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. Computed Tomography (CT) scans are widely used in the diagnosis of TBI. Nowadays, a large amount of TBI CT data is stored in hospital radiology departments. Such data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to search out cases relevant to the current study case. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system which works on TBI CT images. In this web-based system, users can query by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. Specifically, cases of TBI CT images often present diffuse or focal lesions. In the TBIdoc system, these pathological image features are represented as bin-based binary feature vectors. We use the Jaccard-Needham measure as the similarity measurement. Based on these, we propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, which shows that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support to the clinical decision-making process. It may also contribute to computer-aided education in TBI.
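    The Jaccard-Needham measure mentioned above compares bin-based binary feature vectors by the ratio of shared set bits to all set bits; an illustrative sketch (the vector contents are hypothetical, not real lesion features):

```python
def jaccard_similarity(a, b):
    """Jaccard coefficient between two equal-length binary vectors:
    |intersection| / |union| of the set bits."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 1.0

a = [1, 0, 1, 1, 0, 0]
b = [1, 1, 1, 0, 0, 0]
print(jaccard_similarity(a, b))  # 0.5
```

    The system's 3D measure then aggregates such per-slice similarities over two series of CT slices.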

  20. Image based book cover recognition and retrieval

    Science.gov (United States)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work we develop a graphical user interface (GUI) using MATLAB for users to check information related to books in real time. We take photos of the book cover using the GUI; then, using the MSER algorithm, the system automatically detects all the features from the input image, after which it filters out non-text features based on morphological differences between text and non-text regions. We implemented a text character alignment algorithm which improves the accuracy of the original text detection. We also examine the built-in MATLAB OCR recognition algorithm and a commonly used open-source OCR; to obtain better detection results, a post-detection algorithm is implemented together with natural language processing to perform word correction and false-detection inhibition. Finally, the detection result is linked to the internet to perform online matching. More than 86% accuracy can be obtained by this algorithm.

  1. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops a Web-based distributed image analysis system processing the Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  2. Mutual information based feature selection for medical image retrieval

    Science.gov (United States)

    Zhi, Lijia; Zhang, Shaomin; Li, Yan

    2018-04-01

    In this paper, the authors propose a mutual information based method for lung CT image retrieval. This method is designed to adapt to different datasets and different retrieval tasks. For practical application, the method avoids using a large amount of training data. Instead, with a well-designed training process and robust fundamental features and measurements, the method can achieve promising performance while keeping training computation economical. Experimental results show that the method has potential practical value for clinical routine application.
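    As a rough illustration of mutual-information-based feature scoring (a generic plug-in estimator, not the paper's exact method), the MI in bits between a discretized feature and class labels can be computed as:

```python
from collections import Counter
from math import log2

def mutual_information(feature, labels):
    """Plug-in estimate of I(X;Y) in bits between a discrete feature
    and class labels, from their empirical joint distribution."""
    n = len(feature)
    px = Counter(feature)
    py = Counter(labels)
    pxy = Counter(zip(feature, labels))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        # p_xy * log2(p_xy / (p_x * p_y)), with counts expanded
        mi += p_xy * log2(p_xy * n * n / (px[x] * py[y]))
    return mi

# A binary feature that perfectly predicts the label carries 1 bit.
print(mutual_information([0, 0, 1, 1], ['a', 'a', 'b', 'b']))  # 1.0
```

    Features with higher MI against the retrieval-relevant labels are the ones worth keeping; an independent feature scores near zero.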

  3. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    NARCIS (Netherlands)

    Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.

    2009-01-01

    The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items the number of retrieved items is indicated as >100,000. The problem studied here is how to find the exact number of items in a query that

  4. Hybrid Histogram Descriptor: A Fusion Feature Representation for Image Retrieval.

    Science.gov (United States)

    Feng, Qinghe; Hao, Qiaohong; Chen, Yuqi; Yi, Yugen; Wei, Ying; Dai, Jiangyan

    2018-06-15

    Currently, visual sensors are becoming increasingly affordable and widespread, accelerating the growth of image data. Image retrieval has attracted increasing interest due to space exploration, industrial, and biomedical applications. Nevertheless, designing an effective feature representation is acknowledged as a hard yet fundamental issue. This paper presents a fusion feature representation called a hybrid histogram descriptor (HHD) for image retrieval. The proposed descriptor comprises two histograms jointly: a perceptually uniform histogram which is extracted by exploiting the color and edge orientation information in perceptually uniform regions; and a motif co-occurrence histogram which is acquired by calculating the probability of a pair of motif patterns. To evaluate the performance, we benchmarked the proposed descriptor on the RSSCN7, AID, Outex-00013, Outex-00014 and ETHZ-53 datasets. Experimental results suggest that the proposed descriptor is more effective and robust than ten recent fusion-based descriptors under the content-based image retrieval framework. The computational complexity was also analyzed to give an in-depth evaluation. Furthermore, compared with state-of-the-art convolutional neural network (CNN)-based descriptors, the proposed descriptor also achieves comparable performance, but does not require any training process.

  5. Bat-Inspired Algorithm Based Query Expansion for Medical Web Information Retrieval.

    Science.gov (United States)

    Khennak, Ilyes; Drias, Habiba

    2017-02-01

    With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people of several backgrounds are now using Web search engines to acquire medical information, including information about a specific disease, medical treatment or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulties in forming appropriate queries to articulate their inquiries, which renders their search queries imprecise due to the use of unclear keywords. The use of these ambiguous and vague queries to describe the patients' needs has resulted in a failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is Query Expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the on-line medical information database, show that the proposed approach is more effective and efficient compared to the baseline.

  6. Research of image retrieval technology based on color feature

    Science.gov (United States)

    Fu, Yanjun; Jiang, Guangyu; Chen, Fengying

    2009-10-01

    Recently, with the development of the communication and the computer technology and the improvement of the storage technology and the capability of the digital image equipment, more and more image resources are given to us than ever. And thus the solution of how to locate the proper image quickly and accurately is wanted.The early method is to set up a key word for searching in the database, but now the method has become very difficult when we search much more picture that we need. In order to overcome the limitation of the traditional searching method, content based image retrieval technology was aroused. Now, it is a hot research subject.Color image retrieval is the important part of it. Color is the most important feature for color image retrieval. Three key questions on how to make use of the color characteristic are discussed in the paper: the expression of color, the abstraction of color characteristic and the measurement of likeness based on color. On the basis, the extraction technology of the color histogram characteristic is especially discussed. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based the partition-overall histogram is proposed. The basic thought of it is to divide the image space according to a certain strategy, and then calculate color histogram of each block as the color feature of this block. Users choose the blocks that contain important space information, confirming the right value. The system calculates the distance between the corresponding blocks that users choosed. Other blocks merge into part overall histograms again, and the distance should be calculated. Then accumulate all the distance as the real distance between two pictures. 
The partition-overall histogram comprehensive utilizes advantages of two methods above, by choosing blocks makes the feature contain more spatial information which can improve performance; the distances between partition-overall histogram
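    The partition step described above can be sketched as follows, assuming grayscale images, an equal-size block grid, and a simple L1 distance between corresponding block histograms (all simplifications of the paper's color scheme):

```python
import numpy as np

def block_histograms(img, grid=2, bins=8):
    """Split a grayscale image into grid x grid blocks and return one
    normalized intensity histogram per block (the 'partition' part)."""
    h, w = img.shape
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            hists.append(hist / hist.sum())
    return hists

def partition_distance(hists_a, hists_b):
    """Accumulate L1 distances between corresponding block histograms."""
    return sum(np.abs(ha - hb).sum() for ha, hb in zip(hists_a, hists_b))

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (64, 64))
img_b = rng.integers(0, 256, (64, 64))
print(partition_distance(block_histograms(img_a), block_histograms(img_b)))
```

    In the full method, only user-chosen blocks are compared block-to-block; the remaining blocks are merged into partial overall histograms before their distance is added in.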

  7. Color-Based Image Retrieval from High-Similarity Image Databases

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Carstensen, Jens Michael

    2003-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce...... a method for HSID retrieval using a similarity measure based on a linear combination of Jeffreys-Matusita (JM) distances between distributions of color (and color derivatives) estimated from a set of automatically extracted image regions. The weight coefficients are estimated based on optimal retrieval...... performance. Experimental results on the difficult task of visually identifying clones of fungal colonies grown in a petri dish and categorization of pelts show a high retrieval accuracy of the method when combined with standardized sample preparation and image acquisition....
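    The Jeffreys-Matusita distance underlying this similarity measure has a simple closed form for discrete distributions; a sketch (the histogram inputs are hypothetical, not estimated from image regions as in the paper):

```python
import numpy as np

def jeffreys_matusita(p, q):
    """Jeffreys-Matusita distance between two discrete distributions,
    in its common histogram form sqrt(sum (sqrt(p_i) - sqrt(q_i))^2);
    it ranges from 0 (identical) to sqrt(2) (disjoint support)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

print(jeffreys_matusita([0.5, 0.5], [0.5, 0.5]))            # 0.0
print(round(jeffreys_matusita([1.0, 0.0], [0.0, 1.0]), 4))  # 1.4142
```

    The HSID similarity is then a weighted linear combination of such distances over color (and color-derivative) distributions, with weights tuned for retrieval performance.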

  8. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm.

    Science.gov (United States)

    Yang, Mengzhao; Song, Wei; Mei, Haibin

    2017-07-23

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved in not only storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval of massive ocean RS images via a Cloud-based mean-shift algorithm. Distributed construction method via the pyramid model is proposed based on the maximum hierarchical layer algorithm and used to realize efficient storage structure of RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method can achieve better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear changes with an increase of RS images, which proves that image retrieval using our method is efficient.
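    Mean-shift, the clustering core of the method above, iteratively moves each point to the mean of its neighbourhood until the cluster modes emerge; a minimal single-machine sketch with a flat kernel (the Cloud/MapReduce fusion with the canopy algorithm is not reproduced here):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50):
    """Shift each point to the mean of the data points lying within
    `bandwidth` of it (flat kernel), repeated for `iters` rounds."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            mask = np.linalg.norm(points - p, axis=1) <= bandwidth
            shifted[i] = points[mask].mean(axis=0)
    return shifted

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
modes = mean_shift(pts, bandwidth=1.0)
print(np.unique(np.round(modes, 2), axis=0))  # two cluster centres
```

    In the paper's setting, the same idea runs over Hadoop MapReduce, with the canopy algorithm providing cheap initial groupings to cut the neighbourhood searches.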

  9. Ontology of Gaps in Content-Based Image Retrieval

    OpenAIRE

    Deserno, Thomas M.; Antani, Sameer; Long, Rodney

    2008-01-01

    Content-based image retrieval (CBIR) is a promising technology to enrich the core functionality of picture archiving and communication systems (PACS). CBIR has a potential for making a strong impact in diagnostics, research, and education. Research as reported in the scientific literature, however, has not made significant inroads in the form of medical CBIR applications incorporated into routine clinical medicine or medical research. The cause is often attributed (without supporting analysis) to the ina...

  10. A visual perceptual descriptor with depth feature for image retrieval

    Science.gov (United States)

    Wang, Tianyang; Qin, Zhengrui

    2017-07-01

    This paper proposes a visual perceptual descriptor (VPD) and a new approach to extract a perceptual depth feature for 2D image retrieval. VPD mimics the human visual system, which can easily distinguish regions that have different textures, whereas for regions which have similar textures, color features are needed for further differentiation. We apply VPD on the gradient direction map of an image and capture texture-similar regions to generate a VPD map. We then impose the VPD map on a quantized color map and extract color features only from the overlapped regions. To reflect the nature of perceptual distance in a single 2D image, we propose and extract the perceptual depth feature by computing the nuclear norm of the sparse depth map of an image. The extracted color features and the perceptual depth feature are both incorporated into a feature vector; we utilize this vector to represent an image and measure similarity. We observe that the proposed VPD + depth method achieves promising results, and extensive experiments prove that it outperforms other typical methods on 2D image retrieval.
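    The perceptual depth feature above is the nuclear norm of a sparse depth map. Computing a nuclear norm is straightforward with an SVD; a sketch (the diagonal matrix is a toy stand-in for a real depth map):

```python
import numpy as np

def nuclear_norm(depth_map):
    """Nuclear norm (sum of singular values) of a 2-D array."""
    return np.linalg.svd(depth_map, compute_uv=False).sum()

# For a diagonal matrix the singular values are the |diagonal| entries.
d = np.diag([3.0, 2.0, 1.0])
print(nuclear_norm(d))  # 6.0
```

    This scalar is appended to the color features to form the final feature vector used for similarity measurement.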

  11. Efficient Retrieval of the Top-k Most Relevant Spatial Web Objects

    DEFF Research Database (Denmark)

    Cong, Gao; Jensen, Christian Søndergaard; Wu, Dingming

    2009-01-01

    The conventional Internet is acquiring a geo-spatial dimension. Web documents are being geo-tagged, and geo-referenced objects such as points of interest are being associated with descriptive text documents. The resulting fusion of geo-location and documents enables a new kind of top-k query...... that takes into account both location proximity and text relevancy. To our knowledge, only naive techniques exist that are capable of computing a general web information retrieval query while also taking location into account. This paper proposes a new indexing framework for location-aware top-k text...... both text relevancy and location proximity to prune the search space. Results of empirical studies with an implementation of the framework demonstrate that the paper’s proposal offers scalability and is capable of excellent performance....
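    A location-aware top-k query of this kind combines text relevancy with location proximity in one score; a naive sketch without any index pruning (all object data, the term-overlap relevancy, and the weighting are hypothetical illustrations, not the paper's framework):

```python
import math

def topk_spatial_text(query_loc, query_terms, objects, alpha=0.5, k=2):
    """Rank objects by a weighted sum of location proximity and text
    relevancy (here: fraction of query terms the description contains).
    alpha balances the two components."""
    def score(obj):
        loc, text = obj["loc"], obj["text"].lower()
        proximity = 1.0 / (1.0 + math.dist(query_loc, loc))
        relevancy = sum(t in text for t in query_terms) / len(query_terms)
        return alpha * proximity + (1 - alpha) * relevancy
    return sorted(objects, key=score, reverse=True)[:k]

objects = [
    {"name": "cafe A", "loc": (0.0, 0.0), "text": "espresso coffee bar"},
    {"name": "cafe B", "loc": (5.0, 5.0), "text": "coffee and cake"},
    {"name": "gym C", "loc": (0.5, 0.0), "text": "fitness equipment"},
]
top = topk_spatial_text((0.0, 0.0), ["coffee"], objects, k=2)
print([o["name"] for o in top])  # ['cafe A', 'cafe B']
```

    The paper's contribution is an index that prunes this search using both components at once, rather than scoring every object as done here.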

  12. Interactive classification and content-based retrieval of tissue images

    Science.gov (United States)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues at the pixel, region, and image levels. Pixel-level features are generated using unsupervised clustering of color and texture values. Region-level features include shape information and statistics of pixel-level feature values. Image-level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  13. Brief communication: 3-D reconstruction of a collapsed rock pillar from Web-retrieved images and terrestrial lidar data - the 2005 event of the west face of the Drus (Mont Blanc massif)

    Science.gov (United States)

    Guerin, Antoine; Abellán, Antonio; Matasci, Battista; Jaboyedoff, Michel; Derron, Marc-Henri; Ravanel, Ludovic

    2017-07-01

    In June 2005, a series of major rockfall events completely wiped out the Bonatti Pillar located in the legendary Drus west face (Mont Blanc massif, France). Terrestrial lidar scans of the west face were acquired after this event, but no pre-event point cloud is available. Thus, in order to reconstruct the volume and the shape of the collapsed blocks, a 3-D model has been built using photogrammetry (structure-from-motion (SfM) algorithms) based on 30 pictures collected on the Web. All these pictures were taken between September 2003 and May 2005. We then reconstructed the shape and volume of the fallen compartment by comparing the SfM model with terrestrial lidar data acquired in October 2005 and November 2011. The volume is calculated at 292 680 m3 (±5.6 %). This result is close to the value previously assessed by Ravanel and Deline (2008) for this same rock avalanche (265 000 ± 10 000 m3). The difference between these two estimations can be explained by the rounded shape of the volume determined by photogrammetry, which may lead to a volume overestimation. However, it cannot be excluded that the volume calculated by Ravanel and Deline (2008) is slightly underestimated, since the thickness of the blocks was assessed manually from historical photographs.

  14. The application of similar image retrieval in electronic commerce.

    Science.gov (United States)

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platforms (OSPs), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccurate search results. To solve these problems, we investigate the application of similar-image retrieval in electronic commerce. Aiming to improve customers' experience and provide merchants with accurate advertising, we design a reasonable and extensible electronic commerce application system comprising three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, on which consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can be used to provide accurate Internet marketing for enterprises. Experiments show the efficiency of the constructed system.

  15. The Application of Similar Image Retrieval in Electronic Commerce

    Directory of Open Access Journals (Sweden)

    YuPing Hu

    2014-01-01

    Traditional online shopping platforms (OSPs), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccurate search results. To solve these problems, we investigate the application of similar-image retrieval in electronic commerce. Aiming to improve customers' experience and provide merchants with accurate advertising, we design a reasonable and extensible electronic commerce application system comprising three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, on which consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can be used to provide accurate Internet marketing for enterprises. Experiments show the efficiency of the constructed system.

  16. The Application of Similar Image Retrieval in Electronic Commerce

    Science.gov (United States)

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platforms (OSPs), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccurate search results. To solve these problems, we investigate the application of similar-image retrieval in electronic commerce. Aiming to improve customers' experience and provide merchants with accurate advertising, we design a reasonable and extensible electronic commerce application system comprising three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, on which consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can be used to provide accurate Internet marketing for enterprises. Experiments show the efficiency of the constructed system. PMID:24883411

  17. WFIRST: Retrieval Studies of Directly Imaged Extrasolar Giant Planets

    Science.gov (United States)

    Marley, Mark; Lupu, Roxana; Lewis, Nikole K.; WFIRST Coronagraph SITs

    2018-01-01

    The typical direct imaging and spectroscopy target for the WFIRST Coronagraph will be a mature Jupiter-mass giant planet at a few AU from an FGK star. The spectra of such planets are expected to be shaped primarily by scattering from H2O clouds and absorption by gaseous NH3 and CH4. We have computed forward-model spectra of such typical planets and applied noise models to understand the quality of photometry and spectra we can expect. Using such simulated datasets, we have conducted Markov chain Monte Carlo and MultiNest retrievals to derive the atmospheric abundance of CH4, cloud scattering properties, gravity, and other parameters for various planets and observing modes. Our focus has primarily been to understand which combinations of photometry and spectroscopy, at what SNR, allow retrievals of atmospheric methane mixing ratios to within a factor of ten of the true value. This is a challenging task for directly imaged planets, as the planet mass and radius--and thus surface gravity--are not as well constrained as in the case of transiting planets. We find that for plausible planets and datasets of the quality expected to be obtained by WFIRST, it should be possible to place such constraints, at least for some planets. We present some examples of our retrieval results and explain how they have been utilized to help set design requirements on the coronagraph camera and integral field spectrometer.
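
    The retrieval machinery referenced in this record (MCMC over a forward model) can be illustrated with a deliberately tiny stand-in: a Metropolis random walk inferring one scalar parameter from noisy synthetic data. This is a sketch only; the actual WFIRST retrievals use full radiative-transfer forward models and samplers such as MultiNest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "retrieval": infer one abundance-like parameter x from noisy
# observations y = x + noise, via a Metropolis random walk.
true_x = 2.0
sigma = 0.1
y = true_x + sigma * rng.standard_normal(50)

def log_like(x):
    """Gaussian log-likelihood (up to a constant) of parameter x."""
    return -0.5 * np.sum((y - x) ** 2) / sigma**2

x = 0.0          # deliberately poor starting point
chain = []
for _ in range(5000):
    proposal = x + 0.05 * rng.standard_normal()
    # Accept with probability min(1, L(proposal)/L(x)).
    if np.log(rng.random()) < log_like(proposal) - log_like(x):
        x = proposal
    chain.append(x)

posterior_mean = float(np.mean(chain[1000:]))  # discard burn-in
```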

  18. Content-based histopathology image retrieval using CometCloud.

    Science.gov (United States)

    Qi, Xin; Wang, Daihou; Rodero, Ivan; Diaz-Montes, Javier; Gensure, Rebekah H; Xing, Fuyong; Zhong, Hua; Goodell, Lauri; Parashar, Manish; Foran, David J; Yang, Lin

    2014-08-26

    The development of digital imaging technology is creating extraordinary levels of accuracy that support improved reliability in different aspects of image analysis, such as content-based image retrieval, image segmentation, and classification. This has dramatically increased the volume and rate at which data are generated. Together, these facts make querying and sharing non-trivial and render centralized solutions infeasible. Moreover, in many cases these data are distributed and must be shared across multiple institutions, requiring decentralized solutions. In this context, a new generation of data/information-driven applications must be developed to take advantage of the national advanced cyber-infrastructure (ACI), which enables investigators to seamlessly and securely interact with information/data distributed across geographically disparate resources. This paper presents the development and evaluation of a novel content-based image retrieval (CBIR) framework. The methods were tested extensively using both peripheral blood smears and renal glomeruli specimens. The datasets and performance were evaluated by two pathologists to determine the concordance. The CBIR algorithms that were developed can reliably retrieve candidate image patches exhibiting intensity and morphological characteristics most similar to a given query image. The methods described in this paper are able to reliably discriminate among subtle staining differences and spatial pattern distributions. By integrating a newly developed dual-similarity relevance feedback module into the CBIR framework, the CBIR results were improved substantially. By aggregating the computational power of high-performance computing (HPC) and cloud resources, we demonstrated that the method can be executed in minutes on the cloud, compared to weeks using standard computers. In this paper, we present a set of newly developed CBIR algorithms and validate them using two

  19. Memory versus logic: two models of organizing information and their influences on web retrieval strategies

    Directory of Open Access Journals (Sweden)

    Teresa Numerico

    2008-07-01

    We can find the first anticipation of the World Wide Web's hypertextual structure in Bush's 1945 paper, where he described a "selection" and storage machine called the Memex, capable of keeping a user's useful information and connecting it to other relevant material present in the machine or added by other users. We will argue that Vannevar Bush conceived this type of machine because of his involvement with analogue devices. During the 1930s, in fact, he invented and built the Differential Analyzer, a powerful analogue machine used to calculate various relevant mathematical functions. The model of the Memex is not the digital one, because it relies on another form of data representation, one that emulates the procedures of memory more than the logic used by the intellect. Memory seems to select and arrange information according to association strategies, i.e., using analogies and connections that are very often arbitrary, sometimes even chaotic and completely subjective. The organization of information and the knowledge-creation process suggested by logic and the symbolic formal representation of data is deeply different from the former, though the logic approach is at the core of the birth of computer science (i.e., the Turing machine and the von Neumann machine). We will discuss the issues raised by these two "visions" of information management and the influence of the philosophical tradition of the theory of knowledge on the hypertextual organization of content. We will also analyze the consequences of these different attitudes with respect to information retrieval techniques in a hypertextual environment such as the Web. Our position is that it is necessary to take into account the nature and the dynamic social topology of the network when we choose information retrieval methods for it; otherwise, we risk creating a misleading service for the end user of Web search tools (i.e., search engines).

  20. Video retrieval by still-image analysis with ImageMiner

    Science.gov (United States)

    Kreyss, Jutta; Roeper, M.; Alshuth, Peter; Hermes, Thorsten; Herzog, Otthein

    1997-01-01

    The large amount of available multimedia information (e.g. videos, audio, images) requires efficient and effective annotation and retrieval methods. As videos start playing a more important role in multimedia, we want to make them available for content-based retrieval. The ImageMiner system, which was developed at the University of Bremen in the AI group, is designed for content-based retrieval of single images by a new combination of techniques and methods from computer vision and artificial intelligence. In our approach to making videos available for retrieval in a large database of videos and images, there are two necessary steps: first, the detection and extraction of shots from a video, which is done by a histogram-based method, and second, the composition of the separate frames of a shot into one single still image. This is performed by a mosaicing technique. The resulting mosaiced image gives a one-image visualization of the shot and can be analyzed by the ImageMiner system. ImageMiner has been tested on several domains (e.g. landscape images, technical drawings) which cover a wide range of applications.
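
    The histogram-based shot detection mentioned here can be sketched as follows: a cut is declared wherever the distance between successive frame histograms exceeds a threshold. This is a minimal stand-in, not the ImageMiner implementation; the bin count and threshold are illustrative choices.

```python
import numpy as np

def shot_boundaries(frames, bins=16, threshold=0.5):
    """Detect shot cuts via L1 distance between successive gray histograms.

    frames: iterable of 2-D uint8 arrays.
    Returns the frame indices where a new shot starts.
    """
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()  # normalize so the distance is in [0, 2]
        if prev is not None and np.abs(hist - prev).sum() > threshold:
            cuts.append(i)
        prev = hist
    return cuts

dark = np.zeros((8, 8), dtype=np.uint8)
light = np.full((8, 8), 200, dtype=np.uint8)
print(shot_boundaries([dark, dark, light, light]))  # [2]
```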

  1. Rotation invariant deep binary hashing for fast image retrieval

    Science.gov (United States)

    Dai, Lai; Liu, Jianming; Jiang, Aiwen

    2017-07-01

    In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised rotation-invariant compact discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer that represents latent concepts dominating the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and of its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
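
    Once binary codes are learned, fast retrieval reduces to ranking database codes by Hamming distance to the query code. A minimal sketch of that lookup stage (the learned hashing network itself is not reproduced; the codes below are toy data):

```python
import numpy as np

def hamming_rank(query: np.ndarray, codes: np.ndarray) -> np.ndarray:
    """Rank database binary codes by Hamming distance to the query.

    query: (n_bits,) array of 0/1; codes: (n, n_bits) array of 0/1.
    Returns database indices, nearest first (stable on ties).
    """
    distances = np.count_nonzero(codes != query, axis=1)
    return np.argsort(distances, kind="stable")

codes = np.array([[0, 1, 1, 0],
                  [1, 1, 1, 0],
                  [0, 0, 0, 1]])
query = np.array([0, 1, 1, 1])
print(hamming_rank(query, codes))  # distances are 1, 2, 2
```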

  2. OntoTrader: An Ontological Web Trading Agent Approach for Environmental Information Retrieval

    Directory of Open Access Journals (Sweden)

    Luis Iribarne

    2014-01-01

    Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. Design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent, and the behavioral framework for the SOLERES OntoTrader agent, an Environmental Management Information System (EMIS). This framework implements a "Query-Searching/Recovering-Response" information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models and describes a formalization, an experimental testing environment in three scenarios, and a tool which allows our proposal to be evaluated and validated.

  3. Intelligent retrieval of chest X-ray image database using sketches

    International Nuclear Information System (INIS)

    Hasegawa, Jun-ichi; Okada, Noritake; Toriwaki, Jun-ichiro

    1988-01-01

    This paper presents further experiments on intelligent retrieval in our chest X-ray image database system using 'sketches'. First, vertical-location-invariant thresholding and shape-oriented smoothing are newly developed for the previous sketch-extraction procedure, to improve the precision of the lung borders and rib images in each sketch, respectively. Then, two new methods for image retrieval using sketches, (1) image-description retrieval and (2) pattern-matching retrieval, are proposed. For each method, a procedure for understanding picture queries input through a sketch is described in detail. (author)

  4. Relevance Feedback in Content Based Image Retrieval: A Review

    Directory of Open Access Journals (Sweden)

    Manesh B. Kokare

    2011-01-01

    This paper provides an overview of the technical achievements in the research area of relevance feedback (RF) in content-based image retrieval (CBIR). Relevance feedback is a powerful technique for effectively improving the performance of CBIR systems. Reducing the semantic gap between low-level features and high-level concepts remains an open research area. The paper covers the current state of the art of research on relevance feedback in CBIR; various relevance feedback techniques and issues in relevance feedback are discussed in detail.
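
    A classic relevance-feedback update of the kind surveyed in such reviews is the Rocchio rule, which moves the query feature vector toward images the user marked relevant and away from those marked non-relevant. A minimal sketch (the weights alpha, beta, gamma are conventional defaults, not values taken from this review):

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One round of Rocchio-style relevance feedback on feature vectors."""
    new_query = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        new_query = new_query + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        new_query = new_query - gamma * np.mean(nonrelevant, axis=0)
    return new_query

q0 = np.array([1.0, 0.0])            # initial query features
relevant = np.array([[1.0, 1.0]])    # user-marked relevant image
nonrelevant = np.array([[0.0, 1.0]]) # user-marked non-relevant image
q1 = rocchio(q0, relevant, nonrelevant)
print(q1)  # [1.75 0.6]
```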

  5. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  6. A single-image method of aberration retrieval for imaging systems under partially coherent illumination

    International Nuclear Information System (INIS)

    Xu, Shuang; Liu, Shiyuan; Zhang, Chuanwei; Wei, Haiqing

    2014-01-01

    We propose a method for retrieving small lens aberrations in optical imaging systems under partially coherent illumination, which requires measuring only a single defocused intensity image. By deriving a linear theory of imaging systems, we obtain a generalized formulation of aberration sensitivity in matrix form, which provides a set of analytic kernels that relate the measured intensity distribution directly to the unknown Zernike coefficients. A sensitivity analysis is performed and test patterns are optimized to ensure well-posedness of the inverse problem. Optical lithography simulations have validated the theoretical derivation and confirmed its simplicity and superior performance in retrieving small lens aberrations. (fast track communication)
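
    Because the formulation above is linear, retrieval reduces to inverting a sensitivity matrix that maps Zernike coefficients to measured intensity deviations. A toy least-squares version (the sensitivity matrix here is a random stand-in for the analytic kernels, which are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear forward model: intensity deviation = S @ z, where S holds the
# sensitivity kernels (random stand-ins here) and z the unknown Zernike
# coefficients. Small aberrations are then retrieved by least squares.
n_pixels, n_zernike = 200, 5
S = rng.standard_normal((n_pixels, n_zernike))
z_true = np.array([0.02, -0.01, 0.005, 0.0, 0.015])
intensity = S @ z_true + 1e-4 * rng.standard_normal(n_pixels)  # add noise

z_hat, *_ = np.linalg.lstsq(S, intensity, rcond=None)
```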

  7. An Implementation of Semantic Web System for Information retrieval using J2EE Technologies.

    OpenAIRE

    B.Hemanth kumar,; Prof. M.Surendra Prasad Babu

    2011-01-01

    Accessing web resources (information) is an essential facility provided by web applications to everybody. The Semantic Web is one of the systems that provide a facility to access resources through web service applications. The Semantic Web and Web services are emerging web-based technologies. An automatic information processing system can be developed by using the Semantic Web and web services, each having its own contribution within the context of developing web-based information systems and ap...

  8. Web-Scale Discovery Services Retrieve Relevant Results in Health Sciences Topics Including MEDLINE Content

    Directory of Open Access Journals (Sweden)

    Elizabeth Margaret Stovold

    2017-06-01

    A Review of: Hanneke, R., & O’Brien, K. K. (2016). Comparison of three web-scale discovery services for health sciences research. Journal of the Medical Library Association, 104(2), 109-117. http://dx.doi.org/10.3163/1536-5050.104.2.004 Abstract Objective – To compare the results of health sciences search queries in three web-scale discovery (WSD) services for relevance, duplicate detection, and retrieval of MEDLINE content. Design – Comparative evaluation and bibliometric study. Setting – Six university libraries in the United States of America. Subjects – Three commercial WSD services: Primo, Summon, and EBSCO Discovery Service (EDS). Methods – The authors collected data at six universities, including their own. They tested each of the three WSDs at two data collection sites. However, since one of the sites was using a legacy version of Summon that was due to be upgraded, data collected for Summon at this site were considered obsolete and excluded from the analysis. The authors generated three questions for each of six major health disciplines, then designed simple keyword searches to mimic typical student search behaviours. They captured the first 20 results from each query run at each test site, to represent the first “page” of results, giving a total of 2,086 search results. These were independently assessed for relevance to the topic. The authors resolved disagreements by discussion and calculated a kappa inter-observer score. They retained duplicate records within the results so that duplicate detection by the WSDs could be compared. They assessed MEDLINE coverage by the WSDs in several ways. Using precise strategies to generate a relevant set of articles, they conducted one search from each of the six disciplines in PubMed so that they could compare retrieval of MEDLINE content. These results were cross-checked against the first 20 results from the corresponding query in the WSDs. To aid investigation of overall
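
    The kappa inter-observer score mentioned in the methods is Cohen's kappa, which corrects raw agreement between two raters for agreement expected by chance. A minimal sketch for binary relevance judgements (the rating data below are toy values, not the study's):

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two raters' binary (0/1) judgements of the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n   # raw agreement
    pa1 = sum(a) / n                                    # rater A's rate of 1s
    pb1 = sum(b) / n                                    # rater B's rate of 1s
    expected = pa1 * pb1 + (1 - pa1) * (1 - pb1)        # chance agreement
    return (observed - expected) / (1 - expected)

rater1 = [1, 1, 0, 0, 1, 0]
rater2 = [1, 0, 0, 0, 1, 0]
print(round(cohen_kappa(rater1, rater2), 3))  # 0.667
```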

  9. Model-based magnetization retrieval from holographic phase images

    Energy Technology Data Exchange (ETDEWEB)

    Röder, Falk, E-mail: f.roeder@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Vogel, Karin [Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Wolf, Daniel [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Hellwig, Olav [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); AG Magnetische Funktionsmaterialien, Institut für Physik, Technische Universität Chemnitz, D-09126 Chemnitz (Germany); HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wee, Sung Hun [HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wicht, Sebastian; Rellinghaus, Bernd [IFW Dresden, Institute for Metallic Materials, P.O. Box 270116, D-01171 Dresden (Germany)

    2017-05-15

    The phase shift of the electron wave is a useful measure of the projected magnetic flux density of magnetic objects at the nanometer scale. More important for materials science, however, is knowledge of the magnetization in a magnetic nano-structure. As demonstrated here, a dominating presence of stray fields prohibits a direct interpretation of the phase in terms of magnetization modulus and direction. We therefore present a model-based approach for retrieving the magnetization by considering the projected shape of the nano-structure and assuming a homogeneous magnetization therein. We apply this method to FePt nano-islands epitaxially grown on a SrTiO{sub 3} substrate, which indicates an inclination of their magnetization direction relative to the structural easy magnetic [001] axis. By means of this real-world example, we discuss the prospects and limits of this approach. - Highlights: • Retrieval of the magnetization from holographic phase images. • Magnetostatic model constructed for a magnetic nano-structure. • Decomposition into homogeneously magnetized components. • Discretization of each component by elementary cuboids. • Analytic solution for the phase of a magnetized cuboid considered. • Fitting a set of magnetization vectors to experimental phase images.

  10. ImageGrouper: a group-oriented user interface for content-based image retrieval and digital image arrangement

    NARCIS (Netherlands)

    Nakazato, Munehiro; Manola, L.; Huang, Thomas S.

    In content-based image retrieval (CBIR), experimental (trial-and-error) query with relevance feedback is essential for successful retrieval. Unfortunately, the traditional user interfaces are not suitable for trying different combinations of query examples. This is because first, these systems

  11. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  12. Integrating Web Services into Map Image Applications

    National Research Council Canada - National Science Library

    Tu, Shengru

    2003-01-01

    Web services have been opening a wide avenue for software integration. In this paper, we have reported our experiments with three applications that are built by utilizing and providing web services for Geographic Information Systems (GIS...

  13. Coupled binary embedding for large-scale image retrieval.

    Science.gov (United States)

    Zheng, Liang; Wang, Shengjin; Tian, Qi

    2014-08-01

    Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated into our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when the global color feature is integrated, our method yields competitive performance with the state of the art.

  14. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    This study provided insight into the significance of the open Web as an information resource and of Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey, which included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, giving a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles to using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed, and the slow downloading speed of webpages.

  15. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-12-01

    This study provided insight into the significance of the open Web as an information resource and of Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey, which included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, giving a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles to using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed, and the slow downloading speed of webpages.

  16. Large-Scale Partial-Duplicate Image Retrieval and Its Applications

    Science.gov (United States)

    2016-04-23

    Final Report (23-Jan-2012 to 22-Jan-2016): Large-Scale Partial-Duplicate Image Retrieval and Its Applications. For tree-based image retrieval, a semantic-aware co-indexing algorithm is proposed to jointly embed two strong cues into the inverted indexes: 1) local...

  17. Design Guidelines for a Content-Based Image Retrieval Color-Selection Interface

    NARCIS (Netherlands)

    Eggen, Berry; van den Broek, Egon; van der Veer, Gerrit C.; Kisters, Peter M.F.; Willems, Rob; Vuurpijl, Louis G.

    2004-01-01

    In Content-Based Image Retrieval (CBIR) two query methods exist: query-by-example and query-by-memory. The user either selects an example image or selects image features retrieved from memory (such as color, texture, spatial attributes, and shape) to define the query. Hitherto, research on CBIR

  18. Content-based multimedia retrieval: indexing and diversification

    NARCIS (Netherlands)

    van Leuken, R.H.

    2009-01-01

    The demand for efficient systems that facilitate searching in multimedia databases and collections is vastly increasing. Application domains include criminology, musicology, trademark registration, medicine and image or video retrieval on the web. This thesis discusses content-based retrieval

  19. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, Bayes' theorem is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
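The discrete-text step above — the naive Bayes rule for estimating the probability that a text belongs to a class — can be sketched with a minimal generic classifier (Laplace smoothing; the class names and token lists below are placeholders, not the authors' feature set):

```python
import math
from collections import Counter

def train_nb(docs_by_class):
    """Estimate log priors and per-word log likelihoods with Laplace smoothing.
    docs_by_class maps a class label to a list of tokenized documents."""
    vocab = {w for docs in docs_by_class.values() for d in docs for w in d}
    total_docs = sum(len(docs) for docs in docs_by_class.values())
    model = {}
    for cls, docs in docs_by_class.items():
        counts = Counter(w for d in docs for w in d)
        n = sum(counts.values())
        log_prior = math.log(len(docs) / total_docs)
        log_like = {w: math.log((counts[w] + 1) / (n + len(vocab))) for w in vocab}
        model[cls] = (log_prior, log_like)
    return model

def classify(model, words):
    """Return the class with the highest (unnormalized) log posterior."""
    def score(cls):
        log_prior, log_like = model[cls]
        return log_prior + sum(log_like.get(w, 0.0) for w in words)
    return max(model, key=score)
```

In the paper's setting the classes would be pornographic vs. benign discrete texts and the tokens would come from page content; here both are stand-ins.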

  20. Domainwise Web Page Optimization Based On Clustered Query Sessions Using Hybrid Of Trust And ACO For Effective Information Retrieval

    Directory of Open Access Journals (Sweden)

    Dr. Suruchi Chawla

    2015-08-01

    Full Text Available Abstract In this paper, a hybrid of Ant Colony Optimization (ACO) and trust has been used for domain-wise web page optimization in clustered query sessions for effective information retrieval. The trust of a web page identifies its degree of relevance in satisfying the specific information need of the user. The trusted web pages, when optimized using pheromone updates in ACO, identify trusted colonies of web pages relevant to the user's information need in a given domain. Hence, in this paper the hybrid of trust and ACO has been used on clustered query sessions to identify more and more relevant documents in a given domain in order to better satisfy the information need of the user. An experiment was conducted on a data set of web query sessions to test the effectiveness of the proposed approach in three selected domains (Academics, Entertainment, and Sports), and the results confirm the improvement in the precision of search results.

  1. Ontology of gaps in content-based image retrieval.

    Science.gov (United States)

    Deserno, Thomas M; Antani, Sameer; Long, Rodney

    2009-04-01

    Content-based image retrieval (CBIR) is a promising technology to enrich the core functionality of picture archiving and communication systems (PACS). CBIR has a potential for making a strong impact in diagnostics, research, and education. However, research as reported in the scientific literature has not yet translated into medical CBIR applications incorporated into routine clinical medicine or medical research. The cause is often attributed (without supporting analysis) to the inability of these applications to overcome the "semantic gap." The semantic gap divides the high-level scene understanding and interpretation available with human cognitive capabilities from the low-level pixel analysis of computers, based on mathematical processing and artificial intelligence methods. In this paper, we suggest a more systematic and comprehensive view of the concept of "gaps" in medical CBIR research. In particular, we define an ontology of 14 gaps that addresses the image content and features, as well as system performance and usability. In addition to these gaps, we identify seven system characteristics that impact CBIR applicability and performance. The framework we have created can be used a posteriori to compare medical CBIR systems and approaches for specific biomedical image domains and goals, and a priori during the design phase of a medical CBIR application, as the systematic analysis of gaps provides detailed insight into system comparison and helps to direct future research.

  2. Content-based image retrieval applied to bone age assessment

    Science.gov (United States)

    Fischer, Benedikt; Brosig, André; Welter, Petra; Grouls, Christoph; Günther, Rolf W.; Deserno, Thomas M.

    2010-03-01

    Radiological bone age assessment is based on local image regions of interest (ROI), such as the epiphysis or the area of carpal bones. These are compared to a standardized reference and scores determining the skeletal maturity are calculated. For computer-aided diagnosis, automatic ROI extraction and analysis is done so far mainly by heuristic approaches. Due to high variations in the imaged biological material and differences in age, gender and ethnic origin, automatic analysis is difficult and frequently requires manual interactions. On the contrary, epiphyseal regions (eROIs) can be compared to previous cases with known age by content-based image retrieval (CBIR). This requires a sufficient number of cases with reliable positioning of the eROI centers. In this first approach to bone age assessment by CBIR, we conduct leave-one-out experiments on 1,102 left hand radiographs and 15,428 metacarpal and phalangeal eROIs from the USC hand atlas. The similarity of the eROIs is assessed by cross-correlation of 16x16 scaled eROIs. The effects of the number of eROIs, two age computation methods as well as the number of considered CBIR references are analyzed. The best results yield an error rate of 1.16 years and a standard deviation of 0.85 years. As the appearance of the hand varies naturally by up to two years, these results clearly demonstrate the applicability of the CBIR approach for bone age estimation.
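The similarity measure above — cross-correlation of scaled eROIs — can be sketched as a normalized cross-correlation over small grayscale patches (an illustrative pure-Python version; the patch contents below are made up, and the 16x16 scaling step is omitted):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size grayscale patches
    (lists of rows); 1.0 means identical up to brightness/contrast scaling."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def most_similar(query, references):
    """Index of the reference patch most similar to the query,
    i.e. the CBIR lookup against previously diagnosed cases."""
    return max(range(len(references)), key=lambda i: ncc(query, references[i]))
```

In the paper's pipeline the age of the retrieved references would then be combined into an age estimate; that step is not reproduced here.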

  3. INTEGRATION OF SPATIAL INFORMATION WITH COLOR FOR CONTENT RETRIEVAL OF REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    Bikesh Kumar Singh

    2010-08-01

    Full Text Available There has been a rapid increase in image databases of remote sensing images in the last few years, due to high-resolution imaging satellites, commercial applications of remote sensing, and high available bandwidth. The problem of content-based image retrieval (CBIR) of remotely sensed images presents a major challenge not only because of the rapidly increasing volume of images acquired from a wide range of sensors but also because of the complexity of the images themselves. In this paper, a software system for content-based retrieval of remote sensing images using RGB and HSV color spaces is presented. Further, we also compare our results with spatiogram-based content retrieval, which integrates spatial information along with the color histogram. Experimental results show that the integration of spatial information with color improves the image analysis of remote sensing data. In general, retrievals in HSV color space showed better performance than in RGB color space.

  4. Combining textual and visual information for image retrieval in the medical domain.

    Science.gov (United States)

    Gkoufas, Yiannis; Morou, Anna; Kalamboukis, Theodore

    2011-01-01

    In this article we have assembled the experience obtained from our participation in the ImageCLEF evaluation task over the past two years. The use of linear combinations for image retrieval has been explored by combining visual and textual sources of images. From our experiments we conclude that a mixed retrieval technique that applies both textual and visual retrieval in an interchangeably repeated manner improves the performance while overcoming the scalability limitations of visual retrieval. In particular, the mean average precision (MAP) has increased from 0.01 to 0.15 and 0.087 for 2009 and 2010 data, respectively, when content-based image retrieval (CBIR) is performed on the top 1000 results from textual retrieval based on natural language processing (NLP).
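The text/visual combination described above can be sketched as a linear score fusion restricted to the top-ranked textual results (a generic illustration; the equal weighting and the cutoff value are assumptions, not the authors' tuning):

```python
def fuse_scores(text_scores, visual_scores, alpha=0.5):
    """Linear combination of per-document textual and visual relevance
    scores; missing scores default to 0, alpha weights the textual side."""
    docs = set(text_scores) | set(visual_scores)
    return {d: alpha * text_scores.get(d, 0.0)
               + (1 - alpha) * visual_scores.get(d, 0.0) for d in docs}

def rerank_top_k(text_scores, visual_scores, k, alpha=0.5):
    """Mixed retrieval: keep the top-k documents from textual retrieval,
    then reorder only those by the fused score. Restricting CBIR to the
    textual top-k is what sidesteps visual retrieval's scalability limit."""
    top = sorted(text_scores, key=text_scores.get, reverse=True)[:k]
    fused = fuse_scores({d: text_scores[d] for d in top},
                        {d: visual_scores.get(d, 0.0) for d in top}, alpha)
    return sorted(top, key=fused.get, reverse=True)
```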

  5. Blind phase retrieval for aberrated linear shift-invariant imaging systems

    International Nuclear Information System (INIS)

    Yu, Rotha P; Paganin, David M

    2010-01-01

    We develop a means to reconstruct an input complex coherent scalar wavefield, given a through focal series (TFS) of three intensity images output from a two-dimensional (2D) linear shift-invariant optical imaging system with unknown aberrations. This blind phase retrieval technique unites two methods, namely (i) TFS phase retrieval and (ii) iterative blind deconvolution. The efficacy of our blind phase retrieval procedure has been demonstrated using simulated data, for a variety of Poisson noise levels.

  6. Design and development of semantic web-based system for computer science domain-specific information retrieval

    Directory of Open Access Journals (Sweden)

    Ritika Bansal

    2016-09-01

    Full Text Available In a semantic web-based system, the concept of ontology is used to search results by the contextual meaning of the input query instead of keyword matching. From the research literature, there seems to be a need for a tool which can provide an easy interface for complex queries in natural language and can retrieve domain-specific information from the ontology. This research paper proposes an IRSCSD system (Information Retrieval System for Computer Science Domain) as a solution. This system offers advanced querying and browsing of structured data with search results automatically aggregated and rendered directly in a consistent user interface, thus reducing the manual effort of users. So, the main objective of this research is the design and development of a semantic web-based system integrating ontology towards domain-specific retrieval support. The methodology followed is piecemeal research involving the following stages. The first stage involves the design of a framework for the semantic web-based system. The second stage builds the prototype for the framework using the Protégé tool. The third stage deals with natural language query conversion into the SPARQL query language using the Python-based QUEPY framework. The fourth stage involves firing the converted SPARQL queries at the ontology through Apache's Jena API to fetch the results. Lastly, evaluation of the prototype has been done in order to ensure its efficiency and usability. Thus, this research paper throws light on framework development for a semantic web-based system that assists in efficient retrieval of domain-specific information, natural language query interpretation into a semantic web language, and creation of a domain-specific ontology and its mapping with related ontologies. This research paper also provides approaches and metrics for ontology evaluation, applied to the prototype ontology developed, to study the performance based on accessibility of the required domain-related information.
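The natural-language-to-SPARQL stage can be illustrated with a toy template matcher (the pattern and the `dc:` vocabulary below are entirely hypothetical; real tools such as QUEPY use richer grammars than a single regex):

```python
import re

def nl_to_sparql(question, templates):
    """Toy natural-language-to-SPARQL conversion: try regex templates in
    order and fill the first matching SPARQL skeleton with the captures."""
    for pattern, skeleton in templates:
        m = re.match(pattern, question, re.IGNORECASE)
        if m:
            return skeleton.format(*m.groups())
    return None  # no template understood the question

# One hypothetical template using a made-up dc: vocabulary.
TEMPLATES = [
    (r"who (?:wrote|authored) (.+)\?$",
     'SELECT ?author WHERE {{ ?paper dc:title "{0}" . '
     '?paper dc:creator ?author }}'),
]
```

The resulting query string would then be handed to a SPARQL endpoint (Apache Jena in the paper's architecture).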

  7. Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.

    Science.gov (United States)

    Wang, James Z.; Du, Yanping

    Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…

  8. Learning effective color features for content based image retrieval in dermatology

    NARCIS (Netherlands)

    Bunte, Kerstin; Biehl, Michael; Jonkman, Marcel F.; Petkov, Nicolai

    We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn

  9. A Fast, Background-Independent Retrieval Strategy for Color Image Databases

    National Research Council Canada - National Science Library

    Das, M; Draper, B. A; Lim, W. J; Manmatha, R; Riseman, E. M

    1996-01-01

    We describe an interactive, multi-phase color-based image retrieval system which is capable of identifying query objects specified by the user in an image in the presence of significant, interfering backgrounds...

  10. Face Image Retrieval of Efficient Sparse Code words and Multiple Attribute in Binning Image

    Directory of Open Access Journals (Sweden)

    Suchitra S

    2017-08-01

    Full Text Available ABSTRACT In photography, face recognition and face retrieval play an important role in many applications such as security, criminology and image forensics. Advancements in face recognition make identity matching of an individual with attributes easier. The latest developments in computer vision technologies enable us to extract facial attributes from the input image and provide similar image results. In this paper, we propose a novel LOP and sparse codewords method to provide similar matching results with respect to the input query image. To improve the accuracy of image results for an input image and dynamic facial attributes, the Local Octal Pattern algorithm (LOP) and sparse codewords are applied offline and online. Sparse codewords are applied in both the offline and online face image binning procedures. Experimental results with the PubFig dataset show that the proposed LOP along with sparse codewords is able to provide matching results with an increased accuracy of 90%.

  11. An intelligent framework for medical image retrieval using MDCT and multi SVM.

    Science.gov (United States)

    Balan, J A Alex Rajju; Rajan, S Edward

    2014-01-01

    Volumes of medical images are rapidly generated in the medical field, and managing them effectively has become a great challenge. This paper studies the development of innovative medical image retrieval based on texture features and accuracy. The objective of the paper is to analyze image retrieval for diagnosis in healthcare management systems. This paper traces the development of innovative medical image retrieval to estimate both image texture features and accuracy. The texture features of medical images are extracted using MDCT and multi SVM. Both the theoretical approach and the simulation results revealed interesting observations, and they were corroborated using MDCT coefficients and the SVM methodology. All attempts to extract data about the image in response to the query have been computed successfully, and perfect image retrieval performance has been obtained. Experimental results on a database of 100 trademark medical images show that an integrated texture feature representation results in 98% of the images being retrieved using MDCT and multi SVM. Thus we have studied a multiclassification technique based on SVM which is well suited for medical images. The results show retrieval accuracies of 98% and 99% for different sets of medical images with respect to the class of image.

  12. Socializing the Semantic Gap: A Comparative Survey on Image Tag Assignment, Refinement and Retrieval

    NARCIS (Netherlands)

    Li, X.; Uricchio, T.; Ballan, L.; Bertini, M.; Snoek, C.G.M.; Del Bimbo, A.

    2016-01-01

    Where previous reviews on content-based image retrieval emphasize what can be seen in an image to bridge the semantic gap, this survey considers what people tag about an image. A comprehensive treatise of three closely linked problems (i.e., image tag assignment, refinement, and tag-based image

  13. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    Science.gov (United States)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperforms the state-of-the-art methods by about 13, 15, and 15%, respectively.
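The robust Hausdorff distance used above to diminish background visual words can be sketched as follows: the classic directed distance takes the maximum of the point-to-set minimum distances, and a common robust (partial) variant takes a quantile instead (the quantile form here is a generic illustration, not necessarily the exact metric of the paper):

```python
import math

def directed_hausdorff(A, B, frac=1.0):
    """Directed (partial) Hausdorff distance from point set A to point set B.
    frac=1.0 gives the classic max of the point-to-set minimum distances;
    frac < 1 takes that quantile instead, a common robust variant that
    tolerates outlier points."""
    mins = sorted(min(math.dist(a, b) for b in B) for a in A)  # Python 3.8+
    return mins[max(0, math.ceil(frac * len(mins)) - 1)]

def hausdorff(A, B, frac=1.0):
    """Symmetric (partial) Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B, frac), directed_hausdorff(B, A, frac))
```

In an image-matching setting, A and B would be feature locations from two images; the robust variant keeps a few unmatched (e.g. background) points from dominating the distance.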

  14. Content-Based Image Retrieval Based on Electromagnetism-Like Mechanism

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2013-01-01

    Full Text Available Recently, many researchers in the field of automatic content-based image retrieval have devoted a remarkable amount of research looking for methods to retrieve the best relevant images to the query image. This paper presents a novel algorithm for increasing the precision in content-based image retrieval based on electromagnetism optimization technique. The electromagnetism optimization is a nature-inspired technique that follows the collective attraction-repulsion mechanism by considering each image as an electrical charge. The algorithm is composed of two phases: fitness function measurement and electromagnetism optimization technique. It is implemented on a database with 8,000 images spread across 80 classes with 100 images in each class. Eight thousand queries are fired on the database, and the overall average precision is computed. Experimental results of the proposed approach have shown significant improvement in the retrieval performance in regard to precision.
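The attraction-repulsion mechanism can be illustrated with a toy one-dimensional step: each candidate moves toward better-scoring candidates and away from worse ones, with force decaying with distance (a deliberately simplified sketch; the published electromagnetism-like method derives charge values from normalized objective values and operates on image-similarity fitness, neither of which is modeled here):

```python
def em_step(points, f, step=0.1):
    """One attraction-repulsion step of an electromagnetism-like heuristic
    on 1-D candidates: each point is attracted toward points with lower f
    and repelled from points with higher f, force decaying as 1/distance."""
    new = []
    for p in points:
        force = 0.0
        for q in points:
            if q == p:
                continue  # skip self (and coincident points)
            direction = 1.0 if q > p else -1.0
            magnitude = 1.0 / abs(q - p)
            # attraction toward better points, repulsion from worse ones
            force += direction * magnitude if f(q) < f(p) else -direction * magnitude
        new.append(p + step * force)
    return new
```

Iterating `em_step` drives the population toward low-f regions, which in the paper's setting corresponds to images most relevant to the query.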

  15. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly. Various techniques are being developed to retrieve/search digital information or data contained in the image. The traditional Text Based Image Retrieval system is not sufficient, since it is time consuming and requires manual image annotation; moreover, image annotations differ between people. An alternative to this is the Content Based Image Retrieval (CBIR) system. It retrieves/searches for an image using its contents rather than text, keywords etc. A lot of exploration has been carried out in the area of Content Based Image Retrieval (CBIR) with various feature extraction techniques. Shape is a significant image feature as it reflects human perception. Moreover, shape is quite simple for the user to use to define an object in an image as compared to other features such as color, texture etc. Over and above, if applied alone, no descriptor will give fruitful results. Further, by combining it with an improved classifier, one can use the positive features of both the descriptor and the classifier. So, an attempt will be made to establish an algorithm for accurate feature (shape) extraction in Content Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state-of-the-art techniques.

  16. SIRW: A web server for the Simple Indexing and Retrieval System that combines sequence motif searches with keyword searches.

    Science.gov (United States)

    Ramu, Chenna

    2003-07-01

    SIRW (http://sirw.embl.de/) is a World Wide Web interface to the Simple Indexing and Retrieval System (SIR) that is capable of parsing and indexing various flat file databases. In addition it provides a framework for doing sequence analysis (e.g. motif pattern searches) for selected biological sequences through keyword search. SIRW is an ideal tool for the bioinformatics community for searching as well as analyzing biological sequences of interest.

  17. Robustness of phase retrieval methods in x-ray phase contrast imaging: A comparison

    International Nuclear Information System (INIS)

    Yan, Aimin; Wu, Xizeng; Liu, Hong

    2011-01-01

    Purpose: The robustness of the phase retrieval methods is of critical importance for limiting and reducing radiation doses involved in x-ray phase contrast imaging. This work compares the robustness of two phase retrieval methods by analyzing the phase maps retrieved from experimental images of a phantom. Methods: Two phase retrieval methods were compared. One method is based on the transport of intensity equation (TIE) for phase contrast projections, and the TIE-based method is the most commonly used method for phase retrieval in the literature. The other is the recently developed attenuation-partition based (AP-based) phase retrieval method. The authors applied these two methods to experimental projection images of an air-bubble wrap phantom for retrieving the phase map of the bubble wrap. The retrieved phase maps obtained by using the two methods are compared. Results: In the wrap's phase map retrieved by using the TIE-based method, no bubble is recognizable; hence, this method failed completely for phase retrieval from these bubble wrap images. Even with the help of the Tikhonov regularization, the bubbles are still hardly visible and buried in the cluttered background in the retrieved phase map. The retrieved phase values with this method are grossly erroneous. In contrast, in the wrap's phase map retrieved by using the AP-based method, the bubbles are clearly recovered. The retrieved phase values with the AP-based method are reasonably close to the estimate based on the thickness-based measurement. The authors traced these stark performance differences of the two methods to the different techniques they employ to deal with the singularity problem involved in the phase retrievals. Conclusions: This comparison shows that the conventional TIE-based phase retrieval method, regardless of whether Tikhonov regularization is used or not, is unstable against the noise in the wrap's projection images, while the AP-based phase retrieval method is shown in these

  18. W-transform method for feature-oriented multiresolution image retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Kwong, M.K.; Lin, B. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1995-07-01

    Image database management is important in the development of multimedia technology, since an enormous amount of digital images is likely to be generated within the next few decades as computers, television, VCRs, cable, telephones and various imaging devices are integrated. Effective image indexing and retrieval systems are urgently needed so that images can be easily organized, searched, transmitted, and presented. Here, the authors present a local-feature-oriented image indexing and retrieval method based on Kwong and Tang's W-transform. Multiresolution histogram comparison is an effective method for content-based image indexing and retrieval. However, most recent approaches perform multiresolution analysis on whole images but do not exploit the local features present in the images. Since the W-transform is featured by its ability to handle images of arbitrary size, with no periodicity assumptions, it provides a natural tool for analyzing local image features and building indexing systems based on such features. In this approach, the histograms of the local features of images are used in the indexing system. The system not only can retrieve images that are similar or identical to the query images but also can retrieve images that contain features specified in the query images, even if the retrieved images as a whole might be very different from the query images. The local-feature-oriented method also provides a speed advantage over the global multiresolution histogram comparison method. The feature-oriented approach is expected to be applicable in managing large-scale image systems such as video databases and medical image databases.
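The multiresolution histogram comparison that this method builds on can be sketched with an averaging pyramid and histogram intersection (a generic illustration on tiny grayscale grids; the W-transform itself and the local-feature indexing are not reproduced here):

```python
def histogram(values, bins=4, lo=0, hi=256):
    """Normalized histogram of pixel values in [lo, hi)."""
    h = [0] * bins
    for v in values:
        h[min(bins - 1, (v - lo) * bins // (hi - lo))] += 1
    total = sum(h)
    return [c / total for c in h]

def downsample(img):
    """Halve resolution by averaging 2x2 blocks (one pyramid level)."""
    return [[(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) // 4
             for j in range(0, len(img[0]) - 1, 2)]
            for i in range(0, len(img) - 1, 2)]

def multires_similarity(a, b, levels=2):
    """Average histogram intersection across pyramid levels: 1.0 for
    identical images, smaller for images with different value distributions."""
    score = 0.0
    for _ in range(levels):
        ha = histogram([v for row in a for v in row])
        hb = histogram([v for row in b for v in row])
        score += sum(min(x, y) for x, y in zip(ha, hb))
        a, b = downsample(a), downsample(b)
    return score / levels
```

A local-feature-oriented variant would compute such histograms over detected regions rather than whole images.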

  19. LUNARINFO: A Data Archiving and Retrieving System for the Circumlunar Explorer Based on XML/Web Services

    Institute of Scientific and Technical Information of China (English)

    ZUO Wei; LI Chunlai; OUYANG Ziyuan; LIU Jianjun; XU Tao

    2004-01-01

    It is essential to build a modern information management system to store and manage data from our circumlunar explorer in order to realize its scientific objectives. It is difficult for an information system based on traditional distributed technology to communicate information and work together among heterogeneous systems so as to meet the new requirements of Internet development. XML and Web Services, because of their open standards and self-contained properties, have changed the mode of information organization and data management. They can now provide a good solution for building an open, extendable, and compatible information management system, and facilitate the interchange and transfer of data among heterogeneous systems. On the basis of a three-tiered browser/server architecture and the Oracle 9i database as the information storage platform, we have designed and implemented a data archiving and retrieval system for the circumlunar explorer, LUNARINFO. We have also successfully realized the integration between LUNARINFO and the cosmic dust database system. LUNARINFO consists of five function modules for data management, information publishing, system management, data retrieval, and interface integration. Based on XML and Web Services, it not only is an information database system for archiving, long-term storage, retrieval and publication of lunar reference data related to the circumlunar explorer, but also provides data web services which can be easily developed by various expert groups and connected to the common information system to realize data resource integration.

  20. Scipion web tools: Easy to use cryo-EM image processing over the web.

    Science.gov (United States)

    Conesa Mingo, Pablo; Gutierrez, José; Quintana, Adrián; de la Rosa Trevín, José Miguel; Zaldívar-Peraza, Airén; Cuenca Alba, Jesús; Kazemi, Mohsen; Vargas, Javier; Del Cano, Laura; Segura, Joan; Sorzano, Carlos Oscar S; Carazo, Jose María

    2018-01-01

    Macromolecular structural determination by Electron Microscopy under cryogenic conditions is revolutionizing the field of structural biology, attracting a large community of potential users. Still, the path from raw images to density maps is complex, and sophisticated image processing suites are required in this process, often demanding the installation and understanding of different software packages. Here, we present Scipion Web Tools, a web-based set of tools/workflows derived from the Scipion image processing framework, specially tailored to nonexpert users in need of very precise answers at several key stages of the structural elucidation process. © 2017 The Protein Society.

  1. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    Science.gov (United States)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, corrosion and expansion operations are implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, and the CGI not only reduces the data amount of the ciphertext, but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.

  2. Retrieving clinically relevant diabetic retinopathy images using a multi-class multiple-instance framework

    Science.gov (United States)

    Chandakkar, Parag S.; Venkatesan, Ragav; Li, Baoxin

    2013-02-01

    Diabetic retinopathy (DR) is a vision-threatening complication of diabetes mellitus, a medical condition that is rising globally. Unfortunately, many patients are unaware of this complication because of the absence of symptoms. Regular screening for DR is necessary to detect the condition for timely treatment. Content-based image retrieval, using archived and diagnosed fundus (retinal) camera DR images, can improve the screening efficiency of DR. This content-based image retrieval study focuses on two DR clinical findings, microaneurysm and neovascularization, which are clinical signs of non-proliferative and proliferative diabetic retinopathy. The authors propose a multi-class multiple-instance image retrieval framework which deploys a modified color correlogram and statistics of steerable Gaussian filter responses for retrieving clinically relevant images from a DR fundus image database.
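The color correlogram feature mentioned above can be illustrated on a tiny quantized image: for each color and distance, it records the probability that a pixel at that distance from a pixel of the color shares the color (a plain auto-correlogram sketch, not the authors' modified version):

```python
def color_correlogram(img, distances=(1,)):
    """Auto-correlogram of a quantized image (list of rows of color labels):
    for each color c and distance d, the probability that a pixel at
    L-infinity (chessboard) distance d from a pixel of color c is also c."""
    h, w = len(img), len(img[0])
    colors = {v for row in img for v in row}
    gram = {}
    for c in colors:
        for d in distances:
            same = total = 0
            for i in range(h):
                for j in range(w):
                    if img[i][j] != c:
                        continue
                    # walk the ring of pixels at chessboard distance d
                    for di in range(-d, d + 1):
                        for dj in range(-d, d + 1):
                            if max(abs(di), abs(dj)) != d:
                                continue
                            y, x = i + di, j + dj
                            if 0 <= y < h and 0 <= x < w:
                                total += 1
                                same += img[y][x] == c
            gram[(c, d)] = same / total if total else 0.0
    return gram
```

Retrieval then compares the correlogram vectors of the query and the archived images with any standard distance.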

  3. Evaluation of web-based annotation of ophthalmic images for multicentric clinical trials.

    Science.gov (United States)

    Chalam, K V; Jain, P; Shah, V A; Shah, Gaurav Y

    2006-06-01

    An Internet browser-based annotation system can be used to identify and describe features in digitized retinal images, in multicentric clinical trials, in real time. In this web-based annotation system, the user employs a mouse to draw and create annotations on a transparent layer that encapsulates the observations and interpretations of a specific image. Multiple annotation layers may be overlaid on a single image. These layers may correspond to annotations by different users on the same image or annotations of a temporal sequence of images of a disease process, over a period of time. In addition, geometrical properties of annotated figures may be computed and measured. The annotations are stored in a central repository database on a server, which can be retrieved by multiple users in real time. This system facilitates objective evaluation of digital images and comparison of double-blind readings of digital photographs, with an identifiable audit trail. Annotation of ophthalmic images allowed clinically feasible and useful interpretation to track properties of an area of fundus pathology. This provided an objective method to monitor properties of pathologies over time, an essential component of multicentric clinical trials. The annotation system also allowed users to view stereoscopic images that are stereo pairs. This web-based annotation system is useful and valuable in monitoring patient care, in multicentric clinical trials, telemedicine, teaching and routine clinical settings.

  4. Implementation and evaluation of a medical image management system with content-based retrieval support

    International Nuclear Information System (INIS)

    Carita, Edilson Carlos; Seraphim, Enzo; Honda, Marcelo Ossamu; Azevedo-Marques, Paulo Mazzoncini de

    2008-01-01

    Objective: the present paper describes the implementation and evaluation of a medical image management system with content-based retrieval support (PACS-CBIR) integrating modules for image acquisition, storage and distribution, text retrieval by keyword, and image retrieval by similarity. Materials and methods: internet-compatible technologies were used for the system implementation, with free software and the C++, PHP and Java languages on a Linux platform. There is a DICOM-compatible image management module and two query modules, one based on text and the other on similarity of image texture attributes. Results: the results demonstrate appropriate image management and storage, and an image retrieval time, always < 15 s, that users found to be good. The evaluation of retrieval by similarity demonstrated that the selected image feature extractor allowed the sorting of images according to anatomical areas. Conclusion: based on these results, one can conclude that the PACS-CBIR implementation is feasible. The system proved to be DICOM-compatible and capable of integration with the local information system. The similar-image retrieval functionality can be enhanced by the introduction of further descriptors. (author)

  5. Efficient Image Blur in Web-Based Applications

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Scripting languages require the use of high-level library functions to implement efficient image processing; thus, real-time image blur in web-based applications is a challenging task unless specific library functions are available for this purpose. We present a pyramid blur algorithm, which can ...
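Although the record above is truncated, the pyramid approach to blur can be sketched generically: repeatedly downsample by box averaging, then upsample, which approximates a wide blur at a fraction of the cost of a large convolution kernel. This is an illustrative reconstruction of the general technique, not Kraus's algorithm:

```python
def downsample(img):
    """Halve each dimension by averaging 2x2 blocks (box filter + decimation)."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

def upsample(img):
    """Double each dimension by pixel replication (a real renderer would interpolate)."""
    return [[img[y // 2][x // 2] for x in range(2 * len(img[0]))]
            for y in range(2 * len(img))]

def pyramid_blur(img, levels):
    """Approximate a wide blur: descend the pyramid, then climb back up."""
    small = img
    for _ in range(levels):
        small = downsample(small)
    for _ in range(levels):
        small = upsample(small)
    return small
```

Each downsample level quarters the pixel count, so the total work is bounded by a small constant times the image size, regardless of the effective blur radius.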

  6. Indexing, learning and content-based retrieval for special purpose image databases

    NARCIS (Netherlands)

    M.J. Huiskes (Mark); E.J. Pauwels (Eric)

    2005-01-01

    textabstractThis chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current

  7. SIP: A Web-Based Astronomical Image Processing Program

    Science.gov (United States)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computer. Or, the instructor can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry, or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the most common denominator image file, the FITS format.

  8. Implementation of Texture Based Image Retrieval Using M-band Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    Liao Ya-li; Yang Yan; Cao Yang

    2003-01-01

    The wavelet transform has attracted attention because it is a very useful tool for signal analysis. As a fundamental characteristic of an image, texture traits play an important role in the human visual system's recognition and interpretation of images. The paper presents an approach to texture-based image retrieval using the M-band wavelet transform. First, the traditional 2-band wavelet is extended to the M-band wavelet transform. Then the wavelet moments are computed from the M-band wavelet coefficients in the wavelet domain. The set of wavelet moments forms the feature vector describing the texture distribution of each image, and the distances between feature vectors describe the similarities of different images. The experimental results show that the M-band wavelet moment features of the images are effective for image indexing. The retrieval method has lower computational complexity, yet it is capable of giving better retrieval performance for a given medical image database.
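The wavelet-moment idea can be illustrated with the simplest member of the family, a one-level 2-band (Haar) decomposition; extending the filter bank to M bands is the paper's contribution, and the function names here are illustrative only:

```python
def haar2d(img):
    """One-level orthonormal 2-D Haar transform -> (LL, LH, HL, HH) subbands."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a, b = img[2*y][2*x], img[2*y][2*x+1]
            c, d = img[2*y+1][2*x], img[2*y+1][2*x+1]
            LL[y][x] = (a + b + c + d) / 2.0   # coarse approximation
            LH[y][x] = (a - b + c - d) / 2.0   # horizontal detail
            HL[y][x] = (a + b - c - d) / 2.0   # vertical detail
            HH[y][x] = (a - b - c + d) / 2.0   # diagonal detail
    return LL, LH, HL, HH

def texture_feature(img):
    """Feature vector = mean absolute coefficient (first moment) per subband."""
    return [sum(abs(v) for row in sb for v in row) / (len(sb) * len(sb[0]))
            for sb in haar2d(img)]
```

Ranking images by the Euclidean distance between such per-subband moment vectors is the retrieval step the abstract describes.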

  9. Low-dose multiple-information retrieval algorithm for X-ray grating-based imaging

    International Nuclear Information System (INIS)

    Wang Zhentian; Huang Zhifeng; Chen Zhiqiang; Zhang Li; Jiang Xiaolei; Kang Kejun; Yin Hongxia; Wang Zhenchang; Stampanoni, Marco

    2011-01-01

    The present work proposes a low-dose information retrieval algorithm for the X-ray grating-based multiple-information imaging (GB-MII) method, which can retrieve the attenuation, refraction and scattering information of samples from only three images. The algorithm aims at reducing the exposure time and the dose delivered to the sample. The multiple-information retrieval problem in GB-MII is solved by transforming a set of nonlinear equations into linear ones, exploiting the properties of the trigonometric functions. The proposed algorithm is validated by experiments on both a conventional X-ray source and a synchrotron X-ray source, and compared with the traditional multiple-image-based retrieval algorithm. The experimental results show that our algorithm is comparable with the traditional retrieval algorithm and is especially suitable for high signal-to-noise systems.
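The trigonometric linearization the abstract alludes to can be illustrated with the standard three-step phase-stepping solution, where each intensity follows I_k = a + b*cos(phi + 2*pi*k/3); summing the intensities against cos and sin of the step phases isolates a, b and phi linearly. This is a generic sketch of phase stepping, not necessarily the authors' exact formulation, and `retrieve` is a hypothetical name:

```python
import math

def retrieve(I):
    """Recover mean intensity a (attenuation), modulation b (visibility /
    scattering proxy) and phase phi (refraction) from phase-stepping
    intensities I[k] = a + b*cos(phi + 2*pi*k/n)."""
    n = len(I)
    a = sum(I) / n
    # Projections onto cos/sin of the step phases:
    # Sc = (n/2)*b*cos(phi), Ss = -(n/2)*b*sin(phi)
    Sc = sum(Ik * math.cos(2 * math.pi * k / n) for k, Ik in enumerate(I))
    Ss = sum(Ik * math.sin(2 * math.pi * k / n) for k, Ik in enumerate(I))
    b = 2.0 / n * math.hypot(Sc, Ss)
    phi = math.atan2(-Ss, Sc)
    return a, b, phi
```

With exactly three steps the linear system is fully determined, which is why three exposures suffice in principle.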

  10. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    Science.gov (United States)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-06-01

    The medical field has seen phenomenal improvement in recent years. The advent of computers, together with increases in processing power and internet speed, has changed the face of medical technology, yet there is still scope for improving the technologies in use today. One such area of medical aid is the detection of afflictions of the eye. Although a large body of research exists in this field, most of it fails to address how to take detection forward to a stage where it benefits society at large: an automated system that can assess a patient's current medical condition from a fundus image of the eye is yet to see the light of day. Such a system is explored in this paper by summarizing a number of techniques for fundus image feature extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval to develop an automation tool. This is essential in cases where patients may not have access to the best technology. The paper offers a comprehensive summary of techniques for Content Based Image Retrieval (CBIR) and fundus image feature extraction, a few choice methods of both, and an exploration of ways to combine the two so that the result is beneficial to all.

  12. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Manyu Xiao

    2013-01-01

    Full Text Available Content-based image retrieval is nowadays one of the possible and promising solutions for managing image databases effectively. However, with the large number of images, there still exists a great discrepancy between the users’ expectations (accuracy and efficiency) and the real performance of image retrieval. In this work, new optimization strategies are proposed for vocabulary tree building, retrieval, and matching methods. More precisely, a new clustering strategy combining classification and the conventional K-means method is first introduced. Then a new matching technique is built to eliminate the error caused by the large-scale scale-invariant feature transform (SIFT). Additionally, a new unit mechanism is proposed to reduce the cost of indexing time. Finally, the numerical results show that excellent performance is obtained in both accuracy and efficiency with the proposed improvements for image retrieval.

  13. The Potential of User Feedback Through the Iterative Refining of Queries in an Image Retrieval System

    NARCIS (Netherlands)

    Ben Moussa, Maher; Pasch, Marco; Hiemstra, Djoerd; van der Vet, P.E.; Huibers, Theo W.C.; Marchand-Maillet, Stephane; Bruno, Eric; Nürnberger, Andreas; Detyniecki, Marcin

    2007-01-01

    Inaccurate or ambiguous expressions in queries lead to poor results in information retrieval. We assume that iterative user feedback can improve the quality of queries. To this end we developed a system for image retrieval that utilizes user feedback to refine the user’s search query. This is done

  14. Joint Textual And Visual Cues For Retrieving Images Using Latent Semantic Indexing

    OpenAIRE

    Pecenovic, Zoran; Ayer, Serge; Vetterli, Martin

    2001-01-01

    In this article we present a novel approach to integrating textual and visual descriptors of images in a unified retrieval structure. The methodology, inspired by text retrieval and information filtering, is based on Latent Semantic Indexing (LSI).
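The textual half of such a system can be illustrated with plain Latent Semantic Indexing over a toy term-document matrix. In this sketch (data and variable names are invented), the SVD places documents that share vocabulary close together in the latent space even when they share no term exactly:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = image captions/documents.
# Documents 0 and 2 share "tiger"/"stripes" vocabulary; 1 and 3 share "ocean"/"wave".
A = np.array([
    [2.0, 0.0, 1.0, 0.0],   # tiger
    [1.0, 0.0, 2.0, 0.0],   # stripes
    [0.0, 2.0, 0.0, 1.0],   # ocean
    [0.0, 1.0, 0.0, 2.0],   # wave
])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                  # rank of the latent semantic space
docs = (np.diag(s[:k]) @ Vt[:k]).T     # one latent vector per document

def cosine(a, b):
    """Cosine similarity between two latent document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In a joint text-and-visual system, rows of the matrix would mix term counts with visual descriptors, but the projection and similarity machinery stays the same.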

  15. Using Fuzzy SOM Strategy for Satellite Image Retrieval and Information Mining

    Directory of Open Access Journals (Sweden)

    Yo-Ping Huang

    2008-02-01

    Full Text Available This paper proposes an efficient satellite image retrieval and knowledge discovery model. The strategy comprises two major parts. First, a computational algorithm is used for off-line satellite image feature extraction, image data representation and image retrieval. Low-level features are automatically extracted from the segmented regions of satellite images. A self-organizing feature map is used to construct a two-layer satellite image concept hierarchy: the events are stored in one layer and the corresponding feature vectors are categorized in the other. Second, a user-friendly interface is provided that retrieves images of interest and mines useful information based on the events in the concept hierarchy. The proposed system is evaluated on prominent features such as typhoons and high-pressure masses.
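A self-organizing feature map of the kind used above can be sketched minimally for scalar features; real systems map high-dimensional region descriptors onto a 2-D grid, but the update rule is the same. Function name, decay schedules and constants here are illustrative, not from the paper:

```python
import math
import random

def train_som(data, n_units, epochs=50, lr0=0.5, sigma0=None):
    """Minimal 1-D self-organizing map for scalar features."""
    random.seed(0)                                # deterministic init for the sketch
    if sigma0 is None:
        sigma0 = n_units / 2.0
    w = [random.random() for _ in range(n_units)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                   # learning rate decays to ~0
        sigma = sigma0 * (1.0 - frac) + 1e-3      # neighbourhood radius shrinks
        for x in data:
            # best matching unit = closest weight to the sample
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                # neighbours of the BMU are pulled toward the sample too
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                w[i] += lr * h * (x - w[i])
    return w
```

After training, each unit of the map acts as a prototype; assigning images to their best matching unit yields the clusters that back the concept hierarchy.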

  16. Retrieval of bilingual autobiographical memories: effects of cue language and cue imageability.

    Science.gov (United States)

    Mortensen, Linda; Berntsen, Dorthe; Bohn, Ocke-Schwen

    2015-01-01

    An important issue in theories of bilingual autobiographical memory is whether linguistically encoded memories are represented in language-specific stores or in a common language-independent store. Previous research has found that autobiographical memory retrieval is facilitated when the language of the cue is the same as the language of encoding, consistent with language-specific memory stores. The present study examined whether this language congruency effect is influenced by cue imageability. Danish-English bilinguals retrieved autobiographical memories in response to Danish and English high- or low-imageability cues. Retrieval latencies were shorter for Danish than English cues and shorter for high- than low-imageability cues. Importantly, the cue language effect was stronger for low- than high-imageability cues. To examine the relationship between cue language and the language of internal retrieval, participants identified the language in which the memories were internally retrieved. More memories were retrieved when the cue language was the same as the internal language than when the cue was in the other language, and more memories were identified as being internally retrieved in Danish than English, regardless of the cue language. These results provide further evidence for language congruency effects in bilingual memory and suggest that this effect is influenced by cue imageability.

  17. A Fast, Background-Independent Retrieval Strategy for Color Image Databases

    National Research Council Canada - National Science Library

    Das, M; Draper, B. A; Lim, W. J; Manmatha, R; Riseman, E. M

    1996-01-01

    .... The method is fast and has low storage overhead. Good retrieval results are obtained with multi-colored query objects even when they occur in arbitrary sizes, rotations and locations in the database images...

  18. Medical Image Retrieval Based On the Parallelization of the Cluster Sampling Algorithm

    OpenAIRE

    Ali, Hesham Arafat; Attiya, Salah; El-henawy, Ibrahim

    2017-01-01

    In this paper we develop parallel cluster sampling algorithms and show that a multi-chain version is embarrassingly parallel and can be used efficiently for medical image retrieval among other applications.

  19. High resolution satellite image indexing and retrieval using SURF features and bag of visual words

    Science.gov (United States)

    Bouteldja, Samia; Kourgli, Assia

    2017-03-01

    In this paper, we evaluate the performance of SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as SIFT and SURF descriptors can deal with a large variation of scale, rotation and illumination of the images, providing, therefore, a better discriminative power and retrieval efficiency than global features, especially for HRSI which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve the retrieval accuracy, and we propose to learn a category-specific dictionary for each image category which results in a more discriminative image representation and boosts the image retrieval performance.
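The bag-of-visual-words pipeline referred to above can be sketched end to end: cluster local descriptors (SURF in the paper; plain 2-D points here) into a visual vocabulary, then represent each image as a normalized histogram of word assignments. This is a generic illustration with invented data and names, not the paper's category-specific dictionary learning:

```python
def kmeans(points, k, iters=10):
    """Naive k-means; initialised from the first k points for determinism."""
    centers = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # recompute each centre as the mean of its cluster (keep old if empty)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

def bovw_histogram(descriptors, centers):
    """Quantize descriptors to the nearest visual word and normalise the counts."""
    hist = [0] * len(centers)
    for d in descriptors:
        j = min(range(len(centers)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(d, centers[i])))
        hist[j] += 1
    total = sum(hist)
    return [h / total for h in hist]
```

Retrieval then reduces to comparing histograms, e.g. by histogram intersection or cosine similarity, which is far cheaper than matching raw descriptors pairwise.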

  20. A novel biomedical image indexing and retrieval system via deep preference learning.

    Science.gov (United States)

    Pang, Shuchao; Orgun, Mehmet A; Yu, Zhezhou

    2018-05-01

    The traditional biomedical image retrieval methods, as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images, either consider only pixel and low-level features to describe an image or use deep features to describe images, but still leave a lot of room for improving both accuracy and efficiency. In this work, we propose a new approach which exploits deep learning technology to extract high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to improved performance for indexing and retrieval of biomedical images. We exploit the currently popular multi-layered deep neural networks, namely stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN), to represent the discriminative features of biomedical images by transferring the feature representations and parameters of pre-trained deep neural networks from another domain. Moreover, in order to index all the images for finding similarly referenced images, we also introduce preference learning technology to train a preference model for the query image, which can output the similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology into biomedical image retrieval for the first time. We evaluate the performance of two powerful algorithms based on our proposed system and compare them with popular biomedical image indexing approaches and existing regular image retrieval methods in detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, the experimental results demonstrate that our proposed algorithms outperform the state-of-the-art.

  1. Large-Scale Query-by-Image Video Retrieval Using Bloom Filters

    OpenAIRE

    Araujo, Andre; Chaves, Jason; Lakshman, Haricharan; Angst, Roland; Girod, Bernd

    2016-01-01

    We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to ...
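The core data structure above, one Bloom filter per video segment holding that segment's quantized frame descriptors, can be sketched as follows. The class, sizes and string "visual word IDs" are illustrative stand-ins, not the paper's aggregation functions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m-bit array, k hash functions derived from SHA-256."""

    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        # No false negatives; false positives possible with small probability.
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

A query image's visual words are tested against each segment's filter; only segments that report (possible) membership for enough words are inspected frame by frame, which is what makes image-to-video search tractable at scale.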

  2. An improved ptychographical phase retrieval algorithm for diffractive imaging

    International Nuclear Information System (INIS)

    Maiden, Andrew M.; Rodenburg, John M.

    2009-01-01

    The ptychographical iterative engine (or PIE) is a recently developed phase retrieval algorithm that employs a series of diffraction patterns recorded as a known illumination function is translated to a set of overlapping positions relative to a target sample. The technique has been demonstrated successfully at optical and X-ray wavelengths and has been shown to be robust to detector noise and to converge considerably faster than support-based phase retrieval methods. In this paper, the PIE is extended so that the requirement for an accurate model of the illumination function is removed.

  3. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF.

    Directory of Open Access Journals (Sweden)

    Nouman Ali

    Full Text Available With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local feature representations are selected for image retrieval because SIFT is more robust to changes in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on the Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba, and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration.

  4. Diversification in an image retrieval system based on text and image processing

    Directory of Open Access Journals (Sweden)

    Adrian Iftene

    2014-11-01

    Full Text Available In this paper we present an image retrieval system created within the research project MUCKE (Multimedia and User Credibility Knowledge Extraction), a CHIST-ERA research project in which UAIC ("Alexandru Ioan Cuza" University of Iasi) is one of the partners, together with the Technical University of Vienna, Austria, the CEA-LIST Institute, Paris, France, and Bilkent University, Ankara, Turkey. Our discussion in this work focuses mainly on the components of the image retrieval system proposed in MUCKE, and we present the work done by the UAIC group. MUCKE incorporates modules for processing multimedia content in different modes and languages (English, French, German and Romanian), with UAIC responsible for the text processing tasks (for Romanian and English). One of the problems addressed by our work is search results diversification. To solve this problem, we first process the user queries in both languages and second, we create clusters of similar images.

  5. Design and development of a content-based medical image retrieval system for spine vertebrae irregularity.

    Science.gov (United States)

    Mustapha, Aouache; Hussain, Aini; Samad, Salina Abdul; Zulkifley, Mohd Asyraf; Diyana Wan Zaki, Wan Mimi; Hamid, Hamzaini Abdul

    2015-01-16

    A content-based medical image retrieval (CBMIR) system enables medical practitioners to perform fast diagnosis through quantitative assessment of the visual information of various modalities. In this paper, a more robust CBMIR system that deals with both cervical and lumbar vertebrae irregularity is presented. It comprises three main phases, namely modelling, indexing and retrieval of the vertebrae image. The main tasks in the modelling phase are to improve and enhance the visibility of the x-ray image for better segmentation results using the active shape model (ASM). The segmented vertebral fractures are then characterized in the indexing phase using region-based fracture characterization (RB-FC) and contour-based fracture characterization (CB-FC). Upon a query, the characterized features are compared to the query image. The effectiveness of the retrieval phase is determined by its retrieval accuracy; thus, we propose an integration of a predictor model based on a cross-validation neural network (PMCVNN) and similarity matching (SM) in this stage. The PMCVNN task is to identify the correct vertebral irregularity class through classification, allowing the SM process to be more efficient. Retrieval performance of the proposed and the standard retrieval architectures is then compared using retrieval precision (Pr@M) and average group score (AGS) measures. Experimental results show that the new integrated retrieval architecture performs better than the standard CBMIR architecture, with retrieval results on cervical (AGS > 87%) and lumbar (AGS > 82%) datasets. The proposed CBMIR architecture shows encouraging results with high Pr@M accuracy. As a result, images from the same visualization class are returned for further use by medical personnel.

  6. New nuclear data service at CNEA: retrieval of the update libraries from a local Web-Server

    International Nuclear Information System (INIS)

    Suarez, Patricia M.; Pepe, Maria E.; Sbaffoni, Maria M.

    2000-01-01

    A new on-line Nuclear Data Service was implemented on the National Atomic Energy Commission (CNEA) Web site. The information usually issued by the Nuclear Data Section of the IAEA (NDS-IAEA) on CD-ROM, as well as complementary libraries periodically downloaded from a mirror server of the NDS-IAEA Service located at IPEN, Brazil, is available on the new CNEA Web page. On the site, users can find numerical data on neutron, charged-particle, and photonuclear reactions, nuclear structure, and decay data, with related bibliographic information. This data server is permanently maintained and updated by CNEA staff members, who also offer assistance on the use and retrieval of nuclear data to local users. (author)

  7. SWORS: a system for the efficient retrieval of relevant spatial web objects

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.

    2012-01-01

    Spatial web objects that possess both a geographical location and a textual description are gaining in prevalence. This gives prominence to spatial keyword queries that exploit both location and textual arguments. Such queries are used in many web services such as yellow pages and maps services....

  8. Quantifying the margin sharpness of lesions on radiological images for content-based image retrieval

    International Nuclear Information System (INIS)

    Xu Jiajing; Napel, Sandy; Greenspan, Hayit; Beaulieu, Christopher F.; Agrawal, Neeraj; Rubin, Daniel

    2012-01-01

    . Equivalence across deformations was assessed using Schuirmann's paired two one-sided tests. Results: In simulated images, the concordance correlation between measured gradient and actual gradient was 0.994. The mean (s.d.) NDCG scores for the retrieval of K images, K = 5, 10, and 15, were 84% (8%), 85% (7%), and 85% (7%) for CT images containing liver lesions, and 82% (7%), 84% (6%), and 85% (4%) for CT images containing lung nodules, respectively. The authors’ proposed method outperformed the two existing margin characterization methods in average NDCG scores over all K, by 1.5% and 3% in datasets containing liver lesions, and by 4.5% and 5% in datasets containing lung nodules. Equivalence testing showed that the authors’ feature is more robust across all margin deformations (p < 0.05) than the two existing methods for margin sharpness characterization in both simulated and clinical datasets. Conclusions: The authors have described a new image feature to quantify the margin sharpness of lesions. It has strong correlation with known margin sharpness in simulated images and in clinical CT images containing liver lesions and lung nodules. This image feature has excellent performance for retrieving images with similar margin characteristics, suggesting potential utility, in conjunction with other lesion features, for content-based image retrieval applications.

  9. Radar Images of the Earth and the World Wide Web

    Science.gov (United States)

    Chapman, B.; Freeman, A.

    1995-01-01

    A perspective on NASA's Jet Propulsion Laboratory as a center of planetary exploration, and on its involvement in studying the Earth from space, is given. Remote sensing, radar maps, land topography, snow cover properties, vegetation type, biomass content, moisture levels, and ocean data are discussed in relation to Earth-orbiting satellite imaging radar. Viewing this content on the World Wide Web is also discussed.

  10. Pleasant/Unpleasant Filtering for Affective Image Retrieval Based on Cross-Correlation of EEG Features

    Directory of Open Access Journals (Sweden)

    Keranmu Xielifuguli

    2014-01-01

    Full Text Available People often make decisions based on sensitivity rather than rationality. In the field of biological information processing, methods are available for analyzing biological information directly, based on the electroencephalogram (EEG), to determine the pleasant/unpleasant reactions of users. In this study, we propose a sensitivity filtering technique for discriminating preferences (pleasant/unpleasant) for images using a sensitivity image filtering system based on EEG. Using a set of images retrieved by similarity retrieval, we perform sensitivity-based pleasant/unpleasant classification of images based on the affective features extracted from images with the maximum entropy method (MEM). In the present study, the affective features comprised cross-correlation features obtained from EEGs recorded while an individual observed an image. However, it is difficult to measure the EEG when a subject visualizes an unknown image. Thus, we propose a solution in which a linear regression method based on canonical correlation is used to estimate the cross-correlation features from image features. Experiments were conducted to evaluate the validity of sensitivity filtering compared with image similarity retrieval methods based on image features. We found that sensitivity filtering using color correlograms was suitable for the classification of preferred images, while sensitivity filtering using local binary patterns was suitable for the classification of unpleasant images. Moreover, sensitivity filtering using local binary patterns for unpleasant images had a 90% success rate. Thus, we conclude that the proposed method is efficient for filtering unpleasant images.

  11. Content-based image retrieval using a signature graph and a self-organizing map

    Directory of Open Access Journals (Sweden)

    Van Thanh The

    2016-06-01

    Full Text Available In order to effectively retrieve from a large database of images, a content-based image retrieval (CBIR) system is built on a binary index that describes the features of an image object of interest. This index is called the binary signature and forms the input data for the problem of matching similar images. To extract the object of interest, we propose an image segmentation method based on low-level visual features, including the color and texture of the image. These features are extracted for each block of the image by the discrete wavelet frame transform in an appropriate color space. On the basis of the segmented image, we create a binary signature describing the location, color and shape of the objects of interest. To match similar images, we provide a similarity measure between images based on their binary signatures. We then present a CBIR model which combines a signature graph and a self-organizing map to cluster and store similar images. To illustrate the proposed method, experiments on image databases are reported, including COREL, Wang and MSRDI.
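The binary-signature idea can be sketched in a few lines: threshold a feature histogram into a bit string and compare signatures by normalized Hamming similarity. This is a generic illustration of the principle, not the paper's signature construction; names and the threshold are invented:

```python
def binary_signature(hist, threshold=0.1):
    """Set bit i when feature bin i is 'present' (above the threshold) in the region."""
    return sum(1 << i for i, v in enumerate(hist) if v >= threshold)

def signature_similarity(s1, s2, nbits):
    """Normalized Hamming similarity between two binary signatures (1.0 = identical)."""
    return 1.0 - bin(s1 ^ s2).count("1") / nbits
```

Because each signature is a single integer, the XOR-and-popcount comparison is extremely cheap, which is what makes binary indexes attractive for large databases.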

  12. Large-scale retrieval for medical image analytics: A comprehensive review.

    Science.gov (United States)

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Retrieve polarization aberration from image degradation: a new measurement method in DUV lithography

    Science.gov (United States)

    Xiang, Zhongbo; Li, Yanqiu

    2017-10-01

    Detailed knowledge of the polarization aberration (PA) of the projection lens in high-NA DUV lithographic imaging is necessary due to its impact on imaging degradation, and precise measurement of PA is conducive to computational lithography techniques such as RET and OPC. Current in situ measurement methods of PA through the detection of degradations of aerial images need to apply a linear approximation and assume a 3-beam/2-beam interference condition. The former approximation neglects the coupling effect of the PA coefficients, which significantly influences the accuracy of PA retrieval. The latter assumption restricts the feasible pitch of test masks in high-NA systems, conflicts with the Kirchhoff diffraction model of the test mask used in the retrieval model, and introduces the 3D mask effect as a source of retrieval error. In this paper, a new in situ measurement method of PA is proposed. It establishes an analytical quadratic relation between the PA coefficients and the degradations of aerial images of one-dimensional dense lines under coherent illumination through vector aerial imaging, which relies on neither the 3-beam/2-beam interference assumption nor the linear approximation. In this case, the retrieval of PA from image degradation can be converted from a nonlinear system of m quadratic equations into a multi-objective quadratic optimization problem, and finally solved by the nonlinear least squares method. Preliminary simulation results are given to demonstrate the correctness and accuracy of the new PA retrieval model.
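The core idea of inverting a quadratic degradation model can be illustrated in one dimension. The quadratic q and the "measured" degradation below are invented for illustration; the paper solves a coupled multi-coefficient system by nonlinear least squares rather than a single scalar equation.

```python
# Hedged one-dimensional toy: image degradation d is modeled as a
# quadratic function q of a polarization-aberration coefficient c, and
# c is recovered by Newton iteration on q(c) - d = 0.

def q(c):
    return c * c + c          # hypothetical quadratic degradation model

def retrieve_coefficient(d, c0=1.0, iters=30):
    """Newton iteration on q(c) - d = 0."""
    c = c0
    for _ in range(iters):
        f = q(c) - d
        fprime = 2 * c + 1    # derivative of q
        c -= f / fprime
    return c

d_measured = q(2.0)           # synthetic "measured" degradation
print(round(retrieve_coefficient(d_measured), 6))  # recovers c = 2.0
```

With m coupled quadratic equations, the same residuals would be stacked and minimized jointly, which is exactly the least-squares formulation the abstract describes.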

  14. A novel 3D shape descriptor for automatic retrieval of anatomical structures from medical images

    Science.gov (United States)

    Nunes, Fátima L. S.; Bergamasco, Leila C. C.; Delmondes, Pedro H.; Valverde, Miguel A. G.; Jackowski, Marcel P.

    2017-03-01

    Content-based image retrieval (CBIR) aims at retrieving from a database objects that are similar to an object provided by a query, by taking into consideration a set of extracted features. While CBIR has been widely applied in the two-dimensional image domain, the retrieval of 3D objects from medical image datasets using CBIR remains to be explored. In this context, the development of descriptors that can capture information specific to organs or structures is desirable. In this work, we focus on the retrieval of two anatomical structures commonly imaged by Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) techniques, the left ventricle of the heart and blood vessels. Towards this aim, we developed the Area-Distance Local Descriptor (ADLD), a novel 3D local shape descriptor that employs mesh geometry information, namely facet area and distance from centroid to surface, to identify shape changes. Because ADLD only considers surface meshes extracted from volumetric medical images, it substantially diminishes the amount of data to be analyzed. A 90% precision rate was obtained when retrieving both convex (left ventricle) and non-convex structures (blood vessels), allowing for detection of abnormalities associated with changes in shape. Thus, ADLD has the potential to aid in the diagnosis of a wide range of vascular and cardiac diseases.
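A rough sketch in the spirit of ADLD: for each facet of a surface mesh, compute its area and the distance from the mesh centroid to the facet centroid, then summarize both as histograms. The binning, the global (rather than local) pooling, and the toy tetrahedron mesh are all simplifications of the actual descriptor.

```python
# Simplified area-distance mesh descriptor (hypothetical details).
import math

def facet_area(a, b, c):
    # half the magnitude of the cross product of two edge vectors
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1]*v[2] - u[2]*v[1]
    cy = u[2]*v[0] - u[0]*v[2]
    cz = u[0]*v[1] - u[1]*v[0]
    return 0.5 * math.sqrt(cx*cx + cy*cy + cz*cz)

def area_distance_descriptor(vertices, faces, bins=4):
    centroid = [sum(v[i] for v in vertices) / len(vertices) for i in range(3)]
    areas, dists = [], []
    for f in faces:
        a, b, c = (vertices[i] for i in f)
        areas.append(facet_area(a, b, c))
        fc = [(a[i] + b[i] + c[i]) / 3 for i in range(3)]
        dists.append(math.dist(centroid, fc))
    def hist(xs):
        lo, hi = min(xs), max(xs)
        h = [0] * bins
        for x in xs:
            k = min(bins - 1, int((x - lo) / (hi - lo + 1e-12) * bins))
            h[k] += 1
        return h
    return hist(areas) + hist(dists)

# a unit tetrahedron stands in for an anatomical surface mesh
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
desc = area_distance_descriptor(verts, faces)
print(desc, len(desc))
```

Because only the extracted surface mesh is touched, the data volume is tiny compared with the original volumetric scan, which is the efficiency argument the abstract makes.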

  15. Chinese Herbal Medicine Image Recognition and Retrieval by Convolutional Neural Network.

    Science.gov (United States)

    Sun, Xin; Qian, Huinan

    2016-01-01

    Chinese herbal medicine image recognition and retrieval have great potential for practical applications. Several previous studies have focused on recognition with hand-crafted image features, but they have two limitations. Firstly, most of these hand-crafted features are low-level image representations, which are easily affected by noise and background. Secondly, the medicine images used in these studies are very clean, without any background, which makes the methods difficult to use in practical applications. Therefore, designing high-level image representations for recognition and retrieval in real-world medicine images faces a great challenge. Inspired by the recent progress of deep learning in computer vision, we realize that deep learning methods may provide robust medicine image representations. In this paper, we propose to use a Convolutional Neural Network (CNN) for Chinese herbal medicine image recognition and retrieval. For the recognition problem, we use the softmax loss to optimize the recognition network; then, for the retrieval problem, we fine-tune the recognition network by adding a triplet loss to search for the most similar medicine images. To evaluate our method, we construct a public database of herbal medicine images with cluttered backgrounds, which has in total 5523 images from 95 popular Chinese medicine categories. Experimental results show that our method achieves an average recognition precision of 71% and an average retrieval precision of 53% over all 95 medicine categories, which is quite promising given that the real-world images have multiple occluded herbs and cluttered backgrounds. Besides, our proposed method achieves state-of-the-art performance, improving on previous studies by a large margin.
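The triplet loss used for the retrieval fine-tuning step can be sketched as follows: it pushes an anchor image's embedding closer to a positive (same medicine category) than to a negative, by at least a margin. The toy vectors below stand in for CNN embeddings.

```python
# Hedged triplet-loss sketch; embeddings are invented 2-D vectors.

def sqdist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # zero when the positive is already closer than the negative by the margin
    return max(0.0, sqdist(anchor, positive) - sqdist(anchor, negative) + margin)

anchor   = [0.0, 0.0]
positive = [0.1, 0.0]   # same category: close to the anchor
negative = [2.0, 0.0]   # different category: far away
print(triplet_loss(anchor, positive, negative))  # 0.0: triplet already satisfied
```

During training, the loss is backpropagated through the embedding network so that violating triplets (positive farther than negative) are gradually corrected.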

  16. Dual-force ISOMAP: a new relevance feedback method for medical image retrieval.

    Science.gov (United States)

    Shen, Hualei; Tao, Dacheng; Ma, Dianfu

    2013-01-01

    With great potential for assisting radiological image interpretation and decision making, content-based image retrieval in the medical domain has become a hot topic in recent years. Many methods to enhance the performance of content-based medical image retrieval have been proposed, among which the relevance feedback (RF) scheme is one of the most promising. Given user feedback information, RF algorithms interactively learn a user's preferences to bridge the "semantic gap" between low-level computerized visual features and high-level human semantic perception and thus improve retrieval performance. However, most existing RF algorithms perform in the original high-dimensional feature space and ignore the manifold structure of the low-level visual features of images. In this paper, we propose a new method, termed dual-force ISOMAP (DFISOMAP), for content-based medical image retrieval. Under the assumption that medical images lie on a low-dimensional manifold embedded in a high-dimensional ambient space, DFISOMAP operates in the following three stages. First, the geometric structure of positive examples in the learned low-dimensional embedding is preserved according to the isometric feature mapping (ISOMAP) criterion. To precisely model the geometric structure, a reconstruction error constraint is also added. Second, the average distance between positive and negative examples is maximized to separate them; this margin maximization acts as a force that pushes negative examples far away from positive examples. Finally, the similarity propagation technique is utilized to provide negative examples with another force that will pull them back into the negative sample set. We evaluate the proposed method on a subset of the IRMA medical image dataset with an RF-based medical image retrieval framework. Experimental results show that DFISOMAP outperforms popular approaches for content-based medical image retrieval in terms of accuracy and stability.

  17. The iMars WebGIS - Spatio-Temporal Data Queries and Single Image Map Web Services

    Science.gov (United States)

    Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Muller, Jan-Peter; van Gasselt, Stephan; Sidiropoulos, Panagiotis; Lanz-Kroechert, Julia

    2017-04-01

    Server backend, which in turn delivers the response back to the MapCache instance. Web frontend: We have implemented a web-GIS frontend based on various OpenLayers components. The basemap is a global color-hillshaded HRSC bundle-adjusted DTM mosaic with a resolution of 50 m per pixel. The new bundle-block-adjusted quadrangle mosaics of the MC-11 quadrangle, both image and DTM, are included with opacity slider options. The layer user interface has been adapted on the basis of the ol3-layerswitcher and extended by foldable and switchable groups, layer sorting (by resolution, by time and alphabetically) and reordering (drag-and-drop). A collapsible time panel accommodates a time slider interface where the user can filter the visible data by a range of Mars or Earth dates and/or by solar longitudes. The visualisation of time series of single images is controlled by a specific toolbar enabling the workflow of image selection (by point or bounding box), dynamic image loading and playback of single images in a video player-like environment. During a stress-test campaign we demonstrated that the system is capable of serving up to 10 simultaneous users on its current lightweight development hardware. It is planned to relocate the software to more powerful hardware by the time of this conference. Conclusions/Outlook: The iMars webGIS is an expert tool for the detection and visualization of surface changes. We demonstrate a technique to dynamically retrieve and display single images based on the time-series structure of the data. Together with the multi-temporal database and its MapServer/MapCache backend, it provides a stable and high-performance environment for the dissemination of the various iMars products. Acknowledgements: This research has received funding from the EU's FP7 Programme under iMars 607379 and by the German Space Agency (DLR Bonn), grant 50 QM 1301 (HRSC on Mars Express).

  18. Context-based adaptive filtering of interest points in image retrieval

    DEFF Research Database (Denmark)

    Nguyen, Phuong Giang; Andersen, Hans Jørgen

    2009-01-01

    Interest points have been used as local features with success in many computer vision applications such as image/video retrieval and object recognition. However, a major issue with this approach is the large number of interest points detected in each image, which creates a dense feature space... a subset of features. Our approach differs from others in the fact that feature selection is based on the context of the given image. Our experimental results show a significant reduction rate of features while preserving the retrieval performance.

  19. Optical multiple-image encryption based on multiplane phase retrieval and interference

    International Nuclear Information System (INIS)

    Chen, Wen; Chen, Xudong

    2011-01-01

    In this paper, we propose a new method for optical multiple-image encryption based on multiplane phase retrieval and interference. An optical encoding system is developed in the Fresnel domain. A phase-only map is iteratively extracted based on a multiplane phase retrieval algorithm, and multiple plaintexts are simultaneously encrypted. Subsequently, the extracted phase-only map is further encrypted into two phase-only masks based on a non-iterative interference algorithm. The advantages and security of the proposed optical cryptosystem during image decryption are analyzed. Numerical results are presented to demonstrate the validity of the proposed optical multiple-image encryption method.
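The non-iterative interference step has a well-known closed form that can be sketched per pixel: any complex value c with |c| ≤ 2 can be written as the sum of two unit-modulus (phase-only) values. The single-pixel framing below is a simplification of the full mask computation.

```python
# Hedged sketch: split a phase-only value exp(i*p) into two phase-only
# masks whose interference (sum) reproduces it. For |c| <= 2,
# c = exp(i*t1) + exp(i*t2) with t1,2 = arg(c) +/- arccos(|c| / 2).
import cmath
import math

def split_into_two_masks(c):
    assert abs(c) <= 2.0
    delta = math.acos(abs(c) / 2.0)
    phi = cmath.phase(c)
    return phi + delta, phi - delta

p = 0.7                               # a toy "phase-only map" pixel
c = cmath.exp(1j * p)                 # unit-modulus target value
t1, t2 = split_into_two_masks(c)
recon = cmath.exp(1j * t1) + cmath.exp(1j * t2)
print(abs(recon - c) < 1e-12)         # the two masks interfere back to c
```

Security comes from the fact that neither mask alone reveals the phase-only map; only their coherent superposition reconstructs it.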

  20. Curvature histogram features for retrieval of images of smooth 3D objects

    International Nuclear Information System (INIS)

    Zhdanov, I; Scherbakov, O; Potapov, A; Peterson, M

    2014-01-01

    We consider image features based on histograms of oriented gradients (HOG) with the addition of a contour curvature histogram (HOG-CH), and compare them with results of the well-known scale-invariant feature transform (SIFT) approach as applied to the retrieval of images of smooth 3D objects.

  1. Publication and Retrieval of Computational Chemical-Physical Data Via the Semantic Web. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Ostlund, Neil [Chemical Semantics, Inc., Gainesville, FL (United States)

    2017-07-20

    This research showed the feasibility of applying the concepts of the Semantic Web to Computational Chemistry. We have created the first web portal (www.chemsem.com) that allows data created in quantum chemistry calculations, and other such chemistry calculations, to be placed on the web in a way that makes the data accessible to scientists in a semantic form never before possible. The semantic web nature of the portal allows data to be searched, found, and used, as an advance over the usual approach of a relational database. The semantic data on our portal has the nature of a Giant Global Graph (GGG) that can be easily merged with related data and searched globally via the SPARQL Protocol and RDF Query Language (SPARQL), which makes global searches for data easier than with traditional methods. Our Semantic Web Portal requires that the data be understood by a computer and hence defined by an ontology (vocabulary). This ontology is used by the computer in understanding the data. We have created such an ontology for computational chemistry (purl.org/gc) that encapsulates a broad knowledge of the field of computational chemistry. We refer to this ontology as the Gainesville Core. While it is perhaps the first ontology for computational chemistry and is used by our portal, it is only a start of what must be a long multi-partner effort to define computational chemistry. In conjunction with the above efforts we have defined a new potential file standard (Common Standard for eXchange, CSX) for computational chemistry data. This CSX file is the precursor of data in the Resource Description Framework (RDF) form that the semantic web requires. Our portal translates CSX files (as well as other computational chemistry data files) into RDF files that are part of the graph database that the semantic web employs. We propose the CSX file as a convenient way to encapsulate computational chemistry data.

  2. Comparison of the effectiveness of alternative feature sets in shape retrieval of multicomponent images

    Science.gov (United States)

    Eakins, John P.; Edwards, Jonathan D.; Riley, K. Jonathan; Rosin, Paul L.

    2001-01-01

    Many different kinds of features have been used as the basis for shape retrieval from image databases. This paper investigates the relative effectiveness of several types of global shape feature, both singly and in combination. The features compared include well-established descriptors such as Fourier coefficients and moment invariants, as well as recently-proposed measures of triangularity and ellipticity. Experiments were conducted within the framework of the ARTISAN shape retrieval system, and retrieval effectiveness assessed on a database of over 10,000 images, using 24 queries and associated ground truth supplied by the UK Patent Office. Our experiments revealed only minor differences in retrieval effectiveness between different measures, suggesting that a wide variety of shape feature combinations can provide adequate discriminating power for effective shape retrieval in multi-component image collections such as trademark registries. Marked differences between measures were observed for some individual queries, suggesting that there could be considerable scope for improving retrieval effectiveness by providing users with an improved framework for searching multi-dimensional feature space.

  3. Similarity estimation for reference image retrieval in mammograms using convolutional neural network

    Science.gov (United States)

    Muramatsu, Chisako; Higuchi, Shunichi; Morita, Takako; Oiwa, Mikinao; Fujita, Hiroshi

    2018-02-01

    Periodic breast cancer screening with mammography is considered effective in decreasing breast cancer mortality. For screening programs to be successful, an intelligent image analytic system may support radiologists' efficient image interpretation. In our previous studies, we have investigated image retrieval schemes for diagnostic references of breast lesions on mammograms and ultrasound images. Using a machine learning method, reliable similarity measures that agree with radiologists' similarity ratings were determined and relevant images could be retrieved. However, our previous method includes a feature extraction step in which hand-crafted features were determined based on manual outlines of the masses. Obtaining manual outlines of masses is not practical in clinical practice, and such data would be operator-dependent. In this study, we investigated a similarity estimation scheme using a convolutional neural network (CNN) to skip that procedure and to determine data-driven similarity scores. When the CNN was used as a feature extractor, with the extracted features employed in determination of similarity measures by a conventional 3-layered neural network, the determined similarity measures correlated well with the subjective ratings, and the precision of retrieving diagnostically relevant images was comparable with that of the conventional method using hand-crafted features. When the CNN was used to determine the similarity measure directly, the result was also comparable. By optimizing the network parameters, results may be further improved. The proposed method has potential usefulness in determining similarity measures without precise lesion outlines for retrieval of similar mass images on mammograms.

  4. Significant wave height retrieval from synthetic radar images

    NARCIS (Netherlands)

    Wijaya, Andreas Parama; van Groesen, Embrecht W.C.

    2014-01-01

    In many offshore activities radar imagery is used to observe and predict ocean waves. An important issue in analyzing the radar images is to resolve the significant wave height. Different from 3DFFT methods that use an estimate related to the square root of the signal-to-noise ratio of radar images,

  5. Multiscale Distance Coherence Vector Algorithm for Content-Based Image Retrieval

    Science.gov (United States)

    Jiexian, Zeng; Xiupeng, Liu

    2014-01-01

    A multiscale distance coherence vector algorithm for content-based image retrieval (CBIR) is proposed to address two shortcomings of the distance coherence vector algorithm: different shapes can yield the same descriptor, and its noise robustness is poor. In this algorithm, the image contour curve is first evolved by a Gaussian function, and the distance coherence vector is then extracted from the contours of the original image and the evolved images. The multiscale distance coherence vector is obtained by a reasonable weight distribution over the distance coherence vectors of the evolved image contours. This algorithm is not only invariant to translation, rotation, and scaling transformations but also has good noise robustness. The experimental results show that the algorithm achieves higher recall and precision rates for the retrieval of images polluted by noise. PMID:24883416
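The multiscale idea can be sketched loosely as follows: the contour is smoothed at several scales, a distance-based descriptor is extracted at each scale, and the descriptors are combined with scale weights. Here the "descriptor" is simply the centroid-to-contour distance sequence, the smoothing is a crude neighbor average standing in for Gaussian evolution, and the weights are invented.

```python
# Hedged multiscale contour-descriptor sketch (toy details throughout).
import math

def smooth(contour, passes):
    pts = contour
    for _ in range(passes):                       # crude Gaussian-like smoothing
        n = len(pts)
        pts = [((pts[i-1][0] + pts[i][0] + pts[(i+1) % n][0]) / 3,
                (pts[i-1][1] + pts[i][1] + pts[(i+1) % n][1]) / 3)
               for i in range(n)]
    return pts

def distance_vector(contour):
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    return [math.hypot(p[0] - cx, p[1] - cy) for p in contour]

def multiscale_descriptor(contour, scales=(0, 1, 2), weights=(0.5, 0.3, 0.2)):
    desc = [0.0] * len(contour)
    for s, w in zip(scales, weights):
        dv = distance_vector(smooth(contour, s))
        desc = [d + w * v for d, v in zip(desc, dv)]
    return desc

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print([round(d, 3) for d in multiscale_descriptor(square)])
```

Averaging over scales is what dampens the contribution of noise-induced contour wiggles, which only survive at the finest scale.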

  6. Three-dimensional imaging using phase retrieval with two focus planes

    Science.gov (United States)

    Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh; Meir, Rinat; Zalevsky, Zeev

    2016-03-01

    This work presents a technique for full 3D imaging of biological samples tagged with gold nanoparticles (GNPs) using only two images, rather than the many images per volume currently needed for 3D optical sectioning microscopy. The proposed approach is based on the Gerchberg-Saxton (GS) phase retrieval algorithm. The reconstructed field is free-space propagated to all other focus planes in post-processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because the phase retrieval is applied to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. In addition, since the method requires capturing only two images, it is suitable for 3D live cell imaging. The proposed concept is presented and validated both on simulated data and experimentally.
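A minimal Gerchberg-Saxton sketch (numpy assumed available): given intensity measurements in two Fourier-related planes, the algorithm alternates between them, keeping the retrieved phase and replacing the modulus with the measured one. The paper's two-focus-plane variant uses free-space propagation between the planes rather than a plain FFT, so this 1D toy only illustrates the iteration structure.

```python
# Hedged GS iteration on a synthetic 1-D field.
import numpy as np

rng = np.random.default_rng(0)
true_field = rng.standard_normal(64) + 1j * rng.standard_normal(64)
amp1 = np.abs(true_field)                # measured modulus in plane 1
amp2 = np.abs(np.fft.fft(true_field))    # measured modulus in plane 2

field = amp1 * np.exp(1j * rng.uniform(0, 2 * np.pi, 64))  # random phase start
errors = []
for _ in range(50):
    spec = np.fft.fft(field)
    errors.append(np.linalg.norm(np.abs(spec) - amp2))
    spec = amp2 * np.exp(1j * np.angle(spec))    # enforce plane-2 modulus
    field = np.fft.ifft(spec)
    field = amp1 * np.exp(1j * np.angle(field))  # enforce plane-1 modulus

print(errors[0], errors[-1])  # the constraint error should decrease
```

Once the complex field at one plane is recovered, numerically propagating it to arbitrary defocus planes yields the full z-stack, which is the paper's route to 3D from two captures.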

  7. Content Based Radiographic Images Indexing and Retrieval Using Pattern Orientation Histogram

    Directory of Open Access Journals (Sweden)

    Abolfazl Lakdashti

    2008-06-01

    Full Text Available Introduction: Content-based image retrieval (CBIR) is a method of image searching and retrieval in a database. In medical applications, CBIR is a tool used by physicians to compare previous and current medical images associated with patients' pathological conditions. As the volume of pictorial information stored in medical image databases grows, efficient image indexing and retrieval is increasingly becoming a necessity. Materials and Methods: This paper presents a new content-based radiographic image retrieval approach based on a histogram of pattern orientations, namely the pattern orientation histogram (POH). POH represents the spatial distribution of five different pattern orientations: vertical, horizontal, diagonal down/left, diagonal down/right and non-orientation. In this method, a given image is first divided into image-blocks and the frequency of each type of pattern is determined in each image-block. Then, local pattern histograms for each of these image-blocks are computed. Results: The method was compared to two well-known texture-based image retrieval methods: Tamura and the Edge Histogram Descriptor (EHD) of the MPEG-7 standard. Experimental results based on the 10000-image IRMA radiography dataset demonstrate that POH provides better precision and recall rates than Tamura and EHD. For some images, the recall and precision rates obtained by POH are, respectively, 48% and 18% better than the best of the two above-mentioned methods. Discussion and Conclusion: Since we exploit the absolute location of the pattern in the image as well as its global composition, the proposed matching method can retrieve semantically similar medical images.
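A simplified take on the per-block histogram step: pixel-level gradient directions are quantized into the five pattern classes and counted within a block. The thresholds, block size, and gradient operator below are invented stand-ins for the paper's pattern classification.

```python
# Hedged POH-style block histogram over a toy grayscale image.
import math

ORIENTS = ["horizontal", "vertical", "diag_down_left", "diag_down_right", "none"]

def classify(gx, gy, thresh=0.5):
    if math.hypot(gx, gy) < thresh:
        return "none"
    angle = math.degrees(math.atan2(gy, gx)) % 180
    if angle < 22.5 or angle >= 157.5:
        return "vertical"       # gradient along x => vertical edge
    if 67.5 <= angle < 112.5:
        return "horizontal"     # gradient along y => horizontal edge
    return "diag_down_right" if angle < 67.5 else "diag_down_left"

def block_histogram(image, r0, c0, size=2):
    hist = dict.fromkeys(ORIENTS, 0)
    for r in range(r0, r0 + size):
        for c in range(c0, c0 + size):
            gx = image[r][c + 1] - image[r][c]      # forward differences
            gy = image[r + 1][c] - image[r][c]
            hist[classify(gx, gy)] += 1
    return hist

# a toy image with a vertical step edge inside the block
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
print(block_histogram(img, 0, 0))
```

Concatenating such histograms over all image-blocks preserves the absolute location of patterns, which is what lets POH match semantically similar radiographs.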

  8. Algorithm for image retrieval based on edge gradient orientation statistical code.

    Science.gov (United States)

    Zeng, Jiexian; Zhao, Yonggang; Li, Weiye; Fu, Xiang

    2014-01-01

    The image edge gradient direction not only contains important shape information but is also simple and of low complexity. Considering that edge gradient direction histograms and the edge direction autocorrelogram are not rotation invariant, we put forward an image retrieval algorithm based on an edge gradient orientation statistical code (hereinafter referred to as EGOSC), which applies the statistical method used for eight-neighborhood edge-direction chain codes to the statistics of the edge gradient direction. Firstly, we construct the n-direction vector and impose a maximal-summation restriction on the EGOSC to ensure the algorithm is effectively rotation invariant. Then, we use the Euclidean distance of the edge gradient direction entropy to measure shape similarity, so that the method is not sensitive to scaling, color, and illumination changes. The experimental results and the algorithm analysis demonstrate that the algorithm can be used for content-based image retrieval and achieves good retrieval results.
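The rotation-invariance trick can be illustrated on a toy 8-bin code: cyclically shift the code so the largest bin comes first (a stand-in for the paper's maximal-summation restriction), then compare codes by Euclidean distance. The histograms below are invented.

```python
# Hedged EGOSC-style sketch: canonicalized 8-direction codes compared
# by Euclidean distance; a rotated shape maps to the same code.
import math

def canonical_code(hist):
    k = hist.index(max(hist))
    return hist[k:] + hist[:k]          # cyclic shift: largest bin first

def code_distance(h1, h2):
    c1, c2 = canonical_code(h1), canonical_code(h2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

shape     = [5, 1, 0, 0, 2, 0, 0, 1]
rotated   = shape[3:] + shape[:3]       # same shape, rotated by 3 bins
different = [2, 2, 2, 2, 0, 0, 0, 0]
print(code_distance(shape, rotated), code_distance(shape, different))
```

Normalizing the histogram by the number of edge pixels would additionally give scale invariance, in the spirit of the abstract's insensitivity claims.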

  9. Multi-instance learning based on instance consistency for image retrieval

    Science.gov (United States)

    Zhang, Miao; Wu, Zhize; Wan, Shouhong; Yue, Lihua; Yin, Bangjie

    2017-07-01

    Multiple-instance learning (MIL) has been successfully utilized in image retrieval. Existing approaches cannot select positive instances correctly from positive bags, which may result in low accuracy. In this paper, we propose a new image retrieval approach called multiple-instance learning based on instance-consistency (MILIC) to mitigate this issue. First, we select potential positive instances effectively in each positive bag by ranking instance-consistency (IC) values of instances. Then, we design a feature representation scheme, which can represent the relationship among bags and instances, based on potential positive instances to convert a bag into a single instance. Finally, we can use a standard single-instance learning strategy, such as the support vector machine, for performing object-based image retrieval. Experimental results on two challenging data sets show the effectiveness of our proposal in terms of accuracy and run time.

  10. Large Scale Hierarchical K-Means Based Image Retrieval With MapReduce

    Science.gov (United States)

    2014-03-27

    flat vocabulary on MapReduce. In 2013, Moise and Shestakov [32, 40] researched large-scale indexing and search with MapReduce. They... time will be greatly reduced; however, image retrieval performance will almost certainly suffer. Moise and Shestakov ran tests with 100M images on 108... 43-72, 2005. [32] Diana Moise, Denis Shestakov, Gylfi Gudmundsson, and Laurent Amsaleg. Indexing and searching 100m images with map-reduce. In
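The hierarchical k-means vocabulary named in the title can be sketched as a tree of cluster centers: a feature is quantized by descending the tree, moving to the nearest center at each level, so lookup cost is k times the depth instead of k to the power of the depth for a flat vocabulary. The tiny hand-built 1D tree below is hypothetical.

```python
# Hedged sketch of vocabulary-tree quantization (toy 1-D features).

TREE = {
    "centers": [0.0, 10.0],
    "children": [
        {"centers": [-2.0, 2.0], "children": None},
        {"centers": [8.0, 12.0], "children": None},
    ],
}

def quantize(x, node, path=()):
    centers = node["centers"]
    best = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
    path = path + (best,)
    if node["children"] is None:
        return path                     # leaf id = sequence of branch choices
    return quantize(x, node["children"][best], path)

print(quantize(1.5, TREE), quantize(11.0, TREE))
```

In a MapReduce setting, the (small) tree is broadcast to every worker and each mapper quantizes its shard of image features independently, which is what makes the scheme scale.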

  11. Effects of Diacritics on Web Search Engines’ Performance for Retrieval of Yoruba Documents

    Directory of Open Access Journals (Sweden)

    Toluwase Victor Asubiaro

    2014-06-01

    Full Text Available This paper aims to find out the possible effect of the use or nonuse of diacritics in Yoruba search queries on the performance of major search engines, AOL, Bing, Google and Yahoo!, in retrieving documents. 30 Yoruba queries created from the most searched keywords from Nigeria on Google search logs were submitted to the search engines. The search queries were posed to the search engines without diacritics and then with diacritics. All of the search engines retrieved more sites in response to the queries without diacritics. Also, they all retrieved more precise results for queries without diacritics. The search engines also answered more queries without diacritics. There was no significant difference in the precision values of any two of the four search engines for diacritized and undiacritized queries. There was a significant difference in the effectiveness of AOL and Yahoo when diacritics were applied and when they were not applied. The findings of the study indicate that the search engines do not find a relationship between the diacritized Yoruba words and the undiacritized versions. Therefore, there is a need for search engines to add normalization steps to pre-process Yoruba queries and indexes. This study concentrates on a problem with search engines that has not been previously investigated.
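The normalization step the paper recommends can be sketched with Unicode decomposition: diacritics are combining marks after NFD normalization, so stripping them maps diacritized and undiacritized forms of a Yoruba query to the same index terms. The sample phrase is illustrative.

```python
# Hedged sketch: strip diacritics via Unicode NFD decomposition.
import unicodedata

def strip_diacritics(text):
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed
                   if unicodedata.category(ch) != "Mn")  # drop combining marks

print(strip_diacritics("Ọjọ́ dára"))  # "Ojo dara"
```

Applying this both at indexing time and at query time would let a search engine treat "Ọjọ́" and "Ojo" as the same term, addressing the mismatch the study observed.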

  12. SWHi system description : A case study in information retrieval, inference, and visualization in the Semantic Web

    NARCIS (Netherlands)

    Fahmi, Ismail; Zhang, Junte; Ellermann, Henk; Bouma, Gosse; Franconi, E; Kifer, M; May, W

    2007-01-01

    Search engines have become the most popular tools for finding information on the Internet. A real-world Semantic Web application can benefit from this by combining its features with some features from search engines. In this paper, we describe methods for indexing and searching a populated ontology

  13. Uncertainties in cloud phase and optical thickness retrievals from the Earth Polychromatic Imaging Camera (EPIC)

    Science.gov (United States)

    Meyer, Kerry; Yang, Yuekui; Platnick, Steven

    2018-01-01

    This paper presents an investigation of the expected uncertainties of a single-channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud-temperature-threshold-based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single-channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single-channel COT retrieval is feasible for EPIC. For ice clouds, single-channel retrieval errors are minimal; for liquid clouds the error is mostly limited to within 10%, although errors are larger for thin clouds. Uncertainties in cloud masking and cloud temperature retrievals are not considered in this study. PMID:29619116
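A single-channel COT retrieval of this kind is commonly implemented as a lookup-table inversion: with the cloud effective radius fixed per phase, a forward model maps COT to simulated visible reflectance, and the observed reflectance is inverted by interpolation. The table values below are invented toy numbers, not EPIC or MODIS data.

```python
# Hedged lookup-table inversion sketch for single-channel COT.

COT_GRID = [1, 2, 4, 8, 16, 32]
REFL_LUT = [0.10, 0.18, 0.30, 0.45, 0.60, 0.72]   # monotonic toy forward model

def retrieve_cot(reflectance):
    if reflectance <= REFL_LUT[0]:
        return COT_GRID[0]
    if reflectance >= REFL_LUT[-1]:
        return COT_GRID[-1]
    for i in range(len(REFL_LUT) - 1):
        lo, hi = REFL_LUT[i], REFL_LUT[i + 1]
        if lo <= reflectance <= hi:
            t = (reflectance - lo) / (hi - lo)   # linear interpolation
            return COT_GRID[i] + t * (COT_GRID[i + 1] - COT_GRID[i])

print(retrieve_cot(0.375))  # halfway between the 0.30 and 0.45 entries -> COT 6.0
```

The uncertainty analysis in the paper effectively asks how wrong this inversion becomes when the assumed fixed CER (which shapes the table) differs from the true one.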

  14. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate the bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming the local feature can be reconstructed by its neighboring visual words in the vocabulary, we solve the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
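The assignment idea can be sketched in miniature: a local descriptor is reconstructed from its nearest visual words, and the reconstruction weights serve as soft-assignment ("contribution") values. Here the weights come from an unconstrained closed-form solve in 2D followed by normalization; the paper solves a proper constrained QP.

```python
# Hedged reconstruction-weight assignment sketch (two words, 2-D space).

def reconstruction_weights(descriptor, words):
    # solve descriptor = w1 * word1 + w2 * word2 exactly (2x2 linear system)
    (a1, a2), (b1, b2) = words
    d1, d2 = descriptor
    det = a1 * b2 - a2 * b1
    w1 = (d1 * b2 - d2 * b1) / det
    w2 = (a1 * d2 - a2 * d1) / det
    s = w1 + w2
    return (w1 / s, w2 / s)   # normalize so contributions sum to 1

words = [(1.0, 0.0), (0.0, 1.0)]
print(reconstruction_weights((0.25, 0.75), words))
```

Compared with hard assignment (all weight on the single nearest word), such soft weights reduce quantization error for descriptors that fall between cluster centers.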

  15. Automated Region of Interest Retrieval of Metallographic Images for Quality Classification in Industry

    Directory of Open Access Journals (Sweden)

    Petr Kotas

    2012-01-01

    Full Text Available The aim of this research is the development and testing of new methods to classify the quality of metallographic samples of steels with high added value (for example, grades X70 according to API. In this paper, we address the development of methods to classify the quality of slab sample images, with the main emphasis on the quality of the image center, called the segregation area. For this purpose, we introduce an alternative method for automated retrieval of the region of interest. In the first step, the metallographic image is segmented using both a spectral method and thresholding. Then, the extracted macrostructure of the metallographic image is automatically analyzed by statistical methods. Finally, the automatically extracted regions of interest are compared with the results of human experts. Practical experience with the retrieval of non-homogeneous, noisy digital images in an industrial environment is discussed as well.

  16. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam

    2012-01-01

    In this paper, we investigate the bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming the local feature can be reconstructed by its neighboring visual words in the vocabulary, we solve the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.

  17. Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods

    Science.gov (United States)

    Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose

    2018-06-01

    An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
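As a concrete instance of category (1), parametric regression via a vegetation index, the classic NDVI contrasts near-infrared and red reflectance; the reflectance values below are toy numbers, not data from any mission.

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red): high for dense green canopies (strong
# NIR scattering, strong red absorption), low for sparse vegetation or soil.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

nir = np.array([0.45, 0.50, 0.30])   # toy near-infrared reflectances
red = np.array([0.05, 0.08, 0.20])   # toy red reflectances
v = ndvi(nir, red)
```

A biophysical variable such as LAI would then be estimated from the index via a fitted (parametric) relationship, which is exactly where the review's multicollinearity and robustness concerns enter.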

  18. Multi-Spectral Cloud Retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS)

    Science.gov (United States)

    Platnick, Steven

    2004-01-01

    MODIS observations from the NASA EOS Terra spacecraft (1030 local time equatorial sun-synchronous crossing), launched in December 1999, have provided a unique set of Earth observation data. With the launch of the NASA EOS Aqua spacecraft (1330 local time crossing) in May 2002, two MODIS daytime (sunlit) and nighttime observations are now available in a 24-hour period, allowing some measure of diurnal variability. A comprehensive set of remote sensing algorithms for cloud masking and the retrieval of cloud physical and optical properties has been developed by members of the MODIS atmosphere science team. The archived products from these algorithms have applications in climate modeling, climate change studies, numerical weather prediction, as well as fundamental atmospheric research. In addition to an extensive cloud mask, products include cloud-top properties (temperature, pressure, effective emissivity), cloud thermodynamic phase, cloud optical and microphysical parameters (optical thickness, effective particle radius, water path), as well as derived statistics. An overview of the instrument and cloud algorithms will be presented along with various examples, including an initial analysis of several operational global gridded (Level-3) cloud products from the two platforms. Statistics of cloud optical and microphysical properties as a function of latitude for land and ocean regions will be shown. Current algorithm research efforts will also be discussed.

  19. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    Science.gov (United States)

    Zerkin, V. V.; Pritychenko, B.

    2018-04-01

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  20. World-Wide Web Tools for Locating Planetary Images

    Science.gov (United States)

    Kanefsky, Bob; Deiss, Ron (Technical Monitor)

    1995-01-01

    The explosive growth of the World-Wide Web (WWW) in the past year has made it feasible to provide interactive graphical tools to assist scientists in locating planetary images. The highest available resolution images of any site of interest can be quickly found on a map or plot and, if online, displayed immediately on nearly any computer equipped with a color screen, an Internet connection, and any of the free WWW browsers. The same tools may also be of interest to educators, students, and the general public. Image-finding tools have been implemented covering most of the solar system: Earth, Mars, and the moons and planets imaged by Voyager. The Mars image-finder, which plots the footprints of all the high-resolution Viking Orbiter images and can be used to display any that are available online, also contains a complete scrollable atlas and hypertext gazetteer to help locate areas. The Earth image-finder is linked to thousands of Shuttle images stored at NASA/JSC, and displays them as red dots on a globe. The Voyager image-finder plots images as dots, by longitude and apparent target size, linked to online images. The locator (URL) for the top-level page is http://ic-www.arc.nasa.gov/ic/projects/bayes-group/Atlas/. Through the efforts of the Planetary Data System and other organizations, hundreds of thousands of planetary images are now available on CD-ROM, and many of these have been made available on the WWW. However, locating images of a desired site is still problematic in practice. For example, many scientists studying Mars use digital image maps, which are one third the resolution of Viking Orbiter survey images. When they do use Viking Orbiter images, they often work with photographically printed hardcopies, which lack the flexibility of digital images: magnification, contrast stretching, and other basic image-processing techniques offered by off-the-shelf software.
From the perspective of someone working on an experimental image processing technique for

  1. Words Matter: Scene Text for Image Classification and Retrieval

    NARCIS (Netherlands)

    Karaoglu, S.; Tao, R.; Gevers, T.; Smeulders, A.W.M.

    Text in natural images typically adds meaning to an object or scene. In particular, text specifies which business places serve drinks (e.g., cafe, teahouse) or food (e.g., restaurant, pizzeria), and what kind of service is provided (e.g., massage, repair). The mere presence of text, its words, and

  2. Aspect-based Relevance Learning for Image Retrieval

    NARCIS (Netherlands)

    M.J. Huiskes (Mark)

    2005-01-01

    We analyze the special structure of the relevance feedback learning problem, focusing particularly on the effects of image selection by partial relevance on the clustering behavior of feedback examples. We propose a scheme, aspect-based relevance learning, which guarantees that feedback

  3. Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed

    Science.gov (United States)

    Taylor, Jaime; Rakoczy, John; Steincamp, James

    2003-01-01

    Phase retrieval requires calculation of the real-valued phase of the pupil function from the image intensity distribution and characteristics of an optical system. Genetic algorithms were used to solve two one-dimensional phase retrieval problems. A GA successfully estimated the coefficients of a polynomial expansion of the phase when the number of coefficients was correctly specified. A GA also successfully estimated the multiple phases of a segmented optical system analogous to the seven-mirror Systematic Image-Based Optical Alignment (SIBOA) testbed located at NASA's Marshall Space Flight Center. The SIBOA testbed was developed to investigate phase retrieval techniques. Tip/tilt and piston motions of the mirrors accomplish phase corrections. A constant phase over each mirror can be achieved by an independent tip/tilt correction: the phase correction term can then be factored out of the Discrete Fourier Transform (DFT), greatly reducing computations.

  4. Neural mechanism of implicit and explicit memory retrieval: functional MR imaging

    International Nuclear Information System (INIS)

    Kang, Heoung Keun; Jeong, Gwang Woo; Park, Tae Jin; Seo, Jeong Jin; Kim, Hyung Joong; Eun, Sung Jong; Chung, Tae Woong

    2003-01-01

    To identify, using functional MR imaging, distinct cerebral centers and to evaluate the neural mechanism associated with implicit and explicit retrieval of words during conceptual processing. Seven healthy volunteers aged 21-25 (mean, 22) years underwent BOLD-based fMR imaging using a 1.5T Signa Horizon EchoSpeed MR system. To activate the cerebral cortices, a series of tasks was performed as follows: the encoding of two-syllable words, and implicit and explicit retrieval of previously learned words during conceptual processing. The activation paradigm consisted of a cycle of alternating periods of 30 seconds of stimulation and 30 seconds of rest. Stimulation was accomplished by encoding eight two-syllable words and the retrieval of previously presented words, while the control condition was a white screen with a small fixed cross. During the tasks we acquired ten slices (6 mm slice thickness, 1 mm gap) parallel to the AC-PC line, and the resulting functional activation maps were reconstructed using a statistical parametric mapping program (SPM99). A comparison of activation ratios (percentages), based on the number of volunteers, showed that activation of Rhs-35, PoCiG-23 and ICiG-26·30 was associated with explicit retrieval only; other brain areas were activated during the performance of both implicit and explicit retrieval tasks. Activation ratios were higher for explicit tasks than for implicit; in the cingulate gyrus and temporal lobe they were 30% and 10% greater, respectively. During explicit retrieval, a distinct brain activation index (percentage) was seen in the temporal, parietal, and occipital lobe and cingulate gyrus, and PrCeG-4, Pr/PoCeG-43 in the frontal lobe. During implicit retrieval, on the other hand, activity was greater in the frontal lobe, including the areas of SCA-25, SFG/MFG-10, IFG-44·45, OrbG-11·47, SFG-6·8 and MFG-9·46. Overall, activation was lateralized mainly in the left hemisphere during both implicit and explicit retrieval

  5. Neural mechanism of implicit and explicit memory retrieval: functional MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Heoung Keun; Jeong, Gwang Woo; Park, Tae Jin; Seo, Jeong Jin; Kim, Hyung Joong; Eun, Sung Jong; Chung, Tae Woong [Chonnam National University Medical School, Gwangju (Korea, Republic of)

    2003-03-01

    To identify, using functional MR imaging, distinct cerebral centers and to evaluate the neural mechanism associated with implicit and explicit retrieval of words during conceptual processing. Seven healthy volunteers aged 21-25 (mean, 22) years underwent BOLD-based fMR imaging using a 1.5T Signa Horizon EchoSpeed MR system. To activate the cerebral cortices, a series of tasks was performed as follows: the encoding of two-syllable words, and implicit and explicit retrieval of previously learned words during conceptual processing. The activation paradigm consisted of a cycle of alternating periods of 30 seconds of stimulation and 30 seconds of rest. Stimulation was accomplished by encoding eight two-syllable words and the retrieval of previously presented words, while the control condition was a white screen with a small fixed cross. During the tasks we acquired ten slices (6 mm slice thickness, 1 mm gap) parallel to the AC-PC line, and the resulting functional activation maps were reconstructed using a statistical parametric mapping program (SPM99). A comparison of activation ratios (percentages), based on the number of volunteers, showed that activation of Rhs-35, PoCiG-23 and ICiG-26·30 was associated with explicit retrieval only; other brain areas were activated during the performance of both implicit and explicit retrieval tasks. Activation ratios were higher for explicit tasks than for implicit; in the cingulate gyrus and temporal lobe they were 30% and 10% greater, respectively. During explicit retrieval, a distinct brain activation index (percentage) was seen in the temporal, parietal, and occipital lobe and cingulate gyrus, and PrCeG-4, Pr/PoCeG-43 in the frontal lobe. During implicit retrieval, on the other hand, activity was greater in the frontal lobe, including the areas of SCA-25, SFG/MFG-10, IFG-44·45, OrbG-11·47, SFG-6·8 and MFG-9·46. Overall, activation was lateralized mainly in the left

  6. Wavelet optimization for content-based image retrieval in medical databases.

    Science.gov (United States)

    Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C

    2010-04-01

    We propose in this article a content-based image retrieval (CBIR) method for diagnosis aid in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the most similar images in a database when a query image is submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases: one for diabetic retinopathy follow-up and one for screening mammography, as well as a general-purpose database. Results are promising: a mean precision of 56.50%, 70.91% and 96.10% is achieved for these three databases, when five images are returned by the system. Copyright 2009 Elsevier B.V. All rights reserved.
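The signature-plus-distance pipeline can be sketched with a one-level Haar transform. The paper characterizes the coefficient distribution per subband and optimizes the wavelet basis and subband weights, whereas the simple moments and unit weights below are simplified stand-ins.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform, returning (LL, LH, HL, HH) subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # horizontal detail
    return ((a[0::2] + a[1::2]) / 2.0, (a[0::2] - a[1::2]) / 2.0,
            (d[0::2] + d[1::2]) / 2.0, (d[0::2] - d[1::2]) / 2.0)

def signature(img):
    """Per-subband (mean |coefficient|, standard deviation) pairs."""
    return np.array([(np.abs(s).mean(), s.std()) for s in haar2d(img)]).ravel()

def distance(sig_a, sig_b, weights=None):
    """Weighted Euclidean distance; weights would be tuned per pathology."""
    w = np.ones_like(sig_a) if weights is None else weights
    return float(np.sqrt((w * (sig_a - sig_b) ** 2).sum()))

rng = np.random.default_rng(1)
smooth = rng.normal(size=(32, 32)).cumsum(axis=0).cumsum(axis=1)  # low frequency
noisy = rng.normal(size=(32, 32))                                  # high frequency
d = distance(signature(smooth), signature(noisy))
```

Images with similar subband statistics end up close under this distance, which is what the retrieval step exploits.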

  7. Supporting Keyword Search for Image Retrieval with Integration of Probabilistic Annotation

    Directory of Open Access Journals (Sweden)

    Tie Hua Zhou

    2015-05-01

    Full Text Available The ever-increasing quantities of digital photo resources are annotated with enriching vocabularies to form semantic annotations. Photo-sharing social networks have boosted the need for efficient and intuitive querying to respond to user requirements in large-scale image collections. In order to help users formulate efficient and effective image retrieval, we present a novel integration of a probabilistic model into a keyword-query architecture that models the probability distribution of image annotations, allowing users to obtain satisfactory results from image retrieval via the integration of multiple annotations. We focus on the annotation integration step in order to specify the meaning of each image annotation, thus leading to the most representative annotations of the intent of a keyword search. For this demonstration, we show how a probabilistic model has been integrated with semantic annotations to allow users to intuitively define explicit and precise keyword queries in order to retrieve satisfactory image results distributed in heterogeneous large data sources. Our experiments on the SBU (collected by Stony Brook University) database show that (i) our integrated annotation contains higher-quality representatives and semantic matches; and (ii) annotation integration can indeed improve image search result quality.
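A minimal sketch of ranking images by integrated annotation probabilities: the per-image tag distributions and the query below are invented for illustration, and the paper's actual integration model is richer than a plain sum.

```python
# Toy P(tag | image) annotation distributions, e.g. produced by merging
# multiple annotation sources for each image.
annotations = {
    "img1": {"beach": 0.7, "sunset": 0.2, "dog": 0.1},
    "img2": {"dog": 0.6, "park": 0.3, "ball": 0.1},
    "img3": {"beach": 0.5, "dog": 0.4, "sea": 0.1},
}

def rank(query_tags):
    """Score each image by the summed probability of the query tags."""
    scores = {img: sum(p.get(t, 0.0) for t in query_tags)
              for img, p in annotations.items()}
    return sorted(scores, key=scores.get, reverse=True)

result = rank(["beach", "dog"])   # img3 scores 0.9, img1 0.8, img2 0.6
```

The keyword query thus retrieves images whose integrated annotations best cover all query terms, rather than requiring exact tag matches.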

  8. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas; Louie, Katherine; Bethel, E. Wes; Northen, Trent R.; Bowen, Benjamin P.

    2013-10-02

    Mass spectrometry imaging (MSI) enables researchers to probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
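Why chunking helps selective access can be illustrated without HDF5 at all: in a row-major (x, y, m/z) cube, a full spectrum is contiguous in memory while an ion image (one m/z plane) is scattered with a large stride. HDF5 chunking, as used by OpenMSI, stores small blocks so that both access patterns read contiguous chunks. The shape below is an arbitrary toy size.

```python
import numpy as np

cube = np.arange(4 * 4 * 8).reshape(4, 4, 8)   # (x, y, m/z), C order

spectrum = cube[1, 2, :]     # one full spectrum at a pixel
ion_image = cube[:, :, 3]    # one ion image (fixed m/z)

# Element spacing in the flat buffer for each access pattern:
spec_stride = cube.strides[2] // cube.itemsize   # 1 -> contiguous reads
img_stride = cube.strides[1] // cube.itemsize    # 8 -> scattered reads
```

With a chunked layout (e.g. small (x, y, m/z) blocks), the per-element stride penalty for the ion-image pattern disappears, which is the access-time optimization the abstract describes.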

  9. Qualification of a Null Lens Using Image-Based Phase Retrieval

    Science.gov (United States)

    Bolcar, Matthew R.; Aronstein, David L.; Hill, Peter C.; Smith, J. Scott; Zielinski, Thomas P.

    2012-01-01

    In measuring the figure error of an aspheric optic using a null lens, the wavefront contribution from the null lens must be independently and accurately characterized in order to isolate the optical performance of the aspheric optic alone. Various techniques can be used to characterize such a null lens, including interferometry, profilometry and image-based methods. Only image-based methods, such as phase retrieval, can measure the null-lens wavefront in situ - in single-pass, and at the same conjugates and in the same alignment state in which the null lens will ultimately be used - with no additional optical components. Due to the intended purpose of a null lens (e.g., to null a large aspheric wavefront with a near-equal-but-opposite spherical wavefront), characterizing a null-lens wavefront presents several challenges to image-based phase retrieval: Large wavefront slopes and high-dynamic-range data decrease the capture range of phase-retrieval algorithms, increase the requirements on the fidelity of the forward model of the optical system, and make it difficult to extract diagnostic information (e.g., the system F/#) from the image data. In this paper, we present a study of these effects on phase-retrieval algorithms in the context of a null lens used in component development for the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission. Approaches for mitigation are also discussed.
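Image-based phase retrieval of this kind can be illustrated with the classic Gerchberg-Saxton iteration; this toy example is not the authors' algorithm, and the pupil, aberration, and array sizes are arbitrary.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, image_amp, iters=200, seed=0):
    """Classic Gerchberg-Saxton loop: recover the pupil phase given the
    pupil amplitude and the measured focal-plane (image) amplitude."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)
    for _ in range(iters):
        img = np.fft.fft2(pupil_amp * np.exp(1j * phase))
        img = image_amp * np.exp(1j * np.angle(img))   # impose image amplitude
        phase = np.angle(np.fft.ifft2(img))            # keep pupil amplitude
    return phase

# Forward model: circular pupil with a smooth "true" aberration; only the
# image-plane amplitude is "measured".
n = 32
y, x = np.mgrid[:n, :n] - n // 2
pupil = (x**2 + y**2 < (n // 3) ** 2).astype(float)
true_phase = 0.5 * np.exp(-(x**2 + y**2) / 50.0)
meas = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))

est0 = gerchberg_saxton(pupil, meas, iters=0)    # random starting phase
est = gerchberg_saxton(pupil, meas)              # iterated estimate

def resid(ph):
    """Mismatch between modeled and measured image amplitude."""
    return np.abs(np.abs(np.fft.fft2(pupil * np.exp(1j * ph))) - meas).sum()
```

The capture-range problem the abstract describes corresponds to this iteration stagnating when the true wavefront slopes are large relative to what the forward model and sampling can represent.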

  10. Content-Based Image Retrieval Benchmarking: Utilizing color categories and color distributions

    NARCIS (Netherlands)

    van den Broek, Egon; Kisters, Peter M.F.; Vuurpijl, Louis G.

    From a human-centered perspective, three ingredients for Content-Based Image Retrieval (CBIR) were developed. First, with their existence confirmed by experimental data, 11 color categories were utilized for CBIR and used as input for a new color space segmentation technique. The complete HSI color

  11. The utilization of human color categorization for content-based image retrieval

    NARCIS (Netherlands)

    van den Broek, Egon; Rogowitz, Bernice E.; Kisters, Peter M.F.; Pappas, Thrasyvoulos N.; Vuurpijl, Louis G.

    2004-01-01

    We present the concept of intelligent Content-Based Image Retrieval (iCBIR), which incorporates knowledge concerning human cognition in system development. The present research focuses on the utilization of color categories (or focal colors) for CBIR purposes, in particular considered to be useful

  12. G-Bean: an ontology-graph based web tool for biomedical literature retrieval.

    Science.gov (United States)

    Wang, James Z; Zhang, Yuanyuan; Dong, Liang; Li, Lin; Srimani, Pradip K; Yu, Philip S

    2014-01-01

    Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph-based biomedical search engine, to search biomedical articles in the MEDLINE database more efficiently. G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in the National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph and the Term Frequency-Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on user's search intention: after the user selects any article from the existing search results, G-Bean analyzes the user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database.
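The TF-IDF re-ranking step in innovation (2) can be sketched as follows; the corpus and expansion concepts are invented for illustration, and the real system computes these weights over MEDLINE-scale data after PageRank scoring.

```python
import math
from collections import Counter

# Toy corpus standing in for indexed abstracts.
docs = [
    "heart attack myocardial infarction treatment",
    "aspirin therapy myocardial infarction",
    "influenza vaccine efficacy",
]

def tf_idf(term, doc, corpus):
    """Term frequency in one document times inverse document frequency."""
    tf = Counter(doc.split())[term]
    df = sum(term in d.split() for d in corpus)
    return tf * math.log(len(corpus) / df) if df else 0.0

# Concepts reached by ontology-graph expansion, re-ranked by corpus weight.
expanded = ["myocardial", "infarction", "influenza"]
scores = {t: sum(tf_idf(t, d, docs) for d in docs) for t in expanded}
ranked = sorted(expanded, key=scores.get, reverse=True)
```

Rare but informative concepts get high IDF and float to the top, which is why the top-ranked concepts make useful query-expansion terms.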
PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean

  13. Research on Techniques of Multifeatures Extraction for Tongue Image and Its Application in Retrieval

    Directory of Open Access Journals (Sweden)

    Liyan Chen

    2017-01-01

    Full Text Available Tongue diagnosis is one of the important methods in traditional Chinese medicine. Doctors can assess a disease's status by observing the patient's tongue color and texture. This paper presents a novel approach to extracting color and texture features of tongue images. First, we use an improved GLA (Generalized Lloyd Algorithm) to extract the main color of the tongue image. Considering that the color feature cannot fully express tongue image information, the paper analyzes the tongue edge's texture features and proposes an algorithm to extract them. Then, we integrate the two features in retrieval with different weights. Experimental results show that the proposed method can improve the detection rate of lesions in tongue images relative to single-feature retrieval.
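The main-color extraction can be sketched with a plain Lloyd iteration (k-means over pixel colors); the paper's improved GLA refines this, and the pixel clusters below are synthetic toy data.

```python
import numpy as np

def dominant_colors(pixels, k=2, iters=50, seed=0):
    """Plain Lloyd iteration (k-means) over pixel colors: a minimal
    stand-in for the paper's improved GLA main-color extraction."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(iters):
        d = ((pixels[:, None, :] - centers[None]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                 # assign to nearest center
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

# Toy tongue pixels: a reddish body cluster and a whitish coating cluster.
rng = np.random.default_rng(2)
body = rng.normal([200.0, 60.0, 70.0], 5.0, size=(100, 3))
coating = rng.normal([230.0, 225.0, 215.0], 5.0, size=(50, 3))
pixels = np.vstack([body, coating])
centers, labels = dominant_colors(pixels)
```

The cluster centers serve as the main-color descriptor that is later combined, with a separate weight, with the edge-texture feature.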

  14. The use of web internet technologies to distribute medical images

    International Nuclear Information System (INIS)

    Deller, A.L.; Cheal, D.; Field, J.

    1999-01-01

    Full text: In the past, internet browsers were considered ineffective for image distribution. Today we have the technology to use internet standards for picture archive and communication systems (PACS) and teleradiology effectively. Advanced wavelet compression and state-of-the-art JAVA software allow us to distribute images on normal computer hardware. The use of vendor- and database-neutral software and industry-standard hardware has many advantages. This standards-based approach avoids the costly rapid obsolescence of proprietary PACS and is cheaper to purchase and maintain. Images can be distributed around a hospital site, as well as outside the campus, quickly and inexpensively. It also allows integration between the Hospital Information System (HIS) and the Radiology Information System (RIS). Being able to utilize standard internet technologies and computer hardware for PACS is a cost-effective alternative. A system based on this technology can be used for image distribution, archiving, teleradiology and RIS integration. This can be done without expensive specialized imaging workstations and telecommunication systems. Web distribution of images allows you to send images to multiple places concurrently. A study can be viewed within your Medical Imaging Department, as well as on the ward and on the desktop of referring clinicians, together with a report. As long as there is a computer with an internet access account, high-quality images can be at your disposal 24 h a day. The importance of medical images for patient management makes them a valuable component of the patient's medical record. Therefore, an efficient system for displaying and distributing images can improve patient management and make your workplace more effective.

  15. Color image encryption using random transforms, phase retrieval, chaotic maps, and diffusion

    Science.gov (United States)

    Annaby, M. H.; Rushdi, M. A.; Nehary, E. A.

    2018-04-01

    The recent tremendous proliferation of color imaging applications has been accompanied by growing research in data encryption to secure color images against adversary attacks. While recent color image encryption techniques perform reasonably well, they still exhibit vulnerabilities and deficiencies in terms of statistical security measures due to image data redundancy and inherent weaknesses. This paper proposes two encryption algorithms that largely address these deficiencies and boost the security strength through a novel integration of random fractional Fourier transforms, phase retrieval algorithms, and chaotic scrambling and diffusion. We show through detailed experiments and statistical analysis that the proposed enhancements significantly improve security measures and immunity to attacks.
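The chaotic-scrambling ingredient can be sketched with a logistic-map permutation; the map parameters below act as an illustrative key and are not the paper's scheme, which additionally uses random fractional Fourier transforms, phase retrieval, and diffusion.

```python
import numpy as np

def logistic_permutation(n, x0=0.3779, r=3.99):
    """Pixel-scrambling permutation from the logistic map
    x_{k+1} = r * x_k * (1 - x_k); x0 and r play the role of the key."""
    x, seq = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return np.argsort(seq)    # rank order of the chaotic sequence = permutation

img = np.arange(16, dtype=np.uint8)      # toy flattened "image"
perm = logistic_permutation(img.size)
scrambled = img[perm]                    # confusion step
restored = np.empty_like(img)
restored[perm] = scrambled               # inverse permutation (decryption)
```

Because the permutation is fully determined by (x0, r), the receiver regenerates it from the key and inverts the scrambling exactly.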

  16. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Zhao; Gao, Kun; Chen, Jian; Wang, Dajiang; Wang, Shenghao; Chen, Heng; Bao, Yuan; Shao, Qigang; Wang, Zhili, E-mail: wangnsrl@ustc.edu.cn [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029 (China); Zhang, Kai [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Zhu, Peiping; Wu, Ziyu, E-mail: wuzy@ustc.edu.cn [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029, China and Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2015-02-15

    Purpose: Grating-based x-ray phase contrast imaging is considered one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve the phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered due to its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel, fast, and low-dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.
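The phase-stepping retrieval that the paper compares against can be sketched in a few lines: stepping the grating over one period gives, per pixel, an intensity curve I_k = a + b·cos(2πk/N + φ), and the refraction phase φ is the argument of the first Fourier component. All numbers below are toy values.

```python
import numpy as np

N = 8                                    # phase steps over one grating period
k = np.arange(N)
a, b, phi_true = 100.0, 30.0, 0.7        # mean, modulation, refraction phase
I = a + b * np.cos(2 * np.pi * k / N + phi_true)   # noiseless stepping curve

c1 = np.fft.fft(I)[1]                    # first harmonic of the curve
phi_est = np.angle(c1)                   # recovered refraction phase
amp_est = 2.0 * np.abs(c1) / N           # recovered modulation amplitude b
```

The paper's error-propagation analysis asks how photon noise on each I_k propagates into phi_est, and shows the reverse-projection method reaches a higher SNR at the same dose.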

  17. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Wu, Zhao; Gao, Kun; Chen, Jian; Wang, Dajiang; Wang, Shenghao; Chen, Heng; Bao, Yuan; Shao, Qigang; Wang, Zhili; Zhang, Kai; Zhu, Peiping; Wu, Ziyu

    2015-01-01

    Purpose: Grating-based x-ray phase contrast imaging is considered one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve the phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered due to its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel, fast, and low-dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.

  18. An efficient similarity measure for content based image retrieval using memetic algorithm

    Directory of Open Access Journals (Sweden)

    Mutasem K. Alsmadi

    2017-06-01

    Full Text Available Content-based image retrieval (CBIR) systems work by retrieving images related to the query image (QI) from huge databases. The available CBIR systems extract limited feature sets, which confines retrieval efficacy. In this work, extensive, robust, and important features were extracted from the image database and stored in the feature repository. This feature set is composed of a color signature together with shape and color-texture features; features are extracted from the given QI in a similar fashion. Consequently, a novel similarity evaluation using a meta-heuristic algorithm called a memetic algorithm (a genetic algorithm with great deluge) is performed between the features of the QI and the features of the database images. The proposed CBIR system is assessed by querying a number of images (from the test dataset), and the efficiency of the system is evaluated by calculating precision-recall values for the results. The results were superior to other state-of-the-art CBIR systems in regard to precision.
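The abstract names the memetic combination (genetic algorithm plus great-deluge local search) but gives no implementation. Below is a hypothetical toy sketch of that idea applied to tuning feature weights for a weighted-L1 similarity measure; the data, fitness function, and all parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

db = rng.normal(size=(60, 4))           # toy database: 60 images, 4 features
labels = np.repeat(np.arange(6), 10)    # 6 classes, 10 images each
query, q_label = db[0], labels[0]

def precision_at_10(w):
    """Fitness: precision of the top-10 retrieval under weighted-L1 distance."""
    d = np.abs(db - query) @ np.abs(w)
    top = np.argsort(d)[1:11]           # skip the query itself at rank 0
    return np.mean(labels[top] == q_label)

def great_deluge(w, steps=60):
    """Local refinement: accept a move only if fitness stays above a rising water level."""
    level = precision_at_10(w)
    for _ in range(steps):
        cand = w + rng.normal(scale=0.1, size=w.shape)
        if precision_at_10(cand) >= level:
            w = cand
        level += 0.002                  # the water level slowly rises
    return w

# One memetic generation: pick the two fittest, cross over, mutate, refine locally.
pop = rng.uniform(0.0, 1.0, size=(8, 4))
parents = pop[np.argsort([-precision_at_10(w) for w in pop])[:2]]
child = np.where(rng.random(4) < 0.5, parents[0], parents[1])
child = great_deluge(child + rng.normal(scale=0.05, size=4))
```

A full memetic algorithm would iterate this over many generations and the whole population; the sketch shows only the structure of one step.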

  19. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information

    Science.gov (United States)

    Harwood, A.; Lehmann Miotto, G.; Magnoni, L.; Vandelli, W.; Savu, D.

    2012-06-01

    This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general-purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data sets individually is already in place, there is currently no way to view the data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user-defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises three main levels of the data collection cycle: Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion. The first step of the project was to define a common interface with which to expose stored data. The interface designed for the project originates from the Google Data Protocol API. The idea is to allow read-only access to data providers, through HTTP requests similar in format to the SQL query structure. This provides a standardized way to access these different information sources within ATLAS. Level 1 can be considered the engine of the system. The primary task of Level 1 is to gather data from multiple data sources via the common interface, to correlate these data together, or over a defined time series, and expose the combined data as a whole to the Level 2 web

  20. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information

    International Nuclear Information System (INIS)

    Harwood, A; Miotto, G Lehmann; Magnoni, L; Vandelli, W; Savu, D

    2012-01-01

    This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general-purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data sets individually is already in place, there is currently no way to view the data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user-defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises three main levels of the data collection cycle: Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion. The first step of the project was to define a common interface with which to expose stored data. The interface designed for the project originates from the Google Data Protocol API. The idea is to allow read-only access to data providers, through HTTP requests similar in format to the SQL query structure. This provides a standardized way to access these different information sources within ATLAS. Level 1 can be considered the engine of the system. The primary task of Level 1 is to gather data from multiple data sources via the common interface, to correlate these data together, or over a defined time series, and expose the combined data as a whole to the Level 2 web

  1. Benchmarking, Research, Development, and Support for ORNL Automated Image and Signature Retrieval (AIR/ASR) Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, K.W.

    2004-06-01

    This report describes the results of a Cooperative Research and Development Agreement (CRADA) with Applied Materials, Inc. (AMAT) of Santa Clara, California. This project encompassed the continued development and integration of the ORNL Automated Image Retrieval (AIR) technology, and an extension of the technology denoted Automated Signature Retrieval (ASR), and other related technologies with the Defect Source Identification (DSI) software system that was under development by AMAT at the time this work was performed. In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train yield management engineers, and examine historical data for trends. Image management in semiconductor data systems is a growing cause of concern in the industry as fabricators are now collecting up to 20,000 images each week. In response to this concern, researchers at the Oak Ridge National Laboratory (ORNL) developed a semiconductor-specific content-based image retrieval method and system, also known as AIR. The system uses an image-based query-by-example method to locate and retrieve similar imagery from a database of digital imagery using visual image characteristics. The query method is based on a unique architecture that takes advantage of the statistical, morphological, and structural characteristics of image data, generated by inspection equipment in industrial applications. The system improves the manufacturing process by allowing rapid access to historical records of similar events so that errant process equipment can be isolated and corrective actions can be quickly taken to improve yield. The combined ORNL and AMAT technology is referred to hereafter as DSI-AIR and DSI-ASR.

  2. Optical image encryption based on phase retrieval combined with three-dimensional particle-like distribution

    International Nuclear Information System (INIS)

    Chen, Wen; Chen, Xudong; Sheppard, Colin J R

    2012-01-01

    We propose a new phase retrieval algorithm for optical image encryption in three-dimensional (3D) space. The two-dimensional (2D) plaintext is considered as a series of particles distributed in 3D space, and an iterative phase retrieval algorithm is developed to encrypt the series of particles into phase-only masks. The feasibility and effectiveness of the proposed method are demonstrated by a numerical experiment, and the advantages and security of the proposed optical cryptosystems are also analyzed and discussed. (paper)

  3. Using the fuzzy modeling for the retrieval algorithms

    International Nuclear Information System (INIS)

    Mohamed, A.H

    2010-01-01

    A rapid growth in the number and size of images in databases and on the World Wide Web (WWW) has created a strong need for more efficient search and retrieval systems to exploit the benefits of this large amount of information. However, limitations of current image analysis techniques force most image retrieval systems to rely on some form of text description, provided by users, as the basis for indexing and retrieving images. To overcome this problem, the proposed system introduces the use of fuzzy modeling to describe images in terms of linguistic ambiguities. The proposed system can also include vague or fuzzy terms when modeling queries, to match the image descriptions during the retrieval process. This facilitates indexing and retrieval, increases performance, and decreases computational time. Therefore, the proposed system can improve the performance of traditional image retrieval algorithms.
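The abstract does not specify the fuzzy model, but the core idea — ranking images by the fuzzy membership of a vague linguistic query term — can be sketched minimally. The membership functions, the "mean brightness" attribute, and all values below are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b with support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms defined over a normalized [0, 1] brightness scale.
terms = {
    "dark":   lambda v: tri(v, -0.01, 0.0, 0.5),
    "medium": lambda v: tri(v, 0.2, 0.5, 0.8),
    "bright": lambda v: tri(v, 0.5, 1.0, 1.01),
}

images = {"img1": 0.15, "img2": 0.55, "img3": 0.9}   # mean brightness per image

def fuzzy_retrieve(term):
    """Rank images by membership in the queried linguistic term."""
    scores = {name: terms[term](v) for name, v in images.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(fuzzy_retrieve("dark"))   # img1 ranks first
```

A fuller system would attach several such attributes (texture, color, etc.) to each image and combine memberships with fuzzy AND/OR operators when the query uses multiple vague terms.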

  4. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information.

    CERN Document Server

    Harwood, A; The ATLAS collaboration; Magnoni, L; Vandelli, W; Savu, D

    2011-01-01

    This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general-purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data sets individually is already in place, there is currently no way to view the data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user-defined criteria. Finally, ...

  5. ADAM Project – A generic web interface for retrieval and display of ATLAS TDAQ information.

    CERN Document Server

    Harwood, A; The ATLAS collaboration; Lehmann Miotto, G

    2011-01-01

    This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general-purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data sets individually is already in place, there is currently no way to view the data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple, diversely structured providers. It is capable of aggregating and correlating the data according to user-defined criteria. Finally it v...

  6. Web application for recording learners’ mouse trajectories and retrieving their study logs for data analysis

    Directory of Open Access Journals (Sweden)

    Yoshinori Miyazaki

    2012-03-01

    Full Text Available With the accelerated implementation of e-learning systems in educational institutions, it has become possible in recent years to record learners’ study logs. Little research, however, has been conducted on the analysis of the study logs that are obtained. In addition, there is no software that traces the mouse movements of learners during their learning processes, which the authors believe would enable teachers to better understand their students’ behaviors. The objective of this study is to develop a Web application that records students’ study logs, including their mouse trajectories, and to devise an IR tool that can summarize such diversified data. The results of an experiment are also scrutinized to provide an analysis of the relationship between learners’ activities and their study logs.

  7. Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.

    Science.gov (United States)

    Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang

    2018-09-01

    Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that identifies the approximate locations of objects in an image; this "mask" map is then used to obtain length-limited hash codes that focus on an image's objects while ignoring the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify the approximate locations of image objects; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross-entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over state-of-the-art supervised and unsupervised hashing baselines.
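Part 3 of the architecture, the mask-weighted average pooling, is easy to make concrete. Below is a hypothetical numpy sketch of that single step (the mask would come from the learned sub-network; here it is hand-placed, and all shapes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

features = rng.normal(size=(8, 8, 32))        # H x W x C feature map
mask = np.zeros((8, 8))
mask[2:6, 3:7] = 1.0                          # pretend the objects live here

def masked_average_pool(feat, m, eps=1e-8):
    """Average the feature map only where the mask is active."""
    w = m / (m.sum() + eps)                   # normalize the mask to weights
    return np.tensordot(w, feat, axes=([0, 1], [0, 1]))   # -> (C,) vector

vec = masked_average_pool(features, mask)
print(vec.shape)   # (32,)
```

The pooled vector equals the plain average over the masked region, so background activations contribute nothing; in the paper this vector would then feed the hash-code layer.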

  8. Combining semantic technologies with a content-based image retrieval system - Preliminary considerations

    Science.gov (United States)

    Chmiel, P.; Ganzha, M.; Jaworska, T.; Paprzycki, M.

    2017-10-01

    Nowadays, as part of the systematic growth in the volume and variety of information found on the Internet, we also observe a dramatic increase in the size of available image collections. There are many ways to help users browse and select images of interest. One popular approach is Content-Based Image Retrieval (CBIR) systems, which allow users to search for images that match their interests, expressed in the form of images (query by example). However, we believe that image search and retrieval could take advantage of semantic technologies, and we have decided to test this hypothesis. Specifically, on the basis of knowledge captured in the CBIR, we have developed a domain ontology of residential real estate (detached houses in particular). This allows us to semantically represent each image (and its constitutive architectural elements) represented within the CBIR. The proposed ontology was extended to capture not only the elements resulting from image segmentation but also the "spatial relations" between them. As a result, a new approach to querying the image database (semantic querying) has materialized, thus extending the capabilities of the developed system.

  9. Parallel content-based sub-image retrieval using hierarchical searching.

    Science.gov (United States)

    Yang, Lin; Qi, Xin; Xing, Fuyong; Kurc, Tahsin; Saltz, Joel; Foran, David J

    2014-04-01

    The capacity to systematically search through large image collections and ensembles and detect regions exhibiting similar morphological characteristics is central to pathology diagnosis. Unfortunately, the primary methods used to search digitized, whole-slide histopathology specimens are slow and prone to inter- and intra-observer variability. The central objective of this research was to design, develop, and evaluate a content-based image retrieval system to assist doctors in quick and reliable content-based comparative search of similar prostate image patches. Given a representative image patch (sub-image), the algorithm returns a ranked ensemble of image patches throughout the entire whole-slide histology section that exhibit the most similar morphologic characteristics. This is accomplished by first performing hierarchical searching based on a newly developed hierarchical annular histogram (HAH). The set of candidates is then further refined in the second stage of processing by computing a color histogram from eight equally divided segments within each square annular bin defined in the original HAH. A demand-driven master-worker parallelization approach is employed to speed up the searching procedure. Using this strategy, the query patch is broadcast to all worker processes. Each worker process is dynamically assigned an image by the master process to search for and return a ranked list of similar patches in the image. The algorithm was tested using digitized hematoxylin and eosin (H&E) stained prostate cancer specimens, and we achieved excellent image retrieval performance: the recall rate within the first 40 ranked retrieved image patches is ∼90%. Both the testing data and source code can be downloaded from http://pleiad.umdnj.edu/CBII/Bioinformatics/.
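The "square annular bin" idea behind the HAH descriptor — intensity histograms over concentric square rings around the patch center — can be sketched as follows. This is an illustrative reconstruction, not the paper's code; ring count, bin count, and the toy patch are invented.

```python
import numpy as np

def square_annular_histogram(patch, n_rings=4, n_bins=8):
    """Concatenate per-ring intensity histograms over concentric square annuli."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # Chebyshev distance to the centre gives square (not circular) rings.
    ring = np.maximum(np.abs(yy - cy), np.abs(xx - cx))
    edges = np.linspace(0, ring.max() + 1e-9, n_rings + 1)
    descriptor = []
    for r in range(n_rings):
        sel = (ring >= edges[r]) & (ring < edges[r + 1])
        hist, _ = np.histogram(patch[sel], bins=n_bins, range=(0, 256))
        descriptor.append(hist / max(hist.sum(), 1))   # normalize each ring
    return np.concatenate(descriptor)

patch = np.arange(64, dtype=float).reshape(8, 8) * 4   # toy 8x8 "image"
d = square_annular_histogram(patch)
print(d.shape)   # (32,) = 4 rings x 8 bins
```

The hierarchical search the paper describes compares the innermost rings first and only descends to outer rings for surviving candidates, which is what makes the filtering cheap.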

  10. Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer.

    Science.gov (United States)

    Gutman, David A; Dunn, William D; Cobb, Jake; Stoner, Richard M; Kalpathy-Cramer, Jayashree; Erickson, Bradley

    2014-01-01

    Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a lightweight framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework that wraps around the REST application programming interface (API) to query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser; navigate through projects, experiments, and subjects; and view DICOM images with accompanying metadata, all within a single viewing instance.

  11. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    Science.gov (United States)

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database, to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases with different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images matching the query; no matching case was left out of the series. The postoperative images were then analyzed to check for surgical strategies, and broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be derived from the postoperative images of existing cases without adhering to any classification scheme.

  12. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Science.gov (United States)

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Gao, Yang; Chen, Yang; Feng, Qianjin; Chen, Wufan; Lu, Zhentai

    2014-01-01

    This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.
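Several records in this section report mean average precision (mAP). For readers unfamiliar with the metric, here is a minimal sketch of how it is conventionally computed for CBIR evaluation; the two toy ranked lists are invented.

```python
import numpy as np

def average_precision(relevance):
    """AP for one query: mean of precision values at the ranks of relevant hits.

    relevance: sequence of 1/0 flags for the ranked retrieval list.
    """
    relevance = np.asarray(relevance)
    hits = np.cumsum(relevance)
    ranks = np.arange(1, len(relevance) + 1)
    precisions = hits / ranks
    return precisions[relevance == 1].mean() if relevance.any() else 0.0

# Two toy queries: 1 marks a relevant result at that rank, 0 an irrelevant one.
q1 = [1, 0, 1, 0]   # AP = (1/1 + 2/3) / 2
q2 = [0, 1, 1, 0]   # AP = (1/2 + 2/3) / 2
m_ap = np.mean([average_precision(q) for q in (q1, q2)])
print(round(m_ap, 4))   # 0.7083
```

mAP is simply this per-query AP averaged over all queries, so it rewards systems that place relevant images early in the ranking rather than merely somewhere in the returned set.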

  13. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Directory of Open Access Journals (Sweden)

    Meiyan Huang

    Full Text Available This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.

  14. Towards brain-activity-controlled information retrieval: Decoding image relevance from MEG signals.

    Science.gov (United States)

    Kauppi, Jukka-Pekka; Kandemir, Melih; Saarinen, Veli-Matti; Hirvenkari, Lotta; Parkkonen, Lauri; Klami, Arto; Hari, Riitta; Kaski, Samuel

    2015-05-15

    We hypothesize that brain activity can be used to control future information retrieval systems. To this end, we conducted a feasibility study on predicting the relevance of visual objects from brain activity. We analyze both magnetoencephalographic (MEG) and gaze signals from nine subjects who were viewing image collages, a subset of which was relevant to a predetermined task. We report three findings: i) the relevance of an image a subject looks at can be decoded from MEG signals with performance significantly better than chance, ii) fusion of gaze-based and MEG-based classifiers significantly improves the prediction performance compared to using either signal alone, and iii) non-linear classification of the MEG signals using Gaussian process classifiers outperforms linear classification. These findings break new ground for building brain-activity-based interactive image retrieval systems, as well as for systems utilizing feedback both from brain activity and eye movements. Copyright © 2015 Elsevier Inc. All rights reserved.
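Finding (ii) — that fusing the gaze-based and MEG-based classifiers improves prediction — rests on simple late fusion. A minimal sketch of that idea, with fabricated probabilities standing in for the outputs of the two trained classifiers (the paper's actual fusion scheme may differ):

```python
import numpy as np

p_gaze = np.array([0.9, 0.4, 0.3, 0.8])   # P(relevant) from a gaze classifier
p_meg  = np.array([0.6, 0.7, 0.2, 0.9])   # P(relevant) from an MEG classifier

p_fused = (p_gaze + p_meg) / 2             # simple average late fusion
decision = (p_fused >= 0.5).astype(int)
print(decision.tolist())                    # [1, 1, 0, 1]
```

Note the second image: neither single classifier is confident, but averaging pushes the fused probability over the threshold, which is the kind of complementarity the study reports between the two signal modalities.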

  15. Neighborhood Discriminant Hashing for Large-Scale Image Retrieval.

    Science.gov (United States)

    Tang, Jinhui; Li, Zechao; Wang, Meng; Zhao, Ruizhen

    2015-09-01

    With the proliferation of large-scale community-contributed images, hashing-based approximate nearest neighbor search in huge databases has aroused considerable interest in the computer vision and multimedia fields in recent years because of its computational and memory efficiency. In this paper, we propose a novel hashing method named neighborhood discriminant hashing (NDH) to implement approximate similarity search. In contrast to previous work, we propose to learn a discriminant hashing function by exploiting local discriminative information, i.e., the labels of a sample can be inherited from the neighbor samples it selects. The hashing function is expected to be orthogonal to avoid redundancy in the learned hashing bits as much as possible, while an information-theoretic regularization is jointly exploited using the maximum entropy principle. As a consequence, the learned hashing function is compact and nonredundant among bits, while each bit is highly informative. Extensive experiments are carried out on four publicly available data sets, and the comparison results demonstrate that the proposed NDH method outperforms state-of-the-art hashing techniques.
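The search machinery shared by methods like NDH (though not NDH's training procedure itself) can be sketched generically: project features with a matrix — learned in NDH, random orthogonal in this toy version — binarize with a sign threshold, and rank the database by Hamming distance. All sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

dim, bits = 16, 8
proj, _ = np.linalg.qr(rng.normal(size=(dim, bits)))  # orthogonal projection

def hash_code(x):
    """Binary codes for one or more feature vectors via sign of the projection."""
    return (np.atleast_2d(x) @ proj > 0).astype(np.uint8)

db = rng.normal(size=(100, dim))
codes = hash_code(db)                                 # (100, 8) binary codes

query = db[7]                                         # query with a known match
hamming = (codes != hash_code(query)).sum(axis=1)     # distance to every item
ranking = np.argsort(hamming)
print(hamming[7])                                     # 0: identical code
```

The orthogonality of `proj` mirrors the abstract's requirement that the hashing function avoid redundancy among bits; storing 8-bit codes instead of 16 floats is where the memory efficiency comes from.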

  16. Web based 3-D medical image visualization on the PC.

    Science.gov (United States)

    Kim, N; Lee, D H; Kim, J H; Kim, Y; Cho, H J

    1998-01-01

    With the recent advance of the Web and its associated technologies, information sharing in distributed computing environments has gained a great amount of attention from researchers in many application areas, such as medicine, engineering, and business. One basic requirement of distributed medical consultation systems is that geographically dispersed, disparate participants be allowed to exchange information readily with each other. Such software also needs to be supported on a broad range of computer platforms to increase the software's accessibility. In this paper, the development of a World Wide Web-based medical consultation system for radiology imaging is addressed, providing platform independence and greater accessibility. The system supports sharing of 3-dimensional objects. We use VRML (Virtual Reality Modeling Language), the de facto standard for 3-D modeling on the Web. 3-D objects are reconstructed from CT or MRI volume data in VRML format, which can be viewed and manipulated easily in Web browsers with a VRML plug-in. The marching cubes method is used to transform scanned volume data sets into the polygonal surfaces of VRML, and a decimation algorithm is adopted to reduce the number of meshes in the resulting VRML file. 3-D volume data are often very large, so loading the data on PC-level computers requires a significant reduction of the data size while minimizing the loss of the original shape information; this is also important for decreasing network delays. A prototype system has been implemented (http://cybernet5.snu.ac.kr/-cyber/mrivrml.html), and several sessions of experiments have been carried out.

  17. Conjunctive patches subspace learning with side information for collaborative image retrieval.

    Science.gov (United States)

    Zhang, Lining; Wang, Lipo; Lin, Weisi

    2012-08-01

    Content-Based Image Retrieval (CBIR) has attracted substantial attention during the past few years for its potential practical applications to image management. A variety of Relevance Feedback (RF) schemes have been designed to bridge the semantic gap between the low-level visual features and the high-level semantic concepts for an image retrieval task. Various Collaborative Image Retrieval (CIR) schemes aim to utilize the user historical feedback log data with similar and dissimilar pairwise constraints to improve the performance of a CBIR system. However, existing subspace learning approaches with explicit label information cannot be applied for a CIR task, although the subspace learning techniques play a key role in various computer vision tasks, e.g., face recognition and image classification. In this paper, we propose a novel subspace learning framework, i.e., Conjunctive Patches Subspace Learning (CPSL) with side information, for learning an effective semantic subspace by exploiting the user historical feedback log data for a CIR task. The CPSL can effectively integrate the discriminative information of labeled log images, the geometrical information of labeled log images and the weakly similar information of unlabeled images together to learn a reliable subspace. We formally formulate this problem into a constrained optimization problem and then present a new subspace learning technique to exploit the user historical feedback log data. Extensive experiments on both synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of a CBIR system by exploiting the user historical feedback log data.

  18. Uncertainties in cloud phase and optical thickness retrievals from the Earth Polychromatic Imaging Camera (EPIC).

    Science.gov (United States)

    Meyer, Kerry; Yang, Yuekui; Platnick, Steven

    2016-01-01

    This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud temperature threshold based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (< 2%) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10%, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
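Because EPIC lacks a shortwave-infrared channel, the single-channel COT retrieval described above reduces to inverting a one-dimensional lookup table: with the effective radius fixed per phase, forward-modelled reflectance is monotonic in COT. A hypothetical sketch of that inversion — the "forward model" below is a made-up monotonic curve, not real radiative transfer:

```python
import numpy as np

cot_grid = np.array([0.5, 1, 2, 4, 8, 16, 32, 64, 128])
reflectance_lut = 1 - np.exp(-0.08 * cot_grid)   # toy monotonic forward model

def retrieve_cot(measured_reflectance):
    """Invert the 1-D LUT; np.interp requires the x-table to be increasing."""
    return np.interp(measured_reflectance, reflectance_lut, cot_grid)

r = 1 - np.exp(-0.08 * 8.0)        # reflectance a COT-8 cloud would produce
print(float(retrieve_cot(r)))      # 8.0 (a grid point, so recovered exactly)
```

The flattening of the toy curve at large COT mirrors the real problem: reflectance saturates for thick clouds, so small reflectance errors map to large COT errors there, while the abstract's finding that thin liquid clouds (COT < 2) carry higher error comes from the effective-radius assumption rather than the inversion itself.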

  19. Retrieval of Sentence Sequences for an Image Stream via Coherence Recurrent Convolutional Networks.

    Science.gov (United States)

    Park, Cesc Chunseong; Kim, Youngjin; Kim, Gunhee

    2018-04-01

    We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures of their experiences, much online visual information exists in the form of image streams, for which it is better to take the whole image stream into consideration when producing natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both the input and output dimensions to a sequence of images and a sequence of sentences. For retrieving a coherent flow of multiple sentences for a photo stream, we propose a multimodal neural architecture called the coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional long short-term memory (LSTM) networks, and an entity-based local coherence model. Our approach learns directly from a vast user-generated resource of blog posts as text-image parallel training data. We collect more than 22K unique blog posts with 170K associated images for the travel topics of NYC, Disneyland, Australia, and Hawaii. We demonstrate that our approach outperforms other state-of-the-art image captioning methods for text sequence generation, using both quantitative measures and user studies via Amazon Mechanical Turk.

  20. Single-image phase retrieval using an edge illumination X-ray phase-contrast imaging setup

    Energy Technology Data Exchange (ETDEWEB)

    Diemoz, Paul C., E-mail: p.diemoz@ucl.ac.uk; Vittoria, Fabio A. [University College London, London WC1 E6BT (United Kingdom); Research Complex at Harwell, Oxford Harwell Campus, Didcot OX11 0FA (United Kingdom); Hagen, Charlotte K.; Endrizzi, Marco [University College London, London WC1 E6BT (United Kingdom); Coan, Paola [Ludwig-Maximilians-University, Munich 81377 (Germany); Ludwig-Maximilians-University, Garching 85748 (Germany); Brun, Emmanuel [Ludwig-Maximilians-University, Garching 85748 (Germany); European Synchrotron Radiation Facility, Grenoble 38043 (France); Wagner, Ulrich H.; Rau, Christoph [Diamond Light Source, Harwell Oxford Campus, Didcot OX11 0DE (United Kingdom); Robinson, Ian K. [Research Complex at Harwell, Oxford Harwell Campus, Didcot OX11 0FA (United Kingdom); London Centre for Nanotechnology, London WC1 H0AH (United Kingdom); Bravin, Alberto [European Synchrotron Radiation Facility, Grenoble 38043 (France); Olivo, Alessandro [University College London, London WC1 E6BT (United Kingdom); Research Complex at Harwell, Oxford Harwell Campus, Didcot OX11 0FA (United Kingdom)

    2015-06-25

    A method enabling the retrieval of thickness or projected electron density of a sample from a single input image is derived theoretically and successfully demonstrated on experimental data. A method is proposed which enables the retrieval of the thickness or of the projected electron density of a sample from a single input image acquired with an edge illumination phase-contrast imaging setup. The method assumes the case of a quasi-homogeneous sample, i.e. a sample with a constant ratio between the real and imaginary parts of its complex refractive index. Compared with current methods based on combining two edge illumination images acquired in different configurations of the setup, this new approach presents advantages in terms of simplicity of acquisition procedure and shorter data collection time, which are very important especially for applications such as computed tomography and dynamical imaging. Furthermore, the fact that phase information is directly extracted, instead of its derivative, can enable a simpler image interpretation and be beneficial for subsequent processing such as segmentation. The method is first theoretically derived and its conditions of applicability defined. Quantitative accuracy in the case of homogeneous objects as well as enhanced image quality for the imaging of complex biological samples are demonstrated through experiments at two synchrotron radiation facilities. The large range of applicability, the robustness against noise and the need for only one input image suggest a high potential for investigations in various research subjects.
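
    The quasi-homogeneous assumption (a fixed ratio between the real and imaginary parts of the refractive index) is what makes single-image retrieval well posed. The sketch below illustrates that same assumption via the widely used Paganin-type retrieval for propagation-based imaging, which is a related but different technique from the edge illumination method of this paper; all parameter values are arbitrary:

```python
import numpy as np

def paganin_thickness(I, pixel, dist, delta, beta, wavelength):
    """Recover sample thickness from one intensity image under the
    homogeneous-sample assumption (Paganin-type, propagation-based)."""
    k = 2 * np.pi / wavelength
    mu = 2 * k * beta                       # linear attenuation coefficient
    fx = np.fft.fftfreq(I.shape[1], pixel)  # spatial frequencies
    fy = np.fft.fftfreq(I.shape[0], pixel)
    FX, FY = np.meshgrid(fx, fy)
    # low-pass filter that undoes propagation-induced phase contrast
    denom = 1 + dist * delta / mu * 4 * np.pi**2 * (FX**2 + FY**2)
    filtered = np.fft.ifft2(np.fft.fft2(I) / denom).real
    return -np.log(np.clip(filtered, 1e-12, None)) / mu

I = np.ones((64, 64))  # flat-field image -> zero thickness everywhere
t = paganin_thickness(I, 1e-6, 0.1, 1e-7, 1e-9, 1e-10)
print(np.allclose(t, 0))  # True
```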

  1. Unsupervised symmetrical trademark image retrieval in soccer telecast using wavelet energy and quadtree decomposition

    Science.gov (United States)

    Ong, Swee Khai; Lim, Wee Keong; Soo, Wooi King

    2013-04-01

    A trademark, a distinctive symbol, is used to distinguish the products or services provided by a particular person, group or organization from other similar entities. As a trademark represents the reputation and credit standing of its owner, it is important to differentiate one trademark from another. Many methods have been proposed to identify, classify and retrieve trademarks. However, most methods require a feature database and sample sets for training prior to the recognition and retrieval process. In this paper, a new feature based on wavelet coefficients, the localized wavelet energy, is introduced to extract features of trademarks. With this, unsupervised content-based symmetrical trademark image retrieval is proposed without a database or prior training set. The feature analysis is done by an integration of the proposed localized wavelet energy and a quadtree-decomposed regional symmetrical vector. The proposed framework eliminates the dependence on a query database and human participation during the retrieval process. In this paper, trademarks of soccer game sponsors are the intended trademark category. Video frames from soccer telecasts are extracted and processed for this study. Reasonably good localization and retrieval results are achieved on certain categories of trademarks.
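
    A localized wavelet energy feature can be illustrated with a one-level Haar transform whose detail-band energies are pooled per block. This is a generic stand-in for the paper's feature, computed on an invented test image:

```python
import numpy as np

def haar_energy_blocks(img, block=8):
    """Localized wavelet energy: one-level Haar detail energy per block."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    detail = lh**2 + hl**2 + hh**2
    s = block // 2               # detail bands are half resolution
    H, W = detail.shape[0] // s, detail.shape[1] // s
    return detail[:H*s, :W*s].reshape(H, s, W, s).sum(axis=(1, 3))

img = np.zeros((32, 32)); img[:, 15:] = 1.0   # vertical edge
E = haar_energy_blocks(img)
print(E.shape, float(E.sum()) > 0)  # (4, 4) True
```

Blocks crossing the edge carry nonzero detail energy, while uniform blocks carry none, which is the locality the feature exploits.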

  2. Hurricane Imaging Radiometer (HIRAD) Wind Speed Retrievals and Assessment Using Dropsondes

    Science.gov (United States)

    Cecil, Daniel J.; Biswas, Sayak K.

    2018-01-01

    The Hurricane Imaging Radiometer (HIRAD) is an experimental C-band passive microwave radiometer designed to map the horizontal structure of surface wind speed fields in hurricanes. New data processing and customized retrieval approaches were developed after the 2015 Tropical Cyclone Intensity (TCI) experiment, which featured flights over Hurricanes Patricia, Joaquin, Marty, and the remnants of Tropical Storm Erika. These new approaches produced maps of surface wind speed that looked more realistic than those from previous campaigns. Dropsondes from the High Definition Sounding System (HDSS) that was flown with HIRAD on a WB-57 high altitude aircraft in TCI were used to assess the quality of the HIRAD wind speed retrievals. The root mean square difference between HIRAD-retrieved surface wind speeds and dropsonde-estimated surface wind speeds was 6.0 meters per second. The largest differences between HIRAD and dropsonde winds were from data points where storm motion during dropsonde descent compromised the validity of the comparisons. Accounting for this and for uncertainty in the dropsonde measurements themselves, we estimate the root mean square error for the HIRAD retrievals as around 4.7 meters per second. Prior to the 2015 TCI experiment, HIRAD had previously flown on the WB-57 for missions across Hurricanes Gonzalo (2014), Earl (2010), and Karl (2010). Configuration of the instrument was not identical to the 2015 flights, but the methods devised after the 2015 flights may be applied to that previous data in an attempt to improve retrievals from those cases.
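
    The headline figure above is a root-mean-square difference over matched HIRAD/dropsonde points. As a trivial illustration with made-up wind speeds:

```python
import numpy as np

# Hypothetical HIRAD vs dropsonde surface wind speeds (m/s) at matched points
hirad = np.array([38.0, 52.5, 61.0, 45.2, 33.8])
sonde = np.array([42.1, 49.0, 66.5, 41.0, 30.2])
rmsd = np.sqrt(np.mean((hirad - sonde) ** 2))
print(round(float(rmsd), 2))  # 4.24
```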

  3. Functional imaging of the semantic system: retrieval of sensory-experienced and verbally learned knowledge.

    Science.gov (United States)

    Noppeney, Uta; Price, Cathy J

    2003-01-01

    This paper considers how functional neuro-imaging can be used to investigate the organization of the semantic system and the limitations associated with this technique. The majority of the functional imaging studies of the semantic system have looked for divisions by varying stimulus category. These studies have led to divergent results and no clear anatomical hypotheses have emerged to account for the dissociations seen in behavioral studies. Only a few functional imaging studies have used task as a variable to differentiate the neural correlates of semantic features more directly. We extend these findings by presenting a new study that contrasts tasks that differentially weight sensory (color and taste) and verbally learned (origin) semantic features. Irrespective of the type of semantic feature retrieved, a common semantic system was activated as demonstrated in many previous studies. In addition, the retrieval of verbally learned, but not sensory-experienced, features enhanced activation in medial and lateral posterior parietal areas. We attribute these "verbally learned" effects to differences in retrieval strategy and conclude that evidence for segregation of semantic features at an anatomical level remains weak. We believe that functional imaging has the potential to increase our understanding of the neuronal infrastructure that sustains semantic processing but progress may require multiple experiments until a consistent explanatory framework emerges.

  4. Hurricane Imaging Radiometer Wind Speed and Rain Rate Retrievals during the 2010 GRIP Flight Experiment

    Science.gov (United States)

    Sahawneh, Saleem; Farrar, Spencer; Johnson, James; Jones, W. Linwood; Roberts, Jason; Biswas, Sayak; Cecil, Daniel

    2014-01-01

    Microwave remote sensing observations of hurricanes from NOAA and USAF hurricane surveillance aircraft provide vital data for hurricane research and operations, including forecasting the intensity and track of tropical storms. The current operational standard for hurricane wind speed and rain rate measurements is the Stepped Frequency Microwave Radiometer (SFMR), which is a nadir-viewing passive microwave airborne remote sensor. The Hurricane Imaging Radiometer, HIRAD, will extend the nadir-viewing SFMR capability to provide wide-swath images of wind speed and rain rate while flying on a high-altitude aircraft. HIRAD was first flown in the Genesis and Rapid Intensification Processes (GRIP) NASA hurricane field experiment in 2010. This paper reports on geophysical retrieval results and provides hurricane images from GRIP flights. An overview of the HIRAD instrument and the radiative-transfer-based wind speed/rain rate retrieval algorithm is included. Results are presented for hurricane wind speed and rain rate for Earl and Karl, with comparison to collocated SFMR retrievals and WP-3D fuselage radar images for validation purposes.

  5. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

    Full Text Available Hashing has been widely deployed to perform the Approximate Nearest Neighbor (ANN) search for large-scale image retrieval, to solve the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned from one deep hashing network may not provide a full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code with stronger expression ability and distinction capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks together to unify images. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.
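
    Fusing the codes of two subnetworks and ranking a database by Hamming distance can be sketched in a few lines; the tiny hash codes below are invented, and the fusion shown is plain concatenation:

```python
import numpy as np

def fuse_and_rank(codes_a, codes_b, query_a, query_b):
    """Fuse hash codes from two subnetworks by concatenation and rank
    database images by Hamming distance to the fused query code."""
    db = np.concatenate([codes_a, codes_b], axis=1)   # (n, bits_a + bits_b)
    q = np.concatenate([query_a, query_b])
    dists = np.count_nonzero(db != q, axis=1)         # Hamming distance
    return np.argsort(dists, kind="stable")

codes_a = np.array([[0, 1, 1], [1, 1, 0], [0, 0, 1]], dtype=np.uint8)
codes_b = np.array([[1, 0], [1, 1], [0, 0]], dtype=np.uint8)
order = fuse_and_rank(codes_a, codes_b,
                      np.array([0, 1, 1], dtype=np.uint8),
                      np.array([1, 0], dtype=np.uint8))
print(order.tolist())  # [0, 2, 1]
```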

  6. A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval.

    Science.gov (United States)

    Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev

    2010-01-01

    Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming distance over that representation.
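
    The final distance in such a framework is a weighted Hamming distance over the learned binary representation, with per-bit weights set during boosting. A minimal sketch with invented bits and weights:

```python
import numpy as np

def weighted_hamming(x, y, weights):
    """Distance over binary codes where each bit carries a weight
    learned during boosting (a minimal sketch of the final distance)."""
    return float(np.sum(weights * (x != y)))

x = np.array([1, 0, 1, 1], dtype=np.uint8)
y = np.array([1, 1, 0, 1], dtype=np.uint8)
w = np.array([0.5, 1.5, 2.0, 0.25])   # hypothetical boosted bit weights
print(weighted_hamming(x, y, w))  # 3.5 (bits 1 and 2 differ)
```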

  7. Psychophysical studies of the performance of an image database retrieval system

    Science.gov (United States)

    Papathomas, Thomas V.; Conway, Tiffany E.; Cox, Ingemar J.; Ghosn, Joumana; Miller, Matt L.; Minka, Thomas P.; Yianilos, Peter N.

    1998-07-01

    We describe psychophysical experiments conducted to study PicHunter, a content-based image retrieval (CBIR) system. Experiment 1 studies the importance of using (1) semantic information, (2) memory of earlier input and (3) relative, rather than absolute, judgements of image similarity. The target testing paradigm is used, in which a user must search for an image identical to a target. We find that the best performance comes from a version of PicHunter that uses only semantic cues, with memory and relative similarity judgements. Second best is a version that uses both pictorial and semantic cues, with memory and relative similarity judgements. Most reports of CBIR systems provide only qualitative measures of performance based on how similar retrieved images are to a target. Experiment 2 puts PicHunter into this context with a more rigorous test. We first establish a baseline for our database by measuring the time required to find an image that is similar to a target when the images are presented in random order. Although PicHunter's performance is measurably better than this, the test is weak because even random presentation of images yields reasonably short search times. This casts doubt on the strength of results given in other reports where no baseline is established.
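
    The relative-judgement idea, in which the user indicates which displayed image is closer to the mental target, pairs naturally with a Bayesian belief update over candidate targets. The following toy update is written in the spirit of PicHunter but is not its actual user model; features, sigmoid likelihood and data are invented:

```python
import numpy as np

def update_beliefs(beliefs, feats, shown, picked, temp=1.0):
    """Bayesian update: the user picked, from a displayed pair, the image
    closer to the mental target; reweight each candidate accordingly."""
    other = shown[0] if picked == shown[1] else shown[1]
    d_pick = np.linalg.norm(feats - feats[picked], axis=1)
    d_other = np.linalg.norm(feats - feats[other], axis=1)
    like = 1.0 / (1.0 + np.exp((d_pick - d_other) / temp))  # soft "closer to pick"
    post = beliefs * like
    return post / post.sum()

feats = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [0.2, 0.1]])
beliefs = np.full(4, 0.25)                    # uniform prior over 4 images
beliefs = update_beliefs(beliefs, feats, shown=(1, 2), picked=1)
print(beliefs.argmax() != 2)  # mass moves away from the unpicked image
```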

  8. [Development and evaluation of the medical imaging distribution system with dynamic web application and clustering technology].

    Science.gov (United States)

    Yokohama, Noriya; Tsuchimoto, Tadashi; Oishi, Masamichi; Itou, Katsuya

    2007-01-20

    It has been noted that the downtime of medical informatics systems is often long. Many systems encounter downtimes of hours or even days, which can have a critical effect on daily operations. Such systems remain especially weak in the areas of database and medical imaging data. The scheme design shows the three-layer architecture of the system: application, database, and storage layers. The application layer uses the DICOM protocol (Digital Imaging and Communications in Medicine) and HTTP (Hypertext Transfer Protocol) with AJAX (Asynchronous JavaScript + XML). The database is designed to be decentralized in parallel using cluster technology. Consequently, restoration of the database can be done not only with ease but also with improved retrieval speed. In the storage layer, a network RAID (Redundant Array of Independent Disks) system makes it possible to construct exabyte-scale parallel file systems that exploit distributed storage. Development and evaluation of the test-bed has been successful for medical information data backup and recovery in a network environment. This paper presents a schematic design of the new medical informatics system, covering its recovery capabilities and the dynamic Web application for medical imaging distribution using AJAX.

  9. Coincident Aerosol and H2O Retrievals versus HSI Imager Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Gail P. [National Oceanic and Atmospheric Administration (NOAA), Washington, DC (United States); Cipar, John [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States); Armstrong, Peter S. [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States); van den Bosch, J. [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States)

    2016-05-01

    Two spectrally calibrated tarpaulins (tarps) were co-located at a fixed Global Positioning System (GPS) position on the gravel antenna field at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility’s Southern Great Plains (SGP) site. Their placement was timed to coincide with the overflight of a new hyperspectral imaging satellite. The intention was to provide an analysis of the data obtained, including the measured and retrieved spectral albedos for the calibration tarps. Subsequently, a full suite of retrieved values of H2O column, and the aerosol overburden, were to be compared to those determined by alternate SGP ground truth assets. To the extent possible, the down-looking cloud images would be assessed against the all-sky images. Because cloud contamination above a certain level precludes the inversion processing of the satellite data, coupled with infrequent targeting opportunities, clear-sky conditions were imposed. The SGP site was chosen not only as a target of opportunity for satellite validation, but as perhaps the best coincident field measurement site, as established by DOE’s ARM Facility. The satellite team had every expectation of using the information obtained from the SGP to improve the inversion products for all subsequent satellite images, including the cloud and radiative models and parameterizations and, thereby, the performance assessment for subsequent and historic image collections. Coordinating with the SGP onsite team, four visits, all in 2009, to the Central Facility occurred: • June 6-8 (successful exploratory visit to plan tarp placements, etc.) • July 18-24 (canceled because of forecast for heavy clouds) • Sep 9-12 (ground tarps placed, onset of clouds) • Nov 7-9 (visit ultimately canceled because of weather predictions). As noted, in each instance, any significant overcast prediction precluded image collection from the satellite. Given the long task-scheduling procedures

  10. Optical image encryption using password key based on phase retrieval algorithm

    Science.gov (United States)

    Zhao, Tieyu; Ran, Qiwen; Yuan, Lin; Chi, Yingying; Ma, Jing

    2016-04-01

    A novel optical image encryption system is proposed using password key based on phase retrieval algorithm (PRA). In the encryption process, a shared image is taken as a symmetric key and the plaintext is encoded into the phase-only mask based on the iterative PRA. The linear relationship between the plaintext and ciphertext is broken using the password key, which can resist the known plaintext attack. The symmetric key and the retrieved phase are imported into the input plane and Fourier plane of 4f system during the decryption, respectively, so as to obtain the plaintext on the CCD. Finally, we analyse the key space of the password key, and the results show that the proposed scheme can resist a brute force attack due to the flexibility of the password key.
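
    The iterative PRA step, which encodes the plaintext into a phase-only mask given the shared key amplitude, can be sketched as a Gerchberg-Saxton-style loop. This is a simplified 2f stand-in for the 4f system described, with invented data and orthonormal FFTs so that energies are comparable:

```python
import numpy as np

def encode_phase_mask(plaintext, key, iters=200):
    """Iteratively find a phase-only mask that, combined with the shared
    key amplitude, yields the plaintext amplitude in the Fourier plane."""
    phase = np.zeros_like(plaintext)
    for _ in range(iters):
        F = np.fft.fft2(key * np.exp(1j * phase), norm="ortho")
        F = plaintext * np.exp(1j * np.angle(F))         # impose plaintext amplitude
        phase = np.angle(np.fft.ifft2(F, norm="ortho"))  # impose key amplitude
    return phase

rng = np.random.default_rng(1)
plain = rng.random((32, 32))
key = np.ones((32, 32))
plain *= np.linalg.norm(key) / np.linalg.norm(plain)  # match total energy
mask = encode_phase_mask(plain, key)
recovered = np.abs(np.fft.fft2(key * np.exp(1j * mask), norm="ortho"))
err = np.linalg.norm(recovered - plain) / np.linalg.norm(plain)
print(err < 0.5)  # the loop converges toward the plaintext amplitude
```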

  11. Development of an Aerosol Opacity Retrieval Algorithm for Use with Multi-Angle Land Surface Images

    Science.gov (United States)

    Diner, D.; Paradise, S.; Martonchik, J.

    1994-01-01

    In 1998, the Multi-angle Imaging SpectroRadiometer (MISR) will fly aboard the EOS-AM1 spacecraft. MISR will enable unique methods for retrieving the properties of atmospheric aerosols, by providing global imagery of the Earth at nine viewing angles in four visible and near-IR spectral bands. As part of the MISR algorithm development, theoretical methods of analyzing multi-angle, multi-spectral data are being tested using images acquired by the airborne Advanced Solid-State Array Spectroradiometer (ASAS). In this paper we derive a method to be used over land surfaces for retrieving the change in opacity between spectral bands, which can then be used in conjunction with an aerosol model to derive a bound on absolute opacity.

  12. Retrieval of spruce leaf chlorophyll content from airborne image data using continuum removal and radiative transfer

    Czech Academy of Sciences Publication Activity Database

    Malenovský, Z.; Homolová, L.; Zurita-Milla, R.; Lukeš, Petr; Kaplan, Věroslav; Hanuš, Jan; Gastellu-Etchegory, J.P.; Schaepman, M.E.

    2013-01-01

    Vol. 131, APR (2013), pp. 85-102. ISSN 0034-4257. R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA MŠk(CZ) LM2010007. Institutional support: RVO:67179843. Keywords: Chlorophyll retrieval * Imaging spectroscopy * Continuum removal * Radiative transfer * PROSPECT * DART * Optical indices * Norway spruce * High spatial resolution * AISA. Subject RIV: EH - Ecology, Behaviour. Impact factor: 4.769, year: 2013

  13. Semantics-Based Intelligent Indexing and Retrieval of Digital Images - A Case Study

    Science.gov (United States)

    Osman, Taha; Thakker, Dhavalkumar; Schaefer, Gerald

    The proliferation of digital media has led to a huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they typically rely on statistical analysis of query keyword recurrence in the image annotation or surrounding text. In this chapter we present a semantically enabled image annotation and retrieval engine that is designed to satisfy the requirements of the commercial image collection market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, thus allowing for more intelligent reasoning about the image content and subsequently obtaining a more accurate set of results and a richer set of alternatives matching the original query. We also show how our well-analysed and designed domain ontology contributes to the implicit expansion of user queries, as well as presenting our initial thoughts on exploiting lexical databases for explicit semantic-based query expansion.

  14. Extraction of Lesion-Partitioned Features and Retrieval of Contrast-Enhanced Liver Images

    Directory of Open Access Journals (Sweden)

    Mei Yu

    2012-01-01

    Full Text Available The most critical step in grayscale medical image retrieval systems is feature extraction. Understanding the interrelatedness between the characteristics of lesion images and the corresponding imaging features is crucial for image training, as well as for feature extraction. A feature-extraction algorithm is developed based on the different imaging properties of lesions and on the discrepancy in density between lesions and their surrounding normal liver tissue in triple-phase contrast-enhanced computed tomography (CT) scans. The algorithm mainly includes two processes: (1) distance transformation, which is used to divide the lesion into distinct regions and represents the spatial structure distribution, and (2) representation using a bag of visual words (BoW) based on regions. The evaluation of this system based on the proposed feature-extraction algorithm shows excellent retrieval results for three types of liver lesions visible on triple-phase CT scans. The results show that while single-phase scans achieve average precisions of 81.9%, 80.8%, and 70.2%, dual- and triple-phase scans achieve 86.3% and 88.0%.
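
    Distance-transform partitioning of a lesion into concentric regions can be approximated with iterative erosion. A pure-NumPy sketch on a toy binary mask, not the paper's implementation:

```python
import numpy as np

def distance_rings(mask, n_rings=3):
    """Partition a binary lesion mask into concentric rings by iterative
    erosion -- a simple stand-in for distance-transform partitioning."""
    depth = np.zeros(mask.shape, dtype=int)
    cur = mask.astype(bool)
    d = 0
    while cur.any():
        d += 1
        depth[cur] = d           # final value = erosion depth of each pixel
        padded = np.pad(cur, 1)
        # 4-neighbour erosion: keep pixels whose neighbours are all inside
        cur = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
               & padded[1:-1, :-2] & padded[1:-1, 2:])
    # quantize erosion depth into n_rings labels (1 = boundary ring)
    rings = np.ceil(depth / depth.max() * n_rings).astype(int)
    rings[~mask.astype(bool)] = 0
    return rings

mask = np.zeros((9, 9), dtype=bool); mask[1:8, 1:8] = True  # 7x7 square lesion
r = distance_rings(mask, n_rings=3)
print(sorted(np.unique(r).tolist()))  # [0, 1, 2, 3]
```

Per-ring statistics (or BoW histograms computed ring by ring) then form the region-based representation.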

  15. Region-Based Image Retrieval Using an Object Ontology and Relevance Feedback

    Directory of Open Access Journals (Sweden)

    Kompatsiaris Ioannis

    2004-01-01

    Full Text Available An image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions and endow the indexing and retrieval system with content-based functionalities. Low-level descriptors for the color, position, size, and shape of each region are subsequently extracted. These arithmetic descriptors are automatically associated with appropriate qualitative intermediate-level descriptors, which form a simple vocabulary termed the object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects), each represented by a keyword, and of their relations, in a human-centered fashion. When querying for a specific semantic object (or objects), the intermediate-level descriptor values associated with both the semantic object and all image regions in the collection are initially compared, resulting in the rejection of most image regions as irrelevant. Following that, a relevance feedback mechanism, based on support vector machines and using the low-level descriptors, is invoked to rank the remaining potentially relevant image regions and produce the final query results. Experimental results and comparisons demonstrate, in practice, the effectiveness of our approach.
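
    The association of arithmetic region descriptors with qualitative intermediate-level descriptors can be illustrated with a simple thresholded mapping; the descriptor names and threshold values below are invented for the sketch, not taken from the paper:

```python
# Map an arithmetic region descriptor (mean hue in degrees, region area
# fraction) to qualitative descriptors of the kind an object ontology uses.
def qualitative_descriptors(mean_hue_deg, area_fraction):
    if mean_hue_deg < 40 or mean_hue_deg >= 320:
        color = "red-ish"
    elif mean_hue_deg < 160:
        color = "green-ish"
    else:
        color = "blue-ish"
    size = "large" if area_fraction > 0.25 else "small"
    return {"color": color, "size": size}

print(qualitative_descriptors(200, 0.4))  # {'color': 'blue-ish', 'size': 'large'}
```

A query like "large blue-ish region" can then be matched against these symbolic values before any low-level comparison is made.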

  16. Hybrid phase retrieval algorithm for solving the twin image problem in in-line digital holography

    Science.gov (United States)

    Zhao, Jie; Wang, Dayong; Zhang, Fucai; Wang, Yunxin

    2010-10-01

    In the reconstruction in in-line digital holography, three terms overlap with each other on the image plane: the zero-order term, the real image and the twin image. The unwanted twin image degrades the real image seriously. A hybrid phase retrieval algorithm is presented to address this problem, which combines the advantages of two popular phase retrieval algorithms. One is an improved version of the universal iterative algorithm (UIA), called the phase-flipping-based UIA (PFB-UIA). The key point of this algorithm is to flip the phase of the object iteratively. It is proved that the PFB-UIA is able to find the support of a complicated object. The other is the Fienup algorithm, a well-developed algorithm that uses the support of the object as a constraint during the iteration procedure. Thus, by following the Fienup algorithm immediately after the PFB-UIA, it is possible to produce the amplitude and phase distributions of the object with high fidelity. Preliminary simulation results showed that the proposed algorithm is powerful for solving the twin image problem in in-line digital holography.

  17. Phase retrieval for X-ray in-line phase contrast imaging

    International Nuclear Information System (INIS)

    Scattarella, F.; Bellotti, R.; Tangaro, S.; Gargano, G.; Giannini, C.

    2011-01-01

    A review of the phase retrieval problem in X-ray phase contrast imaging is presented. A simple theoretical framework of Fresnel diffraction imaging by X-rays is introduced. A review of the most important methods for phase retrieval in free-propagation-based X-ray imaging and a new method developed by our collaboration are shown. The proposed algorithm, the Combined Mixed Approach (CMA), is based on a mixed transfer-function and transport-of-intensity approach, and it requires at most an initial approximate estimate of the average phase shift introduced by the object as prior knowledge. The accuracy with which this initial estimate is known determines the convergence speed of the algorithm. The new algorithm is based on the retrieval of both the object phase and its complex conjugate. The results obtained by the algorithm on simulated data have shown that the reconstructed phase maps are characterized by particularly low normalized mean square errors. The algorithm was also tested on noisy experimental phase contrast data, showing good efficiency in recovering phase information and enhancing the visibility of details inside soft tissues.

  18. STUDY COMPARISON OF SVM-, K-NN- AND BACKPROPAGATION-BASED CLASSIFIER FOR IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Muhammad Athoillah

    2015-03-01

    Full Text Available Classification is a method for compiling data systematically according to rules that have been set previously. In recent years, classification methods have been proven to help in many tasks, such as image classification, medical biology, traffic lights, text classification, etc. There are many methods to solve a classification problem, and this variety of methods makes it difficult for researchers to determine which method is best for a problem. This framework is aimed at comparing the ability of classification methods, such as the Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), and Backpropagation, especially in case studies of image retrieval with five categories of image datasets. The results show that K-NN has the best average accuracy, at 82%. It is also the fastest in average computation time, with 17.99 seconds during the retrieval session for all category classes. Backpropagation, however, is the slowest among the three; on average it needed 883 seconds for the training session and 41.7 seconds for the retrieval session.
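
    A plain k-NN classifier of the kind compared in the study fits in a few lines; the two-class data here are synthetic:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Plain k-NN: majority vote among the k nearest training vectors."""
    d = np.linalg.norm(train_X - query, axis=1)  # Euclidean distances
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.3, size=(20, 8))   # class 0 feature vectors
X1 = rng.normal(1.0, 0.3, size=(20, 8))   # class 1 feature vectors
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)
q = np.full(8, 0.9)                        # query near the class-1 cluster
print(knn_predict(X, y, q))  # 1
```

No training session is needed, which is consistent with k-NN's fast overall timing in the comparison above.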

  19. Phase Retrieval Using a Genetic Algorithm on the Systematic Image-Based Optical Alignment Testbed

    Science.gov (United States)

    Taylor, Jaime R.

    2003-01-01

    NASA's Marshall Space Flight Center's Systematic Image-Based Optical Alignment (SIBOA) Testbed was developed to test phase retrieval algorithms and hardware techniques. Individuals working with the facility developed the idea of implementing phase retrieval by separating the determination of the tip/tilt of each mirror from the piston motion (or translation) of each mirror. Presented in this report is an algorithm that determines the optimal phase correction associated only with the piston motion of the mirrors. A description of the phase retrieval problem is first presented. The Systematic Image-Based Optical Alignment (SIBOA) Testbed is then described. A Discrete Fourier Transform (DFT) is necessary to transfer the incoming wavefront (or estimate of phase error) into the spatial frequency domain to compare it with the image. A method for reducing the DFT to seven scalar/matrix multiplications is presented. A genetic algorithm is then used to search for the phase error. The results of this new algorithm on a test problem are presented.
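
    A genetic search over piston-only phase corrections can be sketched by maximizing an image-plane sharpness metric. The three-segment model, fitness function and GA settings below are invented for illustration and are not the SIBOA algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)
true_pistons = np.array([0.3, -0.7, 1.1])      # unknown mirror pistons (rad)

def fitness(pistons):
    # on-axis intensity proxy for 3 interfering segments; max 3.0 when the
    # candidate pistons cancel the true ones up to a global phase
    return np.abs(np.sum(np.exp(1j * (pistons - true_pistons))))

pop = rng.uniform(-np.pi, np.pi, size=(60, 3))  # random initial population
for gen in range(80):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[::-1][:15]]  # selection with elitism
    children = elite[rng.integers(0, 15, 45)] + rng.normal(0, 0.1, (45, 3))
    pop = np.vstack([elite, children])          # elites survive unchanged
best = pop[np.argmax([fitness(p) for p in pop])]
print(fitness(best) > 2.9)   # search closes in on the 3.0 optimum
```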

  20. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    Science.gov (United States)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

    apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general-purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.

  1. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

    Full Text Available The ranking module is an important component of the search process, sorting through relevant pages. Since a collection of Web pages carries additional information inherent in the hyperlink structure of the Web, that information can be represented as a link score and then combined with the content score produced by the usual information retrieval techniques. In this paper we report our studies on a ranking score for Web pages that combines link analysis, PageRank Scoring, with content analysis, Fourier Domain Scoring. Our experiments use a collection of Web pages related to the Statistics subject from Wikipedia, with the objectives of checking the correctness and evaluating the performance of the combined ranking method. Evaluation of PageRank Scoring shows that the highest-scoring pages do not always relate to Statistics. Since links within Wikipedia articles exist so that users are always one click away from more information on any point that has a link attached, it is possible that topics unrelated to Statistics are frequently mentioned in the collection. The combined method shows that giving the link score a weight proportional to the content score of Web pages does affect the retrieval results.
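The combination the study describes, a link score from PageRank blended with a content score, can be sketched as follows. The weight `w`, the toy link graph, and the content scores are illustrative assumptions; the paper's content score comes from Fourier Domain Scoring, which is not reproduced here.

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank over an adjacency dict {page: [outlinks]}.
    Dangling-node mass is simply dropped in this sketch."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            share = pr[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] += d * share
        pr = new
    return pr

def combined_score(content, link, w=0.7):
    """Weighted blend of content score and link score (w is an assumption)."""
    return w * content + (1 - w) * link

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
pr = pagerank(links)
content = {"A": 0.9, "B": 0.4, "C": 0.6}
ranking = sorted(content, key=lambda p: combined_score(content[p], pr[p]),
                 reverse=True)
```

Note how the blend can reorder results: page C, despite mediocre content relevance, is pulled ahead of B by its link score, which mirrors the study's observation that link structure alone can promote off-topic pages.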

  2. Computer-aided diagnosis of mammographic masses using geometric verification-based image retrieval

    Science.gov (United States)

    Li, Qingliang; Shi, Weili; Yang, Huamin; Zhang, Huimao; Li, Guoxin; Chen, Tao; Mori, Kensaku; Jiang, Zhengang

    2017-03-01

    Computer-Aided Diagnosis of masses in mammograms is an important indicator of breast cancer. The use of retrieval systems in breast examination is increasing gradually. In this respect, methods exploiting the vocabulary tree framework and the inverted file in mammographic mass retrieval have been proven to offer high accuracy and excellent scalability. However, they consider the features in each image only as visual words and ignore the spatial configurations of the features, which greatly affects the retrieval performance. To overcome this drawback, we introduce a geometric verification method for the retrieval of mammographic masses. First of all, we obtain corresponding matched features based on the vocabulary tree framework and the inverted file. After that, we capture the local similarity characteristic of deformations by constructing circle regions around corresponding pairs. Meanwhile, we segment each circle to express the geometric relationship of the local matches in the area and generate the spatial encoding strictly. Finally, we judge whether the matched features are correct or not by verifying whether all spatial encodings satisfy the geometric consistency. Experiments show the promising results of our approach.

  3. Target-oriented retrieval of subsurface wave fields - Pushing the resolution limits in seismic imaging

    Science.gov (United States)

    Vasconcelos, Ivan; Ozmen, Neslihan; van der Neut, Joost; Cui, Tianci

    2017-04-01

    Travelling wide-bandwidth seismic waves have long been used as a primary tool in exploration seismology because they can probe the subsurface over large distances, while retaining relatively high spatial resolution. The well-known Born resolution limit often seems to be the lower bound on spatial imaging resolution in real-life examples. In practice, data acquisition cost, time constraints and other factors can worsen the resolution achieved by wavefield imaging. Could we obtain images whose resolution beats the Born limits? Would it be practical to achieve it, and what are we missing today to achieve this? In this talk, we will cover aspects of linear and nonlinear seismic imaging to understand elements that play a role in obtaining "super-resolved" seismic images. New redatuming techniques, such as the Marchenko method, enable the retrieval of subsurface fields that include multiple scattering interactions, while requiring relatively little knowledge of model parameters. Together with new concepts in imaging, such as Target-Enclosing Extended Images, these new redatuming methods enable new targeted imaging frameworks. We will make a case as to why target-oriented approaches to reconstructing subsurface-domain wavefields from surface data may help in increasing the resolving power of seismic imaging, and in pushing the limits on parameter estimation. We will illustrate this using a field data example. Finally, we will draw connections between seismic and other imaging modalities, and discuss how this framework could be put to use in other applications.

  4. A novel method for efficient archiving and retrieval of biomedical images using MPEG-7

    Science.gov (United States)

    Meyer, Joerg; Pahwa, Ash

    2004-10-01

    Digital archiving and efficient retrieval of radiological scans have become critical steps in contemporary medical diagnostics. Since more and more images and image sequences (single scans or video) from various modalities (CT/MRI/PET/digital X-ray) are now available in digital formats (e.g., DICOM-3), hospitals and radiology clinics need to implement efficient protocols capable of managing the enormous amounts of data generated daily in a typical clinical routine. We present a method that appears to be a viable way to eliminate the tedious step of manually annotating image and video material for database indexing. MPEG-7 is a new framework that standardizes the way images are characterized in terms of color, shape, and other abstract, content-related criteria. A set of standardized descriptors that are automatically generated from an image is used to compare an image to other images in a database, and to compute the distance between two images for a given application domain. Text-based database queries can be replaced with image-based queries using MPEG-7. Consequently, image queries can be conducted without any prior knowledge of the keys that were used as indices in the database. Since the decoding and matching steps are not part of the MPEG-7 standard, this method also enables searches that were not planned at the time the keys were generated.
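The descriptor-distance idea behind such query-by-example retrieval can be sketched with a coarse color histogram. This is only a stand-in: MPEG-7 defines its own visual descriptors (e.g., Scalable Color, Dominant Color), and the pixel data and database names below are made up for illustration.

```python
def color_histogram(pixels, bins=4):
    """A coarse joint RGB histogram standing in for an automatically
    generated content descriptor (not an actual MPEG-7 descriptor)."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins) // 256) * bins * bins + ((g * bins) // 256) * bins \
              + (b * bins) // 256
        hist[idx] += 1
    total = sum(hist)
    return [h / total for h in hist]

def l1_distance(h1, h2):
    """City-block distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Query-by-example: no text keys are involved, only descriptor distance.
query = color_histogram([(250, 10, 10)] * 8 + [(10, 10, 250)] * 2)
database = {
    "mostly_red":  color_histogram([(240, 20, 20)] * 9 + [(20, 20, 240)] * 1),
    "mostly_blue": color_histogram([(20, 20, 240)] * 8 + [(240, 20, 20)] * 2),
}
best = min(database, key=lambda name: l1_distance(query, database[name]))
```

Because descriptors are computed from the image itself, the query needs no knowledge of whatever keys the archive was originally indexed with, which is the point the abstract makes.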

  5. Segmentation Technique for Image Indexing and Retrieval on Discrete Cosines Domain

    Directory of Open Access Journals (Sweden)

    Suhendro Yusuf Irianto

    2013-03-01

    Full Text Available This paper uses a region growing segmentation technique to segment the Discrete Cosine (DC) image. The problem of Content-Based Image Retrieval (CBIR) is the lack of accuracy in matching between the image query and the images in the database, as it matches object and background at the same time. This is the reason previous CBIR techniques are inaccurate and time consuming. The CBIR based on segmented regions proposed in this work separates the object from the background, as CBIR needs only to match the object, not the background. Using the region growing technique on the DC image reduces the number of image regions. The proposed recursive region growing is not a new technique, but its application on DC images to build indexing keys is quite new and not yet presented by many authors. The experimental results show that the proposed methods on segmented images achieve good precision, higher than 0.60 on all classes. It can be concluded that region-growing-segmented CBIR is more efficient than DC-image CBIR in terms of precision, 0.75 versus 0.59 respectively. Moreover, DC-based CBIR can save time and simplify the algorithm compared to DCT images.
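Region growing of the kind the paper applies can be sketched on a small intensity grid; growing from a seed, it absorbs 4-connected neighbors with similar values, which is how object and background end up in separate regions. The grid values and threshold below are illustrative assumptions.

```python
def region_grow(img, seed, thresh=10):
    """Grow a region from `seed` by adding 4-neighbors whose intensity is
    within `thresh` of the seed value (the classic recursive scheme,
    written iteratively to avoid recursion limits)."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < h and 0 <= c < w):
            continue
        if abs(img[r][c] - seed_val) <= thresh:
            region.add((r, c))
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

# A bright "object" (left columns) against a dark "background" (right).
img = [
    [200, 200,  50,  50],
    [200, 210,  50,  40],
    [195, 205,  45,  50],
]
obj = region_grow(img, (0, 0))  # grows over the bright object only
```

Only the object pixels survive, so a retrieval index built from this region never has to match against background content.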

  6. Plant leaf chlorophyll content retrieval based on a field imaging spectroscopy system.

    Science.gov (United States)

    Liu, Bo; Yue, Yue-Min; Li, Ru; Shen, Wen-Jing; Wang, Ke-Lin

    2014-10-23

    A field imaging spectrometer system (FISS; 380-870 nm and 344 bands) was designed for agriculture applications. In this study, FISS was used to gather spectral information from soybean leaves. The chlorophyll content was retrieved using a multiple linear regression (MLR), partial least squares (PLS) regression and support vector machine (SVM) regression. Our objective was to verify the performance of FISS in a quantitative spectral analysis through the estimation of chlorophyll content and to determine a proper quantitative spectral analysis method for processing FISS data. The results revealed that the derivative reflectance was a more sensitive indicator of chlorophyll content and could extract content information more efficiently than the spectral reflectance, which is more significant for FISS data compared to ASD (analytical spectral devices) data, reducing the corresponding RMSE (root mean squared error) by 3.3%-35.6%. Compared with the spectral features, the regression methods had smaller effects on the retrieval accuracy. A multivariate linear model could be the ideal model to retrieve chlorophyll information with a small number of significant wavelengths used. The smallest RMSE of the chlorophyll content retrieved using FISS data was 0.201 mg/g, a relative reduction of more than 30% compared with the RMSE based on a non-imaging ASD spectrometer, which represents a high estimation accuracy compared with the mean chlorophyll content of the sampled leaves (4.05 mg/g). Our study indicates that FISS could obtain both spectral and spatial detailed information of high quality. Its image-spectrum-in-one merit promotes the good performance of FISS in quantitative spectral analyses, and it can potentially be widely used in the agricultural sector.
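The two key ingredients of this retrieval, the derivative reflectance feature and a linear regression model, can be sketched in pure Python. The single-predictor least squares below is the simplest analogue of the paper's multiple linear regression; the wavelengths, reflectances, and calibration pairs are hypothetical.

```python
def first_derivative(reflectance, wavelengths):
    """Finite-difference derivative spectrum; the paper found derivative
    reflectance to be a more sensitive chlorophyll indicator than raw bands."""
    return [(reflectance[i + 1] - reflectance[i])
            / (wavelengths[i + 1] - wavelengths[i])
            for i in range(len(reflectance) - 1)]

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b, the single-band analogue of
    the multiple linear regression (MLR) used for chlorophyll retrieval."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical red-edge reflectances (unitless) at 10 nm spacing.
wavelengths = [700, 710, 720, 730]
reflectance = [0.10, 0.14, 0.18, 0.22]
deriv = first_derivative(reflectance, wavelengths)

# Hypothetical calibration: derivative feature vs. chlorophyll (mg/g).
a, b = fit_linear([0.002, 0.004, 0.006], [1.0, 2.0, 3.0])
```

With the model fitted per pixel of the FISS image cube, the same regression yields a spatial chlorophyll map, which is the "image-spectrum-in-one" advantage the abstract highlights.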

  7. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze

    2017-04-24

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, up to now, there is no existing similarity learning method proposed to optimize the top precision measure. To fill this gap, in this paper, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem with an objective function as the combination of the losses of the relevant images ranked behind the top-ranked irrelevant image, and the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. The experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.
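The objective described here can be illustrated with a simplified hinge-style penalty on relevant images that fall behind the top-ranked irrelevant one. The margin form below is an assumption for illustration; the paper solves the full problem as a quadratic program with a squared-Frobenius-norm regularizer on the similarity parameters.

```python
def top_precision_loss(scores, relevant, margin=1.0):
    """Sum of hinge penalties for relevant images whose similarity score
    falls behind the top-ranked irrelevant image (illustrative sketch of
    the paper's loss, not its exact QP formulation)."""
    top_irrelevant = max(s for img, s in scores.items() if img not in relevant)
    return sum(max(0.0, margin - (scores[img] - top_irrelevant))
               for img in relevant)

# Hypothetical similarity scores of database images against one query.
scores = {"rel1": 2.0, "rel2": 0.5, "irr1": 1.0, "irr2": 0.2}
loss = top_precision_loss(scores, relevant={"rel1", "rel2"})
```

The loss is zero exactly when every relevant image clears the best irrelevant image by the margin, i.e. when the top of the ranking is all-relevant, which is what the top precision measure rewards.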

  8. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze; Shi, Lihui; Wang, Haoxiang; Meng, Jiandong; Wang, Jim Jing-Yan; Sun, Qingquan; Gu, Yi

    2017-01-01

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, up to now, there is no existing similarity learning method proposed to optimize the top precision measure. To fill this gap, in this paper, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem with an objective function as the combination of the losses of the relevant images ranked behind the top-ranked irrelevant image, and the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. The experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.

  9. Leveraging Web Services in Providing Efficient Discovery, Retrieval, and Integration of NASA-Sponsored Observations and Predictions

    Science.gov (United States)

    Bambacus, M.; Alameh, N.; Cole, M.

    2006-12-01

    The Applied Sciences Program at NASA focuses on extending the results of NASA's Earth-Sun system science research beyond the science and research communities to contribute to national priority applications with societal benefits. By employing a systems engineering approach, supporting interoperable data discovery and access, and developing partnerships with federal agencies and national organizations, the Applied Sciences Program facilitates the transition from research to operations in national applications. In particular, the Applied Sciences Program identifies twelve national applications, listed at http://science.hq.nasa.gov/earth-sun/applications/, which can be best served by the results of NASA aerospace research and development of science and technologies. The ability to use and integrate NASA data and science results into these national applications results in enhanced decision support and significant socio-economic benefits for each of the applications. This paper focuses on leveraging the power of interoperability and specifically open standard interfaces in providing efficient discovery, retrieval, and integration of NASA's science research results. Interoperability (the ability to access multiple, heterogeneous geoprocessing environments, either local or remote by means of open and standard software interfaces) can significantly increase the value of NASA-related data by increasing the opportunities to discover, access and integrate that data in the twelve identified national applications (particularly in non-traditional settings). Furthermore, access to data, observations, and analytical models from diverse sources can facilitate interdisciplinary and exploratory research and analysis. To streamline this process, the NASA GeoSciences Interoperability Office (GIO) is developing the NASA Earth-Sun System Gateway (ESG) to enable access to remote geospatial data, imagery, models, and visualizations through open, standard web protocols. 
The gateway (online

  10. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser.

    Science.gov (United States)

    Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R

    2012-01-01

    Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".

  11. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser

    Directory of Open Access Journals (Sweden)

    Jonas S Almeida

    2012-01-01

    Full Text Available Background: Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. Materials and Methods: ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Results: Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. Conclusions: The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without

  12. Using a web-based image quality assurance reporting system to improve image quality.

    Science.gov (United States)

    Czuczman, Gregory J; Pomerantz, Stuart R; Alkasab, Tarik K; Huang, Ambrose J

    2013-08-01

    The purpose of this study is to show the impact of a web-based image quality assurance reporting system on the rates of three common image quality errors at our institution. A web-based image quality assurance reporting system was developed and used beginning in April 2009. Image quality endpoints were assessed immediately before deployment (period 1), approximately 18 months after deployment of a prototype reporting system (period 2), and approximately 12 months after deployment of a subsequent upgraded department-wide reporting system (period 3). A total of 3067 axillary shoulder radiographs were reviewed for correct orientation, 355 shoulder CT scans were reviewed for correct reformatting of coronal and sagittal images, and 346 sacral MRI scans were reviewed for correct acquisition plane of axial images. Error rates for each review period were calculated and compared using the Fisher exact test. Error rates of axillary shoulder radiograph orientation were 35.9%, 7.2%, and 10.0%, respectively, for the three review periods. The decrease in error rate between periods 1 and 2 was statistically significant (p < 0.0001). Error rates of shoulder CT reformats were 9.8%, 2.7%, and 5.8%, respectively, for the three review periods. The decrease in error rate between periods 1 and 2 was statistically significant (p = 0.03). Error rates for sacral MRI axial sequences were 96.5%, 32.5%, and 3.4%, respectively, for the three review periods. The decrease in error rates between periods 1 and 2 and between periods 2 and 3 was statistically significant (p < 0.0001). A web-based system for reporting image quality errors may be effective for improving image quality.

  13. Unified modeling language and design of a case-based retrieval system in medical imaging.

    Science.gov (United States)

    LeBozec, C; Jaulent, M C; Zapletal, E; Degoulet, P

    1998-01-01

    One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case-base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM--Images and Diagnosis from Examples in Medicine--a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspect of this approach was selecting the relevant objects of the system according to user requirements and enabling visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required, but UML seems to be a promising formalism for improving the communication between developers and users.

  14. Image encryption using fingerprint as key based on phase retrieval algorithm and public key cryptography

    Science.gov (United States)

    Zhao, Tieyu; Ran, Qiwen; Yuan, Lin; Chi, Yingying; Ma, Jing

    2015-09-01

    In this paper, a novel image encryption system with fingerprint used as a secret key is proposed based on the phase retrieval algorithm and RSA public key algorithm. In the system, the encryption keys include the fingerprint and the public key of RSA algorithm, while the decryption keys are the fingerprint and the private key of RSA algorithm. If the users share the fingerprint, then the system will meet the basic agreement of asymmetric cryptography. The system is also applicable for the information authentication. The fingerprint as secret key is used in both the encryption and decryption processes so that the receiver can identify the authenticity of the ciphertext by using the fingerprint in decryption process. Finally, the simulation results show the validity of the encryption scheme and the high robustness against attacks based on the phase retrieval technique.

  15. Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Weixun Zhou

    2017-05-01

    Full Text Available Learning powerful feature representations for image retrieval has always been a challenging task in the field of remote sensing. Traditional methods focus on extracting low-level hand-crafted features which are not only time-consuming but also tend to achieve unsatisfactory performance due to the complexity of remote sensing images. In this paper, we investigate how to extract deep feature representations based on convolutional neural networks (CNNs for high-resolution remote sensing image retrieval (HRRSIR. To this end, several effective schemes are proposed to generate powerful feature representations for HRRSIR. In the first scheme, a CNN pre-trained on a different problem is treated as a feature extractor since there are no sufficiently-sized remote sensing datasets to train a CNN from scratch. In the second scheme, we investigate learning features that are specific to our problem by first fine-tuning the pre-trained CNN on a remote sensing dataset and then proposing a novel CNN architecture based on convolutional layers and a three-layer perceptron. The novel CNN has fewer parameters than the pre-trained and fine-tuned CNNs and can learn low dimensional features from limited labelled images. The schemes are evaluated on several challenging, publicly available datasets. The results indicate that the proposed schemes, particularly the novel CNN, achieve state-of-the-art performance.

  16. Annotating image ROIs with text descriptions for multimodal biomedical document retrieval

    Science.gov (United States)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Regions of interest (ROIs) that are pointed to by overlaid markers (arrows, asterisks, etc.) in biomedical images are expected to contain more important and relevant information than other regions for biomedical article indexing and retrieval. We have developed several algorithms that localize and extract the ROIs by recognizing markers on images. Cropped ROIs then need to be annotated with contents describing them best. In most cases accurate textual descriptions of the ROIs can be found in figure captions, and these need to be combined with image ROIs for annotation. The annotated ROIs can then be used to, for example, train classifiers that separate ROIs into known categories (medical concepts), or to build visual ontologies, for indexing and retrieval of biomedical articles. We propose an algorithm that pairs visual and textual ROIs that are extracted from images and figure captions, respectively. This algorithm, based on dynamic time warping (DTW), clusters recognized pointers into groups, each of which contains pointers with identical visual properties (shape, size, color, etc.). Then a rule-based matching algorithm finds the best matching group for each textual ROI mention. Our method yields a precision and recall of 96% and 79%, respectively, when ground truth textual ROI data is used.
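The DTW similarity underlying the pointer clustering step can be sketched with the standard dynamic-programming recurrence. The scalar sequences below are a simplified stand-in; the paper compares pointers by their visual properties, not raw scalars.

```python
def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D feature sequences:
    the minimum cumulative |a_i - b_j| cost over all monotone alignments."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

same_shape = dtw([1, 2, 3], [1, 2, 2, 3])   # 0.0: same shape, stretched
diff_shape = dtw([1, 2, 3], [4, 5, 6])      # large: different profiles
```

Because DTW tolerates stretching, two pointers of the same shape but different sizes still land in the same cluster, which is what the grouping step needs.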

  17. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting

    KAUST Repository

    Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin

    2011-01-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights. © 2011 IEEE.
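The tf-idf baseline that this boosting-based weighting is compared against can be sketched directly over bags of visual-word counts; the vocabulary ids and counts below are hypothetical.

```python
import math

def tfidf_weights(bags):
    """Term-frequency / inverse-document-frequency weights for visual words:
    the classic bag-of-features baseline the paper's boosted weighting is
    reported to outperform."""
    n_docs = len(bags)
    df = {}  # document frequency of each visual word
    for bag in bags:
        for w in bag:
            df[w] = df.get(w, 0) + 1
    return [
        {w: (count / sum(bag.values())) * math.log(n_docs / df[w])
         for w, count in bag.items()}
        for bag in bags
    ]

# Each "image" is a bag of visual-word counts (hypothetical vocabulary ids).
bags = [{"w1": 1, "w2": 1}, {"w1": 3, "w3": 2}, {"w3": 5}]
weights = tfidf_weights(bags)
```

Here "w1" occurs in two of three images, so in the first image it is down-weighted relative to the rarer "w2"; the paper's contribution is to learn such per-word weights discriminatively instead of from frequencies alone.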

  18. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting

    KAUST Repository

    Wang, Jingyan

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights. © 2011 IEEE.

  19. Research and implementation of a Web-based remote desktop image monitoring system

    International Nuclear Information System (INIS)

    Ren Weijuan; Li Luofeng; Wang Chunhong

    2010-01-01

    We studied and implemented a Web-based ISS (Image Snapshot Server) system using Java Web technology. The ISS system consists of a client web browser and a server; the server comprises three modules: the screen-capture software, the web server and an Oracle database. The screen-capture software captures the desktop of the remotely monitored PC and sends the images to a Tomcat web server for real-time display on the web; the images are also saved in the Oracle database. Through the web browser, monitoring personnel can view both real-time and historical desktop images of the monitored PC over a given period, making it very convenient for any user to monitor the desktop image of a remote PC. (authors)

  20. A robust pointer segmentation in biomedical images toward building a visual ontology for biomedical article retrieval

    Science.gov (United States)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Pointers (arrows and symbols) are frequently used in biomedical images to highlight specific image regions of interest (ROIs) that are mentioned in figure captions and/or text discussion. Detection of pointers is the first step toward extracting relevant visual features from ROIs and combining them with textual descriptions for a multimodal (text and image) biomedical article retrieval system. Recently, we developed a pointer recognition algorithm based on an edge-based pointer segmentation method, and subsequently reported improvements to our initial approach involving the use of Active Shape Models (ASM) for pointer recognition and a region-growing-based method for pointer segmentation. These methods contributed to improving the recall of pointer recognition but not much to the precision. The method discussed in this article is our recent effort to improve the precision rate. Evaluation performed on two datasets and comparison with other pointer segmentation methods show significantly improved precision and the highest F1 score.

  1. Simultaneous optical image compression and encryption using error-reduction phase retrieval algorithm

    International Nuclear Information System (INIS)

    Liu, Wei; Liu, Shutian; Liu, Zhengjun

    2015-01-01

    We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing the error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext while its phase serves as the decryption key. Therefore the compression and encryption are completed simultaneously without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal. (paper)
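
    The error-reduction core of such a scheme is the classic Gerchberg-Saxton iteration. The sketch below shows that core only, with a single FFT standing in for the paper's cascaded diffractive propagation; the two-plane setup and all names are illustrative assumptions, not the authors' optical system.

```python
import numpy as np

def error_reduction(source_mag, target_mag, n_iter=200, seed=0):
    """Classic error-reduction (Gerchberg-Saxton) phase retrieval.

    Iterates between two planes linked by an FFT (a stand-in for
    diffractive propagation), enforcing the known magnitude in each
    plane while keeping the current phase estimate.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, source_mag.shape)
    field = source_mag * np.exp(1j * phase)
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_mag * np.exp(1j * np.angle(far))    # Fourier-plane constraint
        field = np.fft.ifft2(far)
        field = source_mag * np.exp(1j * np.angle(field))  # object-plane constraint
    return field

def magnitude_error(field, target_mag):
    """RMS mismatch between the propagated magnitude and the target."""
    return np.sqrt(np.mean((np.abs(np.fft.fft2(field)) - target_mag) ** 2))
```

The error-reduction algorithm is known to be non-increasing in this error measure, which is why it can drive the output toward a consistent magnitude/phase pair.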

  2. Attention-based image similarity measure with application to content-based information retrieval

    Science.gov (United States)

    Stentiford, Fred W. M.

    2003-01-01

    Whilst storage and capture technologies are able to cope with huge numbers of images, the difficulty of access threatens to render many repositories valueless. This paper proposes a similarity measure that imposes only very weak assumptions on the nature of the features used in the recognition process. This approach does not make use of a pre-defined set of feature measurements which are extracted from a query image and used to match those from database images, but instead generates features on a trial and error basis during the calculation of the similarity measure. This has the significant advantage that features that determine similarity can match whatever image property is important in a particular region whether it be a shape, a texture, a colour or a combination of all three. It means that effort is expended searching for the best feature for the region rather than expecting that a fixed feature set will perform optimally over the whole area of an image and over every image in a database. The similarity measure is evaluated on a problem of distinguishing similar shapes in sets of black and white symbols.

  3. Fast DCNN based on FWT, intelligent dropout and layer skipping for image retrieval.

    Science.gov (United States)

    ElAdel, Asma; Zaied, Mourad; Amar, Chokri Ben

    2017-11-01

    Deep Convolutional Neural Network (DCNN) can be marked as a powerful tool for object and image classification and retrieval. However, the training stage of such networks is highly demanding in terms of storage space and time, and the optimization is still a challenging subject. In this paper, we propose a fast DCNN based on the Fast Wavelet Transform (FWT), intelligent dropout and layer skipping. The proposed approach improves image retrieval accuracy as well as search time. This is possible thanks to three key advantages: First, features are computed rapidly using the FWT. Second, the proposed intelligent dropout method selects units according to whether they are efficient, rather than at random. Third, an image can be classified using the efficient units of earlier layer(s), skipping all subsequent hidden layers and connecting directly to the output layer. Our experiments were performed on the CIFAR-10 and MNIST datasets and the obtained results are very promising. Copyright © 2017 Elsevier Ltd. All rights reserved.
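
    The FWT building block can be sketched with the simplest filter pair, the Haar wavelet. The abstract does not commit to a particular wavelet; Haar is chosen here purely for brevity.

```python
import numpy as np

def haar_fwt(x, levels=1):
    """One-dimensional fast wavelet transform with the Haar filter pair.

    Each level splits the current approximation into orthonormal
    low-pass (averages) and high-pass (details) halves of half the
    length. Details are returned finest level first. The input length
    must be divisible by 2**levels.
    """
    approx = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # high-pass coefficients
        approx = (even + odd) / np.sqrt(2.0)          # low-pass coefficients
    return approx, details
```

Because the filters are orthonormal, the transform preserves signal energy, which makes the coefficients usable as features without rescaling.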

  4. INFLUENCE OF THE VIEWING GEOMETRY WITHIN HYPERSPECTRAL IMAGES RETRIEVED FROM UAV SNAPSHOT CAMERAS

    Directory of Open Access Journals (Sweden)

    H. Aasen

    2016-06-01

    Full Text Available Hyperspectral data has great potential for vegetation parameter retrieval. However, due to angular effects resulting from different sun-surface-sensor geometries, objects might appear differently depending on their position within the field of view of a sensor. Recently, lightweight snapshot cameras have been introduced, which capture hyperspectral information in two spatial and one spectral dimension and can be mounted on unmanned aerial vehicles. This study investigates the influence of the different viewing geometries within an image on the apparent hyperspectral reflection retrieved by these sensors. Additionally, it is evaluated how hyperspectral vegetation indices like the NDVI are affected by the angular effects within a single image and whether the viewing geometry influences the apparent heterogeneity within an area of interest. The study is carried out for a barley canopy at booting stage. The results show significant influences of the position of the area of interest within the image. The red region of the spectrum is more influenced by the position than the near infrared. The ability of the NDVI to compensate for these effects was limited to capturing positions close to nadir. The apparent heterogeneity of the area of interest is highest close to nadir.
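
    The NDVI referred to above is a simple per-pixel combination of the red and near-infrared bands; the small epsilon guard below is an implementation detail added here, not part of the index definition.

```python
import numpy as np

def ndvi(red, nir, eps=1e-12):
    """Normalized Difference Vegetation Index, computed per pixel.

    NDVI = (NIR - Red) / (NIR + Red); eps guards against division by
    zero over completely dark pixels.
    """
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Because NDVI is a band ratio, it cancels multiplicative brightness changes, which is exactly why the study asks how far it can compensate angular effects.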

  5. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise: throwing them away using feature selection is better than compressing them together with useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
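
    A minimal sketch of dimension selection followed by 1-bit quantization. Variance is used here as an unsupervised importance proxy; the paper's actual importance sorting algorithm is more elaborate, so treat this as illustrative only.

```python
import numpy as np

def select_and_binarize(X, n_keep):
    """Unsupervised dimension selection plus 1-bit quantization.

    Rank FV/VLAD dimensions by their variance over the data, keep the
    n_keep most variable ones, and quantize each kept value to a single
    bit by comparing it against the dimension mean.
    """
    X = np.asarray(X, dtype=float)
    importance = X.var(axis=0)
    keep = np.argsort(importance)[::-1][:n_keep]     # most variable dims
    Xs = X[:, keep]
    bits = (Xs > Xs.mean(axis=0)).astype(np.uint8)   # 1-bit codes
    return keep, bits
```

Discarding low-variance dimensions entirely (rather than compressing them along with useful ones) is the design choice the abstract argues for.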

  6. Moderate Resolution Imaging Spectroradiometer (MODIS) Aerosol Optical Depth Retrieval for Aerosol Radiative Forcing

    Science.gov (United States)

    Asmat, A.; Jalal, K. A.; Ahmad, N.

    2018-02-01

    The present study uses the Aerosol Optical Depth (AOD) retrieved from Moderate Resolution Imaging Spectroradiometer (MODIS) data for the period from January 2011 until December 2015 over an urban area in Kuching, Sarawak. The results show that the minimum AOD value retrieved from MODIS is -0.06 and the maximum value is 6.0. High aerosol loading with high AOD values was observed during dry seasons, and low AOD during wet seasons. A multi-plane regression technique was used to retrieve AOD from MODIS (AODMODIS), and relative absolute error statistics are proposed for accuracy assessment in the spatial and temporal averaging approaches. The AODMODIS was then compared with AOD derived from the Aerosol Robotic Network (AERONET) Sunphotometer (AODAERONET), and the results show a high correlation coefficient (R2 = 0.93) between AODMODIS and AODAERONET. AODMODIS was used as an input parameter to the Santa Barbara Discrete Ordinate Radiative Transfer (SBDART) model to estimate urban radiative forcing at Kuching. The observed hourly averaged urban radiative forcing is -0.12 Wm-2 at the top of the atmosphere (TOA), -2.13 Wm-2 at the surface and 2.00 Wm-2 in the atmosphere. A moderate relationship is observed between the urban radiative forcing calculated using SBDART and that from AERONET: 0.75 at the surface, 0.65 at TOA and 0.56 in the atmosphere. Overall, variation in AOD tends to cause large bias in the estimated urban radiative forcing.

  7. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    Science.gov (United States)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight in the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.

  8. Nonlinear approaches for phase retrieval in the Fresnel region for hard X-ray imaging

    International Nuclear Information System (INIS)

    Davidoiu, Valentina

    2013-01-01

    The development of highly coherent X-ray sources offers new possibilities to image biological structures at different scales by exploiting the refraction of X-rays. The coherence properties of third-generation synchrotron radiation sources enable efficient implementations of phase contrast techniques. One of the first measurements of the intensity variations due to phase contrast was reported in 1995 at the European Synchrotron Radiation Facility (ESRF). Phase imaging coupled to tomography acquisition allows three-dimensional imaging with an increased sensitivity compared to absorption CT. This technique is particularly attractive for imaging samples with low-absorption constituents. Phase contrast has many applications, ranging from material science, paleontology and bone research to medicine and biology. Several methods to achieve X-ray phase contrast have been proposed during the last years. In propagation-based phase contrast, the measurements are made at different sample-to-detector distances. While the intensity data can be acquired and recorded, the phase information of the signal has to be 'retrieved' from the modulus data only. Phase retrieval is thus an ill-posed nonlinear problem, and regularization techniques including a priori knowledge are necessary to obtain stable solutions. Several phase recovery methods have been developed in recent years. These approaches generally formulate the phase retrieval problem as a linear one. Nonlinear treatments have not been much investigated. The main purpose of this work was to propose and evaluate new algorithms, in particular taking into account the nonlinearity of the direct problem. In the first part of this work, we present a Landweber-type nonlinear iterative scheme to solve the propagation-based phase retrieval problem. This approach uses the analytic expression of the Frechet derivative of the phase-intensity relationship and of its adjoint, which are presented in detail. We also study the effect of
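
    The thesis uses a nonlinear Landweber-type scheme built on the Frechet derivative of the phase-intensity map; the sketch below shows only the linear prototype that such schemes generalize, with all names and the step-size rule being standard textbook choices rather than the thesis's algorithm.

```python
import numpy as np

def landweber(A, y, n_iter=500, tau=None):
    """Landweber iteration for the linear inverse problem A x = y.

    Gradient descent on ||A x - y||^2 / 2:
        x <- x + tau * A^T (y - A x),
    convergent for 0 < tau < 2 / ||A||^2 (spectral norm).
    """
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # safe default step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)
    return x
```

In the nonlinear setting, A and A^T are replaced by the Frechet derivative of the forward map and its adjoint, evaluated at the current iterate.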

  9. Retrieval of ion distributions in RC from TWINS ENA images by CT technique

    Science.gov (United States)

    Ma, S.; Yan, W.; Xu, L.; Goldstein, J.; McComas, D. J.

    2010-12-01

    The Two Wide-angle Imaging Neutral-atom Spectrometers (TWINS) mission is the first constellation to employ imagers on two separate spacecraft to measure energetic neutral atoms (ENA) produced by charge exchange between ring current energetic ions and cold exospheric neutral atoms. By applying the 3-D volumetric pixel (voxel) computed tomography (CT) inversion method to TWINS images, parent ion populations in the ring current (RC) and auroral regions are retrieved from their ENA signals. This methodology is implemented for data obtained during the main phase of a moderate geomagnetic storm on 11 October 2008. For this storm the two TWINS satellites were located in nearly the same meridian plane at vantage points widely separated in magnetic local time, and both more than 5 RE geocentric distance from the Earth. In the retrieval process, the energetic ion fluxes to be retrieved are assumed to be isotropic with respect to pitch angle. The ENA data used in this study are differential fluxes averaged over 12 sweeps (corresponding to an interval of 16 min.) at different energy levels ranging throughout the full 1--100 keV energy range of TWINS. The ENA signals have two main components: (1) a low-latitude/ high-altitude signal from trapped RC ions and (2) a low-altitude signal from precipitating ions in the auroral/subauroral ionosphere. In the retrieved ion distributions, the main part of the RC component is located in the midnight-to-dawn sector with L from 3 to 7 or farther, while the subauroral low-altitude component is mainly at pre-midnight. It seems that the dominant energy of the RC ions for this storm is at the lowest energy level of 1-2 keV, with another important energy band centered about 44 keV. The low-altitude component is consistent with in situ observations by DMSP/SSJ4. The result of this study demonstrates that with satellite constellations such as TWINS, using all-sky ENA imagers deployed at multiple vantage points, 3-D distribution of RC ion

  10. Synergistic Instance-Level Subspace Alignment for Fine-Grained Sketch-Based Image Retrieval.

    Science.gov (United States)

    Li, Ke; Pang, Kaiyue; Song, Yi-Zhe; Hospedales, Timothy M; Xiang, Tao; Zhang, Honggang

    2017-08-25

    We study the problem of fine-grained sketch-based image retrieval. By performing instance-level (rather than category-level) retrieval, it embodies a timely and practical application, particularly with the ubiquitous availability of touchscreens. Three factors contribute to the challenging nature of the problem: (i) free-hand sketches are inherently abstract and iconic, making visual comparisons with photos difficult, (ii) sketches and photos are in two different visual domains, i.e. black and white lines vs. color pixels, and (iii) fine-grained distinctions are especially challenging when executed across domain and abstraction-level. To address these challenges, we propose to bridge the image-sketch gap both at the high-level via parts and attributes, as well as at the low-level, via introducing a new domain alignment method. More specifically, (i) we contribute a dataset with 304 photos and 912 sketches, where each sketch and image is annotated with its semantic parts and associated part-level attributes. With the help of this dataset, we investigate (ii) how strongly-supervised deformable part-based models can be learned that subsequently enable automatic detection of part-level attributes, and provide pose-aligned sketch-image comparisons. To reduce the sketch-image gap when comparing low-level features, we also (iii) propose a novel method for instance-level domain-alignment, that exploits both subspace and instance-level cues to better align the domains. Finally (iv) these are combined in a matching framework integrating aligned low-level features, mid-level geometric structure and high-level semantic attributes. Extensive experiments conducted on our new dataset demonstrate effectiveness of the proposed method.

  11. Color Texture Image Retrieval Based on Local Extrema Features and Riemannian Distance

    Directory of Open Access Journals (Sweden)

    Minh-Tan Pham

    2017-10-01

    Full Text Available A novel efficient method for content-based image retrieval (CBIR is developed in this paper using both texture and color features. Our motivation is to represent and characterize an input image by a set of local descriptors extracted from characteristic points (i.e., keypoints within the image. Then, dissimilarity measure between images is calculated based on the geometric distance between the topological feature spaces (i.e., manifolds formed by the sets of local descriptors generated from each image of the database. In this work, we propose to extract and use the local extrema pixels as our feature points. Then, the so-called local extrema-based descriptor (LED is generated for each keypoint by integrating all color, spatial as well as gradient information captured by its nearest local extrema. Hence, each image is encoded by an LED feature point cloud and Riemannian distances between these point clouds enable us to tackle CBIR. Experiments performed on several color texture databases including Vistex, STex, color Brodatz, USPtex and Outex TC-00013 using the proposed approach provide very efficient and competitive results compared to the state-of-the-art methods.
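
    The keypoint extraction step, detecting local extrema pixels, can be sketched as a strict 3x3 neighborhood comparison. The window size and strictness are assumptions made here for brevity; the paper's procedure is richer (it also aggregates color, spatial and gradient information into the LED descriptor).

```python
import numpy as np

def local_extrema(img):
    """Detect strict local maxima and minima of a 2-D image over 3x3 windows.

    Border pixels are ignored for simplicity. Returns two lists of
    (row, col) coordinates.
    """
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    is_max = np.ones_like(c, dtype=bool)
    is_min = np.ones_like(c, dtype=bool)
    h, w = img.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            n = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]  # shifted neighbor view
            is_max &= c > n
            is_min &= c < n
    maxima = [(int(i) + 1, int(j) + 1) for i, j in zip(*np.nonzero(is_max))]
    minima = [(int(i) + 1, int(j) + 1) for i, j in zip(*np.nonzero(is_min))]
    return maxima, minima
```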

  12. A unified framework for image retrieval using keyword and visual features.

    Science.gov (United States)

    Jing, Feng; Li, Mingling; Zhang, Hong-Jiang; Zhang, Bo

    2005-07-01

    In this paper, a unified image retrieval framework based on both keyword annotations and visual features is proposed. In this framework, a set of statistical models are built based on visual features of a small set of manually labeled images to represent semantic concepts and used to propagate keywords to other unlabeled images. These models are updated periodically when more images implicitly labeled by users become available through relevance feedback. In this sense, the keyword models serve the function of accumulation and memorization of knowledge learned from user-provided relevance feedback. Furthermore, two sets of effective and efficient similarity measures and relevance feedback schemes are proposed for query by keyword scenario and query by image example scenario, respectively. Keyword models are combined with visual features in these schemes. In particular, a new, entropy-based active learning strategy is introduced to improve the efficiency of relevance feedback for query by keyword. Furthermore, a new algorithm is proposed to estimate the keyword features of the search concept for query by image example. It is shown to be more appropriate than two existing relevance feedback algorithms. Experimental results demonstrate the effectiveness of the proposed framework.

  13. Retrieval of long and short lists from long term memory: a functional magnetic resonance imaging study with human subjects.

    Science.gov (United States)

    Zysset, S; Müller, K; Lehmann, C; Thöne-Otto, A I; von Cramon, D Y

    2001-11-13

    Previous studies have shown that reaction time in an item-recognition task with both short and long lists is a quadratic function of list length. This suggests that either different memory retrieval processes are implied for short and long lists or an adaptive process is involved. An event-related functional magnetic resonance imaging study with nine subjects and list lengths varying between 3 and 18 words was conducted to identify the underlying neuronal structures of retrieval from long and short lists. For the retrieval and processing of word-lists a single fronto-parietal network, including premotor, left prefrontal, left precuneal and left parietal regions, was activated. With increasing list length, no additional regions became involved in retrieving information from long-term memory, suggesting that not necessarily different, but highly adaptive retrieval processes are involved.

  14. An efficient and robust method for shape-based image retrieval

    International Nuclear Information System (INIS)

    Salih, N.D.; Besar, R.; Abas, F.S.

    2007-01-01

    Shapes can be thought of as the words of the visual language. Shape boundaries need to be simplified and estimated in a wide variety of image analysis applications. Representation and description of shapes is one of the major problems in content-based image retrieval (CBIR). This paper presents a novel method for shape representation and description named block-based shape representation (BSR), which is capable of extracting reliable information about the object outline in a concise manner. Our technique is translation, scale, and rotation invariant. It works well on different types of shapes and is fast enough for real-time use. This technique has been implemented and evaluated in order to analyze its accuracy and efficiency. Based on the experimental results, we argue that the proposed BSR is a compact and reliable shape representation method. (author)

  15. Model-based VQ for image data archival, retrieval and distribution

    Science.gov (United States)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is a Laplacian distribution with mean lambda, computed from a sample of the input image. Laplacian-distributed random numbers with mean lambda are generated with a uniform random number generator. These random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients from each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in codebook generation is the mean lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.

  16. Linear information retrieval method in X-ray grating-based phase contrast imaging and its interchangeability with tomographic reconstruction

    Science.gov (United States)

    Wu, Z.; Gao, K.; Wang, Z. L.; Shao, Q. G.; Hu, R. F.; Wei, C. X.; Zan, G. B.; Wali, F.; Luo, R. H.; Zhu, P. P.; Tian, Y. C.

    2017-06-01

    In X-ray grating-based phase contrast imaging, information retrieval is necessary for quantitative research, especially for phase tomography. However, numerous and repetitive processes have to be performed for tomographic reconstruction. In this paper, we report a novel information retrieval method, which enables retrieving phase and absorption information by means of a linear combination of two mutually conjugate images. Thanks to the distributive law of the multiplication as well as the commutative law and associative law of the addition, the information retrieval can be performed after tomographic reconstruction, thus simplifying the information retrieval procedure dramatically. The theoretical model of this method is established in both parallel beam geometry for Talbot interferometer and fan beam geometry for Talbot-Lau interferometer. Numerical experiments are also performed to confirm the feasibility and validity of the proposed method. In addition, we discuss its possibility in cone beam geometry and its advantages compared with other methods. Moreover, this method can also be employed in other differential phase contrast imaging methods, such as diffraction enhanced imaging, non-interferometric imaging, and edge illumination.
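
    A hedged sketch of the two-image linear combination, assuming the simple conjugate fringe model I± = A(1 ± s), where A is the absorption image and s the refraction signal. The paper's exact model and notation may differ; the point illustrated is that the combination is linear, so (as the abstract argues) it commutes with linear tomographic reconstruction.

```python
import numpy as np

def retrieve_two_images(i_plus, i_minus):
    """Linear information retrieval from two mutually conjugate images.

    Under the assumed model I+ = A * (1 + s) and I- = A * (1 - s):
        A = (I+ + I-) / 2
        s = (I+ - I-) / (I+ + I-)
    """
    i_plus = np.asarray(i_plus, dtype=float)
    i_minus = np.asarray(i_minus, dtype=float)
    absorption = 0.5 * (i_plus + i_minus)
    refraction = (i_plus - i_minus) / (i_plus + i_minus)
    return absorption, refraction
```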

  17. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    OpenAIRE

    Filistea Naude; Chris Rensleigh; Adeline S.A. du Toit

    2010-01-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The re...

  18. Optimization of reference library used in content-based medical image retrieval scheme

    International Nuclear Information System (INIS)

    Park, Sang Cheol; Sukthankar, Rahul; Mummert, Lily; Satyanarayanan, Mahadev; Zheng Bin

    2007-01-01

    Building an optimal image reference library is a critical step in developing interactive computer-aided detection and diagnosis (I-CAD) systems of medical images using content-based image retrieval (CBIR) schemes. In this study, the authors conducted two experiments to investigate (1) the relationship between I-CAD performance and the size of the reference library and (2) a new reference selection strategy to optimize the library and improve I-CAD performance. The authors assembled a reference library that includes 3153 regions of interest (ROI) depicting either malignant masses (1592) or CAD-cued false-positive regions (1561) and an independent testing data set including 200 masses and 200 false-positive regions. A CBIR scheme using a distance-weighted K-nearest neighbor algorithm is applied to retrieve references that are considered similar to the testing sample from the library. The area under the receiver operating characteristic curve (Az) is used as an index to evaluate the I-CAD performance. In the first experiment, the authors systematically increased the reference library size and tested I-CAD performance. The result indicates that scheme performance improves initially from Az = 0.715 to 0.874 and then plateaus when the library size reaches approximately half of its maximum capacity. In the second experiment, based on the hypothesis that a ROI should be removed if it performs poorly compared to a group of similar ROIs in a large and diverse reference library, the authors applied a new strategy to identify 'poorly effective' references. By removing 174 identified ROIs from the reference library, I-CAD performance significantly increases to Az = 0.914 (p<0.01). The study demonstrates that increasing reference library size and removing poorly effective references can significantly improve I-CAD performance.
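
    The distance-weighted K-nearest neighbor scoring can be sketched as follows. The inverse-distance weight function is an assumption made here, since the abstract does not specify the exact weighting; function names are illustrative.

```python
import numpy as np

def dw_knn_score(query, library_feats, library_labels, k=15):
    """Distance-weighted k-nearest-neighbor likelihood score.

    Retrieve the k references closest to the query and return the
    weight-normalized fraction of positive (label == 1) references,
    with weights decaying as inverse distance.
    """
    feats = np.asarray(library_feats, dtype=float)
    labels = np.asarray(library_labels)
    d = np.linalg.norm(feats - np.asarray(query, dtype=float), axis=1)
    idx = np.argsort(d)[:k]             # k nearest references
    w = 1.0 / (d[idx] + 1e-12)          # inverse-distance weights
    return float(np.sum(w * (labels[idx] == 1)) / np.sum(w))
```

Scores near 1 indicate the query resembles malignant references; near 0, false-positive references.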

  19. Use of web-based simulators and YouTube for teaching of Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Hanson, Lars G.

    Interactive web-based software for teaching of 3D vector dynamics involved in Magnetic Resonance Imaging (MRI) was developed. The software is briefly discussed along with the background, design, implementation, dissemination and educational value....

  20. A Flexible Approach for Managing Digital Images on the Semantic Web

    National Research Council Canada - National Science Library

    Halaschek-Wiener, Christian; Schain, Andrew; Golbeck, Jennifer; Grove, Michael; Parsia, Bijan; Hendler, Jim

    2006-01-01

    .... While progress has been made, through a representative use case, we provide motivation for further work in developing more domain independent techniques for both annotating and managing images on the Web...

  1. Effects of Per-Pixel Variability on Uncertainties in Bathymetric Retrievals from High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Elizabeth J. Botha

    2016-05-01

    Full Text Available Increased sophistication of high spatial resolution multispectral satellite sensors provides enhanced bathymetric mapping capability. However, the enhancements are counteracted by per-pixel variability in sunglint, atmospheric path length and directional effects. This case study highlights retrieval errors from images acquired at non-optimal geometrical combinations. The effects of variations in environmental noise on water surface reflectance and the accuracy of environmental variable retrievals were quantified. Two WorldView-2 satellite images were acquired within one minute of each other, with Image 1 placed in a near-optimal sun-sensor geometric configuration and Image 2 placed close to the specular point of the Bidirectional Reflectance Distribution Function (BRDF). Image 2 had higher total environmental noise due to increased surface glint and higher atmospheric path-scattering. Generally, depths were under-estimated from Image 2 compared to Image 1. A partial improvement in retrieval error after glint correction of Image 2 resulted in an increase in the maximum depth to which accurate depth estimations were returned. This case study indicates that critical analysis of individual images, accounting for sun elevation and azimuth, satellite sensor pointing and geometry, as well as anticipated wave height and direction, is required to ensure an image is fit for purpose for aquatic data analysis.

  2. Overview of intelligent data retrieval methods for waveforms and images in massive fusion databases

    Energy Technology Data Exchange (ETDEWEB)

    Vega, J. [JET-EFDA, Culham Science Center, OX14 3DB Abingdon (United Kingdom); Asociacion EURATOM/CIEMAT para Fusion, Avda. Complutense 22, 28040 Madrid (Spain)], E-mail: jesus.vega@ciemat.es; Murari, A. [JET-EFDA, Culham Science Center, OX14 3DB Abingdon (United Kingdom); Consorzio RFX-Associazione EURATOM ENEA per la Fusione, I-35127 Padua (Italy); Pereira, A.; Portas, A.; Ratta, G.A.; Castro, R. [JET-EFDA, Culham Science Center, OX14 3DB Abingdon (United Kingdom); Asociacion EURATOM/CIEMAT para Fusion, Avda. Complutense 22, 28040 Madrid (Spain)

    2009-06-15

The JET database contains more than 42 Tbytes of data (waveforms and images) and doubles in size about every 2 years. The ITER database is expected to be orders of magnitude larger. Therefore, data access in such huge databases can no longer be efficiently based on shot number or temporal interval. Taking into account that diagnostics generate reproducible signal patterns (structural shapes) for similar physical behaviour, high-level data access systems can be developed. In these systems, the input parameter is a pattern and the outputs are the shot numbers and the temporal locations where similar patterns appear inside the database. These pattern-oriented techniques can be used for first data screening of any type of morphological aspect of waveforms and images. The article presents a new technique to look for similar images in huge databases in a fast and efficient way. Also, previous techniques to search for similar waveforms and to retrieve time-series data or images containing any kind of pattern are reviewed.
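The pattern-oriented access described in this abstract (a pattern goes in, shot numbers and temporal locations come out) can be sketched with a plain sliding-window Euclidean distance. This is an illustrative baseline only, not the actual JET search primitive, and the shot numbers and signal lengths below are invented for the example:

```python
import numpy as np

def find_similar_patterns(pattern, waveforms, top_k=3):
    """Rank the best-matching window of each shot by sliding-window
    Euclidean distance; return (shot, start_index, distance) tuples."""
    m = len(pattern)
    hits = []
    for shot, signal in waveforms.items():
        if len(signal) < m:
            continue
        # All length-m windows as a strided (zero-copy) view.
        windows = np.lib.stride_tricks.sliding_window_view(signal, m)
        dists = np.linalg.norm(windows - pattern, axis=1)
        best = int(np.argmin(dists))
        hits.append((shot, best, float(dists[best])))
    hits.sort(key=lambda h: h[2])
    return hits[:top_k]

# Toy database: the query pattern is embedded in shot 81204 at index 50.
rng = np.random.default_rng(0)
pattern = np.sin(np.linspace(0, 2 * np.pi, 40))
db = {81203: rng.normal(size=500), 81204: rng.normal(size=500)}
db[81204][50:90] = pattern + 0.05 * rng.normal(size=40)

matches = find_similar_patterns(pattern, db)
print(matches[0][:2])  # best match: shot 81204, near index 50
```

A production system would index precomputed pattern signatures rather than scanning raw signals, but the input/output contract is the same.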

  3. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    Science.gov (United States)

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods have achieved better performance, since deep architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), which exploits a hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can directly exploit convolutional feature maps as input to preserve their spatial structure. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of the hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
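The "fast search speed" that hashing methods such as HRNH rely on comes from ranking by Hamming distance, which reduces to XOR-and-count. A minimal sketch with toy 48-bit codes (the codes and database size here are invented, not produced by HRNH):

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to a query code.
    Codes are 0/1 arrays; counting unequal bits gives the distance."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable"), dists

rng = np.random.default_rng(1)
db = rng.integers(0, 2, size=(5, 48))   # five 48-bit toy codes
query = db[3].copy()
query[:2] ^= 1                          # corrupt two bits of item 3

order, dists = hamming_rank(query, db)
print(order[0], dists[3])  # → 3 2
```

With bit-packed codes the same ranking is done with hardware popcount, which is what makes sub-linear or SIMD-accelerated retrieval over millions of images practical.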

  4. Robust image obfuscation for privacy protection in Web 2.0 applications

    Science.gov (United States)

    Poller, Andreas; Steinebach, Martin; Liu, Huajian

    2012-03-01

We present two approaches to robust image obfuscation based on permutation of image regions and channel intensity modulation. The proposed concept of robust image obfuscation is a step towards end-to-end security in Web 2.0 applications. It helps to protect the privacy of users against threats caused by internet bots and web applications that extract biometric and other features from images for data-linkage purposes. The approaches described in this paper take into account that images uploaded to Web 2.0 applications pass through several transformations, such as scaling and JPEG compression, before the receiver downloads them. In contrast to existing approaches, our focus is on usability; therefore, the primary goal is not a maximum of security but an acceptable trade-off between security and resulting image quality.

  5. Use of web-based simulators and YouTube for teaching of Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Hanson, Lars G.

Interactive web-based software for teaching of 3D vector dynamics involved in Magnetic Resonance Imaging (MRI) was developed. The software is briefly discussed along with the background, design, implementation, dissemination and educational value.

  6. Data Retrieval Algorithms for Validating the Optical Transient Detector and the Lightning Imaging Sensor

    Science.gov (United States)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    2000-01-01

A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures the field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival time measurements are available is also introduced. The algebra of the quadratic root results is examined in detail to clarify which portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary non-collinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.
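The flavor of a linear planar arrival-time solution can be illustrated as follows: differencing the squared range equations c²(tᵢ − t₀)² = ‖p − sᵢ‖² against a reference station cancels the quadratic terms, leaving a system that is linear in the strike position (x, y) and emission time t₀. The station layout and strike location below are invented, noise-free test values, not the paper's ALDF configuration:

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def locate(stations, arrivals):
    """Linear planar retrieval of strike position (x, y) and emission
    time t0 from >= 4 arrival times, by differencing squared range
    equations against station 0."""
    s0, t0a = stations[0], arrivals[0]
    A, b = [], []
    for s, t in zip(stations[1:], arrivals[1:]):
        A.append([2 * (s[0] - s0[0]),
                  2 * (s[1] - s0[1]),
                  -2 * C**2 * (t - t0a)])
        b.append(s @ s - s0 @ s0 - C**2 * (t**2 - t0a**2))
    x, y, t0 = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
    return x, y, t0

# Synthetic strike at (30 km, 40 km), emitted at t0 = 1 ms.
stations = np.array([[0.0, 0.0], [100e3, 0.0], [0.0, 100e3], [100e3, 100e3]])
src, t0 = np.array([30e3, 40e3]), 1e-3
arrivals = t0 + np.linalg.norm(stations - src, axis=1) / C
x, y, t_est = locate(stations, arrivals)
print(x, y)
```

With measurement noise the same system is solved in the least-squares sense over more stations, and bearing measurements add further rows, which is where the relative-influence analysis in the abstract comes in.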

  7. Learning binary code via PCA of angle projection for image retrieval

    Science.gov (United States)

    Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong

    2018-01-01

With the benefits of low storage costs and high query speeds, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing methods, learning a hashing function to embed high-dimensional features into Hamming space is a key step for accurate retrieval. The principal component analysis (PCA) technique is widely used in compact hashing methods: most of these methods adopt PCA projection functions to project the original data onto several dimensions of real values, and then each projected dimension is quantized into one bit by thresholding. However, the variances of the projected dimensions differ, and real-valued projection produces large quantization errors. To avoid this, in this paper we propose to use a cosine-similarity (angle) projection for each dimension; the angle projection preserves the original structure and is more compact. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
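The PCA-projection-plus-sign-thresholding baseline that this line of work improves on can be sketched in a few lines. Each projected dimension is quantized to one bit by its sign; ITQ would additionally learn a rotation of the projected data to reduce the quantization error, which is omitted here, and the random input data is purely illustrative:

```python
import numpy as np

def pca_hash(X, n_bits):
    """Learn top-n_bits PCA projections and binarize by sign.
    Returns the binary codes and the projection matrix."""
    Xc = X - X.mean(axis=0)
    # Principal directions from the covariance eigendecomposition.
    cov = Xc.T @ Xc / len(Xc)
    w, V = np.linalg.eigh(cov)
    W = V[:, np.argsort(w)[::-1][:n_bits]]   # largest-variance directions
    return (Xc @ W > 0).astype(np.uint8), W

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 32))               # 200 toy 32-d feature vectors
codes, W = pca_hash(X, n_bits=8)
print(codes.shape)  # → (200, 8)
```

The unequal variances of the projected dimensions are exactly the problem the abstract points at: low-variance dimensions carry little information yet still consume one bit each.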

  8. Stochastic Optimized Relevance Feedback Particle Swarm Optimization for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

Full Text Available One of the major challenges for CBIR is to bridge the gap between low-level features and high-level semantics according to the need of the user. To overcome this gap, relevance feedback (RF) coupled with the support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM-based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM-based RF with particle swarm optimization (PSO). The aims of this technique are to enhance the performance of SVM-based RF and to minimize user interaction with the system by reducing the number of RF iterations. PSO-SVM-RF was tested on the Corel photo gallery containing 10,908 images. The experimental results showed that the proposed PSO-SVM-RF achieved 100% accuracy within 8 feedback iterations for the top 10 retrievals and 80% accuracy within 6 iterations for the top 100 retrievals. This implies that with the PSO-SVM-RF technique a high accuracy rate is achieved within a small number of iterations.
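The PSO half of such a scheme rests on the canonical velocity/position update, which blends inertia with attraction toward the particle's personal best and the swarm's global best. The sketch below shows only that core update for one particle; the inertia and acceleration coefficients are common textbook defaults, not values taken from the paper, and applying it to tune an SVM inside the RF loop is left out:

```python
import random

def pso_step(position, velocity, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for a single particle.
    w is the inertia weight; c1/c2 scale the pulls toward the
    personal and global bests, each jittered by a random factor."""
    new_v, new_x = [], []
    for x, v, pb, gb in zip(position, velocity, personal_best, global_best):
        r1, r2 = random.random(), random.random()
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_v.append(v)
        new_x.append(x + v)
    return new_x, new_v

random.seed(0)
pos, vel = pso_step([0.0, 0.0], [0.1, -0.1], [1.0, 1.0], [2.0, 2.0])
print(pos)
```

Iterating this step over a swarm, with the SVM's retrieval accuracy as the fitness function, is the kind of search loop PSO-SVM-RF embeds in each feedback round.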

  9. A new web-based system for unsupervised classification of satellite images from the Google Maps engine

    Science.gov (United States)

    Ferrán, Ángel; Bernabé, Sergio; García-Rodríguez, Pablo; Plaza, Antonio

    2012-10-01

In this paper, we develop a new web-based system for unsupervised classification of satellite images available from the Google Maps engine. The system has been developed using the Google Maps API and incorporates functionalities such as unsupervised classification of image portions selected by the user (at the desired zoom level). For this purpose, we use a processing chain made up of the well-known ISODATA and k-means algorithms, followed by spatial post-processing based on majority voting. The system is currently hosted on a high-performance server which executes the classification algorithms and returns the obtained classification results very efficiently. These functionalities are a prerequisite for efficient image-classification techniques and for the incorporation of content-based image retrieval (CBIR). The classification results of the proposed system are experimentally validated by comparing the accuracy of the proposed chain against techniques available in the well-known Environment for Visualizing Images (ENVI) software package. The server has access to a cluster of commodity graphics processing units (GPUs); hence, in future work we plan to perform the processing in parallel by taking advantage of the cluster.
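The k-means stage of such a processing chain can be sketched with plain Lloyd iterations over pixel vectors. The deterministic seeding along the brightness axis and the toy two-cluster "image" are assumptions of this example, not details of the paper's chain:

```python
import numpy as np

def kmeans_classify(pixels, k, n_iter=20):
    """Unsupervised classification of pixel vectors with Lloyd's
    k-means. Centers are seeded evenly along the brightness (band-sum)
    axis so the example is deterministic."""
    order = np.argsort(pixels.sum(axis=1))
    seeds = order[np.linspace(0, len(pixels) - 1, k).astype(int)]
    centers = pixels[seeds].copy()
    for _ in range(n_iter):
        # Assign every pixel to its nearest spectral center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Toy "image": two tight spectral clusters in a 3-band space.
rng = np.random.default_rng(3)
img = np.vstack([rng.normal(0.2, 0.02, (100, 3)),
                 rng.normal(0.8, 0.02, (100, 3))])
labels, centers = kmeans_classify(img, k=2)
print(labels[0] != labels[100])  # the two clusters get distinct labels
```

ISODATA extends this loop with split/merge heuristics on the clusters, and the majority-voting post-processing then smooths the per-pixel label map spatially.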

  10. Texture Retrieval from VHR Optical Remote Sensed Images Using the Local Extrema Descriptor with Application to Vineyard Parcel Detection

    Directory of Open Access Journals (Sweden)

    Minh-Tan Pham

    2016-04-01

Full Text Available In this article, we develop a novel method for the detection of vineyard parcels in agricultural landscapes based on very high resolution (VHR) optical remote sensing images. Our objective is to perform texture-based image retrieval and supervised classification. To do that, the local textural and structural features inside each image are taken into account to measure its similarity to other images. In fact, VHR images usually involve a variety of local textures and structures that may satisfy only a weak stationarity hypothesis. Hence, an approach based only on characteristic points, not on all pixels of the image, is expected to be relevant. This work proposes to construct a local extrema-based descriptor (LED) using the local maximum and local minimum pixels extracted from the image. The LED descriptor is formed from the radiometric, geometric and gradient features of these local extrema. We first exploit the proposed LED descriptor in a retrieval task to evaluate its performance on texture discrimination. Then, it is embedded into a supervised classification framework to detect vineyard parcels in VHR satellite images. Experiments performed on VHR panchromatic PLEIADES image data prove the effectiveness of the proposed strategy. Compared to state-of-the-art methods, an enhancement of about 7% in retrieval rate is achieved. For the detection task, about 90% of vineyards are correctly detected.
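The first step of an LED-style pipeline, extracting the local maximum and local minimum pixels that serve as keypoints, can be sketched with a strict neighbourhood comparison. The radius and the border-padding convention below are assumptions of this example, not the paper's exact detector:

```python
import numpy as np

def local_extrema(img, radius=1):
    """Boolean masks of strict local-maximum and local-minimum pixels
    within a (2*radius+1)^2 neighbourhood. Borders are padded with
    -inf/+inf so border pixels compare only against in-image neighbours."""
    h, w = img.shape
    pad_max = np.pad(img, radius, mode="constant", constant_values=-np.inf)
    pad_min = np.pad(img, radius, mode="constant", constant_values=np.inf)
    is_max = np.ones((h, w), dtype=bool)
    is_min = np.ones((h, w), dtype=bool)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            nb_max = pad_max[radius + dy: radius + dy + h,
                             radius + dx: radius + dx + w]
            nb_min = pad_min[radius + dy: radius + dy + h,
                             radius + dx: radius + dx + w]
            is_max &= img > nb_max
            is_min &= img < nb_min
    return is_max, is_min

img = np.array([[1, 2, 1],
                [2, 9, 2],
                [1, 2, 0]], dtype=float)
mx, mn = local_extrema(img)
print(np.argwhere(mx), np.argwhere(mn))
```

The descriptor itself would then gather radiometric, geometric and gradient statistics around each such keypoint, which is where the texture discrimination power comes from.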

  11. Application of Google Maps API service for creating web map of information retrieved from CORINE land cover databases

    Directory of Open Access Journals (Sweden)

    Kilibarda Milan

    2010-01-01

Full Text Available Today, the Google Maps API, an Ajax-based standard web service, enables users to publish interactive web maps, thus opening new possibilities relative to classical analogue maps. CORINE land cover databases are recognized as fundamental reference data sets for numerous spatial analyses. The theoretical and applicable aspects of the Google Maps API cartographic service are considered through the case of creating a web map of changes in urban areas in Belgrade and its surroundings from 2000 to 2006, obtained from CORINE databases.

  12. Prototype Content-Based Image Retrieval for Skin Disease Detection Using the Edge Detection Method

    Directory of Open Access Journals (Sweden)

    Erick Fernando

    2016-05-01

Full Text Available Dermatologists examine the affected area visually, capture it with a digital camera, and ask about the patient's disease history, without comparing it against previously recorded symptoms and signs; examination and estimation of the skin disease type therefore rest on the individual case. Digital processing of image data, particularly medical images, is thus much needed, together with pre-processing. Many patients served in hospitals still have analogue image data, which requires dedicated storage space to avoid mechanical damage. To address these problems, medical images are digitized, stored in a database system, and compared for similarity against newly acquired skin images. Images can then be displayed after pre-processing, with similarity identified by Content-Based Image Retrieval (CBIR), which measures the similarity of a query image to all images in the database, so that the query cost is proportional to the number of images in the database.

  13. Rotation-robust math symbol recognition and retrieval using outer contours and image subsampling

    Science.gov (United States)

    Zhu, Siyu; Hu, Lei; Zanibbi, Richard

    2013-01-01

This paper presents, for the first time, a unified recognition and retrieval system for isolated offline printed mathematical symbols. The system is based on a nearest neighbor scheme and uses a modified Turning Function and Grid Features to calculate the distance between two symbols based on the Sum of Squared Differences. An unwrap process and an alignment process are applied to the Turning Function to deal with the horizontal and vertical shifts caused by changes of starting point and by rotation. This modified Turning Function makes our system robust against rotation of the symbol image. The system obtains a top-1 recognition rate of 96.90% and a 47.27% Area Under Curve (AUC) of the precision/recall plot on the InftyCDB-3 dataset. Experimental results show that the system with the modified Turning Function performs significantly better than the system with the original Turning Function on the rotated InftyCDB-3 dataset.

  14. An Approach for Foliar Trait Retrieval from Airborne Imaging Spectroscopy of Tropical Forests

    Directory of Open Access Journals (Sweden)

    Roberta E. Martin

    2018-01-01

Full Text Available Spatial information on forest functional composition is needed to inform management and conservation efforts, yet this information is lacking, particularly in tropical regions. Canopy foliar traits underpin the functional biodiversity of forests, and have been shown to be remotely measurable using airborne 350–2510 nm imaging spectrometers. We used newly acquired imaging spectroscopy data, constrained with concurrent light detection and ranging (LiDAR) measurements from the Carnegie Airborne Observatory (CAO) and field measurements, to test the performance of the Spectranomics approach for foliar trait retrieval. The method was previously developed in Neotropical forests, and was tested here in the humid tropical forests of Malaysian Borneo. Multiple foliar chemical traits, as well as leaf mass per area (LMA), were estimated with demonstrable precision and accuracy. The results were similar to those observed for Neotropical forests, suggesting a more general use of the Spectranomics approach for mapping canopy traits in tropical forests. Future mapping studies using this approach can advance scientific investigations and applications based on imaging spectroscopy.

  15. Application of an internet web-site of medical images in tele-radiology

    International Nuclear Information System (INIS)

    Wang Weizhong; Wang Hua; Xie Jingxia; Wang Songzhang; Li Xiangdong; Qian Min; Cao Huixia

    2000-01-01

Objective: To build an Internet web site of medical images for tele-education and tele-consultation. Methods: Medical images were collected from cases that fulfilled diagnostic standards for teaching and were pathologically proven. The images were digitized using a digital camera and a scanner. FrontPage 98, HomeSite 2.5 and text editors were used for programming. Results: The web site encompasses many useful cases and is updated every week. With a smart and friendly interface and easy-to-use navigation, the site runs reliably in a TCP/IP environment. The site's URL is http://imager.163.net. At present, the site receives about 100 visits per week. Conclusion: A well-designed and well-programmed Internet web site of medical images would be readily accepted and is going to play an important role in tele-education and tele-consultation

  16. Content-Based High-Resolution Remote Sensing Image Retrieval via Unsupervised Feature Learning and Collaborative Affinity Metric Fusion

    Directory of Open Access Journals (Sweden)

    Yansheng Li

    2016-08-01

Full Text Available With the urgent demand for automatic management of large numbers of high-resolution remote sensing images, content-based high-resolution remote sensing image retrieval (CB-HRRS-IR) has attracted much research interest. Accordingly, this paper proposes a novel high-resolution remote sensing image retrieval approach via multiple feature representation and collaborative affinity metric fusion (IRMFRCAMF). In IRMFRCAMF, we design four unsupervised convolutional neural networks with different layers to generate four types of unsupervised features, from the fine level to the coarse level. In addition to these four types of unsupervised features, we also implement four traditional feature descriptors: local binary patterns (LBP), the gray-level co-occurrence matrix (GLCM), maximal response 8 (MR8), and the scale-invariant feature transform (SIFT). In order to fully incorporate the complementary information among the multiple features of one image and the mutual information across auxiliary images in the image dataset, this paper advocates collaborative affinity metric fusion to measure the similarity between images. The performance evaluation of high-resolution remote sensing image retrieval is implemented on two public datasets, the UC Merced (UCM) dataset and the Wuhan University (WH) dataset. Extensive experiments show that our proposed IRMFRCAMF significantly outperforms state-of-the-art approaches.

  17. A new method for information retrieval in two-dimensional grating-based X-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Wang Zhi-Li; Gao Kun; Chen Jian; Ge Xin; Tian Yang-Chao; Wu Zi-Yu; Zhu Pei-Ping

    2012-01-01

Grating-based X-ray phase contrast imaging has been demonstrated to be an extremely powerful phase-sensitive imaging technique. By using two-dimensional (2D) gratings, the observable contrast is extended to two refraction directions. Recently, we have developed a novel reverse-projection (RP) method, which is capable of retrieving the object information efficiently with one-dimensional (1D) grating-based phase contrast imaging. In this contribution, we present its extension to 2D grating-based X-ray phase contrast imaging, named the two-dimensional reverse-projection (2D-RP) method, for information retrieval. The method takes into account the nonlinear contributions of the two refraction directions and allows the retrieval of the absorption image and the horizontal and vertical refraction images. The obtained information can be used for the reconstruction of the three-dimensional phase gradient field, and for improved phase map retrieval and reconstruction. Numerical experiments are carried out, and the results confirm the validity of the 2D-RP method.

  18. Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding.

    Science.gov (United States)

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2014-09-22

    We present a novel image hiding method based on phase retrieval algorithm under the framework of nonlinear double random phase encoding in fractional Fourier domain. Two phase-only masks (POMs) are efficiently determined by using the phase retrieval algorithm, in which two cascaded phase-truncated fractional Fourier transforms (FrFTs) are involved. No undesired information disclosure, post-processing of the POMs or digital inverse computation appears in our proposed method. In order to achieve the reduction in key transmission, a modified image hiding method based on the modified phase retrieval algorithm and logistic map is further proposed in this paper, in which the fractional orders and the parameters with respect to the logistic map are regarded as encryption keys. Numerical results have demonstrated the feasibility and effectiveness of the proposed algorithms.

  19. Computer-aided diagnostics of screening mammography using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Soiron, Michael; de Oliveira, Júlia E. E.; de A. Araújo, Arnaldo

    2012-03-01

Breast cancer is one of the main causes of death among women in occidental countries. In recent years, screening mammography has been established worldwide for early detection of breast cancer, and computer-aided diagnostics (CAD) is being developed to assist physicians in reading mammograms. A promising method for CAD is content-based image retrieval (CBIR). Recently, we have developed a classification scheme for suspicious tissue patterns based on the support vector machine (SVM). In this paper, we continue moving towards automatic CAD of screening mammography. The experiments are based on a total of 10,509 radiographs collected from different sources. Of these, 3,375 images are provided with one chain code annotation of cancerous regions, and 430 radiographs with more than one. In different experiments, this data is divided into 12 and 20 classes, distinguishing between four categories of tissue density, three categories of pathology, and, in the 20-class problem, two categories of lesion type. Balancing the number of images in each class leaves 233 and 45 images in each of the 12 and 20 classes, respectively. Using a two-dimensional principal component analysis, features are extracted from small patches of 128 x 128 pixels and classified by means of an SVM. Overall, the accuracy of the raw classification was 61.6% and 52.1% for the 12- and 20-class problems, respectively. The confusion matrices are assessed for detailed analysis. Furthermore, an implementation of an SVM-based CBIR system for CADx in screening mammography is presented. In conclusion, with smarter patch extraction, the CBIR approach might reach precision rates that are helpful for physicians. This, however, needs more comprehensive evaluation on clinical data.

  20. PCI bus content-addressable-memory (CAM) implementation on FPGA for pattern recognition/image retrieval in a distributed environment

    Science.gov (United States)

    Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.

    2004-11-01

Recently, surveillance and Automatic Target Recognition (ATR) applications have been increasing as the cost of the computing power needed to process the massive amount of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g. telescopes, precise optics, cameras, and image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition and computing capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers a performance advantage of an order of magnitude over a RAM-based (Random Access Memory) search architecture for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here, and other SOPC solutions and design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.

  1. BIRD: Bio-Image Referral Database. Design and implementation of a new web based and patient multimedia data focused system for effective medical diagnosis and therapy.

    Science.gov (United States)

    Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario

    2004-01-01

This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients over an Intranet network and transformed via the eXtensible Stylesheet Language (XSL) to be visualized uniformly in commercial browsers. The core server software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and well suited to crafting a dynamic Web environment.

  2. Encryption of QR code and grayscale image in interference-based scheme with high quality retrieval and silhouette problem removal

    Science.gov (United States)

    Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen

    2016-09-01

In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk-free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture and the result has a noisy appearance. Nevertheless, the robustness of the QR code against noise enables the accurate acquisition of its content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal totally eliminates the silhouette problem existing in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.

  3. Retrieval of suspended sediment concentrations using Landsat-8 OLI satellite images in the Orinoco River (Venezuela)

    Science.gov (United States)

    Yepez, Santiago; Laraque, Alain; Martinez, Jean-Michel; De Sa, Jose; Carrera, Juan Manuel; Castellanos, Bartolo; Gallay, Marjorie; Lopez, Jose L.

    2018-01-01

In this study, 81 Landsat-8 scenes acquired from 2013 to 2015 were used to estimate the suspended sediment concentration (SSC) in the Orinoco River at its main hydrological station at Ciudad Bolivar, Venezuela. This gauging station monitors an upstream area corresponding to 89% of the total catchment area, where the mean discharge is 33,000 m3·s-1. SSC spatial and temporal variabilities were analyzed in relation to the hydrological cycle and to local geomorphological characteristics of the river mainstream. Three types of atmospheric correction models were evaluated to correct the Landsat-8 images: DOS, FLAASH, and L8SR. Surface reflectance was compared with monthly water sampling to calibrate an SSC retrieval model using bootstrap resampling. A regression model based on surface reflectance at Near-Infrared wavelengths showed the best performance: R2 = 0.92 (N = 27) for the whole range of SSC (18 to 203 mg·l-1) measured at this station during the study period. The method offers a simple new approach to estimating the SSC along the lower Orinoco River and demonstrates the feasibility and reliability of using remote sensing images to map the spatiotemporal variability of sediment transport in large rivers.
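A retrieval model of this kind can be sketched as an ordinary least-squares fit of SSC against NIR surface reflectance, with the sample pairs bootstrapped to estimate the spread of the coefficients. The synthetic reflectance/SSC values below are invented for illustration and do not reproduce the paper's calibration:

```python
import numpy as np

def fit_ssc_model(nir, ssc, n_boot=1000, seed=0):
    """Fit SSC = a * NIR + b by least squares, then bootstrap the
    (NIR, SSC) pairs to estimate the standard deviation of (a, b)."""
    a, b = np.polyfit(nir, ssc, 1)
    rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(nir), len(nir))  # resample with replacement
        boots.append(np.polyfit(nir[idx], ssc[idx], 1))
    return (a, b), np.std(boots, axis=0)

# Synthetic calibration pairs: reflectance rises with sediment load.
rng = np.random.default_rng(4)
nir = rng.uniform(0.01, 0.15, 27)              # NIR surface reflectance
ssc = 1200 * nir + 10 + rng.normal(0, 5, 27)   # mg/l, with noise
(coef, intercept), spread = fit_ssc_model(nir, ssc)
print(coef, intercept)
```

The bootstrap spread is what justifies quoting a single R² over the full SSC range: if the coefficients were unstable under resampling, the per-scene SSC maps would not be trustworthy.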

  4. The library without walls: images, medical dictionaries, atlases, medical encyclopedias free on web.

    Science.gov (United States)

    Giglia, E

    2008-09-01

The aim of this article was to present the "reference room" of the Internet, a real library without walls. The reader will find medical encyclopedias, dictionaries, atlases, e-books and images, and will also learn something useful about the use and reuse of images in a text and in a web site, according to copyright law.

  5. Atmospheric retrieval analysis of the directly imaged exoplanet HR 8799b

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae-Min [University of Zürich, Institute for Theoretical Physics, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland); Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012 Bern (Switzerland); Irwin, Patrick G. J., E-mail: lee@physik.uzh.ch, E-mail: kevin.heng@csh.unibe.ch, E-mail: irwin@atm.ox.ac.uk [University of Oxford, Atmospheric, Oceanic and Planetary Physics, Clarendon Laboratory, Parks Road, Oxford OX1 3PU (United Kingdom)

    2013-12-01

    Directly imaged exoplanets are unexplored laboratories for the application of the spectral and temperature retrieval method, where the chemistry and composition of their atmospheres are inferred from inverse modeling of the available data. As a pilot study, we focus on the extrasolar gas giant HR 8799b, for which more than 50 data points are available. We upgrade our non-linear optimal estimation retrieval method to include a phenomenological model of clouds that requires the cloud optical depth and monodisperse particle size to be specified. Previous studies have focused on forward models with assumed values of the exoplanetary properties; there is no consensus on the best-fit values of the radius, mass, surface gravity, and effective temperature of HR 8799b. We show that cloud-free models produce reasonable fits to the data if the atmosphere is of super-solar metallicity and non-solar elemental abundances. Intermediate cloudy models with moderate values of the cloud optical depth and micron-sized particles provide an equally reasonable fit to the data and require a lower mean molecular weight. We report our best-fit values for the radius, mass, surface gravity, and effective temperature of HR 8799b. The mean molecular weight is about 3.8, while the carbon-to-oxygen ratio is about unity due to the prevalence of carbon monoxide. Our study emphasizes the need for robust claims about the nature of an exoplanetary atmosphere to be based on analyses involving both photometry and spectroscopy and inferred from beyond a few photometric data points, such as are typically reported for hot Jupiters.

  6. Facilitating medical information search using Google Glass connected to a content-based medical image retrieval system.

    Science.gov (United States)

    Widmer, Antoine; Schaer, Roger; Markonis, Dimitrios; Muller, Henning

    2014-01-01

Wearable computing devices are starting to change the way users interact with computers and the Internet. Among them, Google Glass includes a small screen located in front of the right eye, a camera filming in front of the user, and a small computing unit. Google Glass has the advantage of providing online services while leaving the user's hands free. These augmented glasses open up many useful applications, including in the medical domain. For example, Google Glass can easily provide video conferencing between medical doctors to discuss a live case. Using these glasses can also facilitate medical information search by giving access to a large number of annotated medical cases during a consultation, in a non-disruptive fashion for medical staff. In this paper, we developed a Google Glass application able to take a photo and send it to a medical image retrieval system along with keywords in order to retrieve similar cases. As a preliminary assessment of the usability of the application, we tested it under three conditions (images of the skin; printed CT scans and MRI images; and CT and MRI images acquired directly from an LCD screen) to explore whether using Google Glass affects the accuracy of the results returned by the medical image retrieval system. The preliminary results show that despite minor problems due to the limited stability of Google Glass, images can be sent to and processed by the medical image retrieval system and similar images are returned to the user, potentially helping in the decision-making process.

  7. High-resolution fluorescence imaging for red and far-red SIF retrieval at leaf and canopy scales

    Science.gov (United States)

    Albert, L.; Alonso, L.; Cushman, K.; Kellner, J. R.

    2017-12-01

New commercial-off-the-shelf imaging spectrometers promise the combination of high spatial and spectral resolution needed to retrieve solar induced fluorescence (SIF) at multiple wavelengths for individual plants and even individual leaves from low-altitude airborne or ground-based platforms. Data from these instruments could provide insight into the status of the photosynthetic apparatus at scales of space and time not observable from high-altitude and space-based platforms, and could support calibration and validation activities of current and forthcoming space missions to quantify SIF (OCO-2, OCO-3, FLEX, and GEOCARB). High spectral resolution enables SIF retrieval from regions of strong telluric absorption by molecular oxygen, and also within numerous solar Fraunhofer lines in atmospheric windows not obscured by oxygen or water absorptions. Here we evaluate algorithms for SIF retrieval using a commercial-off-the-shelf diffraction-grating imaging spectrometer with a spectral sampling interval of 0.05 nm and a FWHM 650 or 700 nm. These filters enable a direct measurement of SIF emission > 650 or 700 nm that serves as a benchmark against which retrievals from reflectance spectra can be evaluated. We repeated this comparison between leaf-level SIF emission spectra and retrieved SIF emission spectra for leaves treated with drought stress and an herbicide (DCMU) that inhibits electron transfer from QA to QB of PSII.

  8. MPEG-7 low level image descriptors for modeling users' web pages visual appeal opinion

    OpenAIRE

    Uribe Mayoral, Silvia; Alvarez Garcia, Federico; Menendez Garcia, Jose Manuel

    2015-01-01

The study of users' first impression of web pages is an important factor for interface designers, due to its influence on the final opinion about a site. In this regard, the analysis of web aesthetics can be considered an interesting tool for evaluating this early impression, and the use of low-level image descriptors for modeling it in an objective way represents an innovative research field. Accordingly, in this paper we present a new model for website aesthetics evaluation and ...

  9. Imaging the 3D structure of secondary osteons in human cortical bone using phase-retrieval tomography

    Energy Technology Data Exchange (ETDEWEB)

    Arhatari, B D; Peele, A G [Department of Physics, La Trobe University, Victoria 3086 (Australia); Cooper, D M L [Department of Anatomy and Cell Biology, University of Saskatchewan, Saskatoon (Canada); Thomas, C D L; Clement, J G [Melbourne Dental School, University of Melbourne, Victoria 3010 (Australia)

    2011-08-21

    By applying a phase-retrieval step before carrying out standard filtered back-projection reconstructions in tomographic imaging, we were able to resolve structures with small differences in density within a densely absorbing sample. This phase-retrieval tomography is particularly suited for the three-dimensional segmentation of secondary osteons (roughly cylindrical structures) which are superimposed upon an existing cortical bone structure through the process of turnover known as remodelling. The resulting images make possible the analysis of the secondary osteon structure and the relationship between an osteon and the surrounding tissue. Our observations have revealed many different and complex 3D structures of osteons that could not be studied using previous methods. This work was carried out using a laboratory-based x-ray source, which makes obtaining these sorts of images readily accessible.

  10. Nuclear expert web mining system: monitoring and analysis of nuclear acceptance by information retrieval and opinion extraction on the Internet

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Imakuma, Kengo, E-mail: thiagoreis@usp.b, E-mail: barroso@ipen.b, E-mail: kimakuma@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

This paper presents a research initiative that aims to collect nuclear-related information and to analyze opinionated texts by mining the hypertextual data environment and social networking web sites on the Internet. Unlike previous approaches, which employed traditional statistical techniques, a novel Web Mining approach built on the concept of Expert Systems is proposed for massive and autonomous data collection and analysis. The initial step has been accomplished, resulting in a framework design that can gradually encompass a set of evolving techniques, methods, and theories, so that this work will build a platform upon which new research can be performed more easily, simply by substituting modules or plugging in new ones. Upon completion, it is expected that this research will contribute to the understanding of the population's views on nuclear technology and its acceptance. (author)

  11. Nuclear expert web mining system: monitoring and analysis of nuclear acceptance by information retrieval and opinion extraction on the Internet

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Imakuma, Kengo

    2011-01-01

This paper presents a research initiative that aims to collect nuclear-related information and to analyze opinionated texts by mining the hypertextual data environment and social networking web sites on the Internet. Unlike previous approaches, which employed traditional statistical techniques, a novel Web Mining approach built on the concept of Expert Systems is proposed for massive and autonomous data collection and analysis. The initial step has been accomplished, resulting in a framework design that can gradually encompass a set of evolving techniques, methods, and theories, so that this work will build a platform upon which new research can be performed more easily, simply by substituting modules or plugging in new ones. Upon completion, it is expected that this research will contribute to the understanding of the population's views on nuclear technology and its acceptance. (author)

  12. Prototype Web-based continuing medical education using FlashPix images.

    Science.gov (United States)

    Landman, A; Yagi, Y; Gilbertson, J; Dawson, R; Marchevsky, A; Becich, M J

    2000-01-01

    Continuing Medical Education (CME) is a requirement among practicing physicians to promote continuous enhancement of clinical knowledge to reflect new developments in medical care. Previous research has harnessed the Web to disseminate complete pathology CME case studies including history, images, diagnoses, and discussions to the medical community. Users submit real-time diagnoses and receive instantaneous feedback, eliminating the need for hard copies of case material and case evaluation forms. This project extends the Web-based CME paradigm with the incorporation of multi-resolution FlashPix images and an intuitive, interactive user interface. The FlashPix file format combines a high-resolution version of an image with a hierarchy of several lower resolution copies, providing real-time magnification via a single image file. The Web interface was designed specifically to simulate microscopic analysis, using the latest Javascript, Java and Common Gateway Interface tools. As the project progresses to the evaluation stage, it is hoped that this active learning format will provide a practical and efficacious environment for continuing medical education with additional application potential in classroom demonstrations, proficiency testing, and telepathology. Using Microsoft Internet Explorer 4.0 and above, the working prototype Web-based CME environment is accessible at http://telepathology.upmc.edu/WebInterface/NewInterface/welcome.html.

  13. Lensless coherent imaging of proteins and supramolecular assemblies: Efficient phase retrieval by the charge flipping algorithm.

    Science.gov (United States)

    Dumas, Christian; van der Lee, Arie; Palatinus, Lukáš

    2013-05-01

Diffractive imaging using the intense and coherent beam of X-ray free-electron lasers opens new perspectives for structural studies of single nanoparticles and biomolecules. Simulations were carried out to generate 3D oversampled diffraction patterns of non-crystalline biological samples, ranging from peptides and proteins to megadalton complex assemblies, and to recover their molecular structure at nanometer to near-atomic resolutions. Using these simulated data, we show here that iterative reconstruction methods based on standard and variant forms of the charge flipping algorithm can efficiently solve the phase retrieval problem and extract a unique and reliable molecular structure. Unlike conventional algorithms, which require the estimation and use of a compact support, our approach does not require any prior information about the molecular assembly and is amenable to a wide range of biological assemblies. Importantly, the robustness of this ab initio approach is illustrated by the fact that it tolerates experimental noise and incompleteness of the intensity data at the center of the speckle pattern.
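For illustration, a minimal form of the basic charge flipping iteration (alternating a real-space sign flip of weak density with a Fourier-magnitude projection) can be sketched as follows; the threshold `delta` and the toy data are arbitrary, and the variant forms used in the paper are not reproduced:

```python
import numpy as np

def charge_flipping(magnitudes, delta, n_iter=200, seed=0):
    """Basic charge flipping: alternate between enforcing the measured
    Fourier magnitudes and flipping the sign of weak density pixels."""
    rng = np.random.default_rng(seed)
    # Start from random phases combined with the observed magnitudes.
    phases = rng.uniform(0, 2 * np.pi, magnitudes.shape)
    F = magnitudes * np.exp(1j * phases)
    for _ in range(n_iter):
        rho = np.fft.ifft2(F).real               # back to real space
        rho = np.where(rho < delta, -rho, rho)   # flip weak/negative density
        G = np.fft.fft2(rho)
        # Magnitude projection: keep computed phases, restore observed moduli.
        F = magnitudes * np.exp(1j * np.angle(G))
    return np.fft.ifft2(F).real

# Toy example: a sparse positive "density" and its diffraction magnitudes.
true = np.zeros((32, 32))
true[5, 7] = true[20, 12] = true[9, 25] = 1.0
mags = np.abs(np.fft.fft2(true))
rec = charge_flipping(mags, delta=0.1)
```

Note that any reconstruction is only defined up to an origin shift and inversion, which is why converged solutions are usually compared after alignment.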

  14. CERES cloud property retrievals from imagers on TRMM, Terra, and Aqua

    Science.gov (United States)

    Minnis, Patrick; Young, David F.; Sun-Mack, Sunny; Heck, Patrick W.; Doelling, David R.; Trepte, Qing Z.

    2004-02-01

The micro- and macrophysical properties of clouds play a crucial role in Earth's radiation budget. The NASA Clouds and the Earth's Radiant Energy System (CERES) is providing simultaneous measurements of the radiation and cloud fields on a global basis to improve the understanding and modeling of the interaction between clouds and radiation at the top of the atmosphere, at the surface, and within the atmosphere. Cloud properties derived for CERES from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra and Aqua satellites are compared to ensure consistency between the products and the reliability of the retrievals from multiple platforms at different times of day. Comparisons of cloud fraction, height, optical depth, phase, effective particle size, and ice and liquid water paths from the two satellites show excellent consistency. Initial calibration comparisons are also very favorable. Differences between the Aqua and Terra results are generally due to diurnally dependent changes in the clouds. Additional algorithm refinement is needed over the polar regions for Aqua and at night over those same areas for Terra. The results should be extremely valuable for model validation and improvement and for improving our understanding of the relationship between clouds and the radiation budget.

  15. Retrieval of Garstang's emission function from all-sky camera images

    Science.gov (United States)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František

    2015-10-01

The emission function of ground-based light sources predetermines the skyglow features to a large extent, and most mathematical models used to predict the night sky brightness require information on this function. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using the theoretical approach published recently in MNRAS, 439, 3405-3413. We made the experiments in two localities, in Slovakia and Mexico, by means of two professional digital single-lens reflex cameras operating with different lenses that limit the system's field of view to either 180° or 167°. The purpose of using two cameras was to identify variances between the two different apertures. Images are taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance relative to horizontal irradiance. Subsequently, the information on the fraction of light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can properly identify the upward emissions with adequate reliability as long as the clear-sky radiance distribution is dominated by a single large ground-based light source. Highly unstable turbidity conditions can also make the parameter F difficult or even impossible to retrieve. Measurements at low elevation angles should be avoided due to the potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.

  16. A review of content-based image retrieval systems in medical applications-clinical benefits and future directions.

    Science.gov (United States)

    Müller, Henning; Michoux, Nicolas; Bandon, David; Geissbuhler, Antoine

    2004-02-01

Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet, underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and varying characteristics. Many questions with respect to speed, semantic descriptors, or objective image interpretation remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second-largest producer of digital images, especially with videos of cardiac catheterization (approximately 1,800 exams per year, each containing almost 2,000 images). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With Digital Imaging and Communications in Medicine (DICOM), a standard for image communication has been set, and patient information can be stored with the actual image(s), although a few standardization problems remain.
In several articles, content-based access to medical images for supporting clinical decision-making has been proposed that would ease the management of clinical data and scenarios for the integration of

  17. Search and retrieval of medical images for improved diagnosis of neurodegenerative diseases

    Science.gov (United States)

    Ekin, Ahmet; Jasinschi, Radu; Turan, Erman; Engbers, Rene; van der Grond, Jeroen; van Buchem, Mark A.

    2007-01-01

In the medical world, the accuracy of diagnosis is mainly affected either by a lack of sufficient understanding of some diseases or by inter- and/or intra-observer variability of the diagnoses. The former requires understanding the progress of diseases at much earlier stages, extracting important information from ever-growing amounts of data, and finally finding correlations with certain features and complications that illuminate the disease progression. The latter (inter- and intra-observer variability) is caused by differences in the experience levels of different medical experts (inter-observer variability) or by the mental and physical tiredness of one expert (intra-observer variability). We believe that the use of large databases can help improve the current status of disease understanding and decision making. By comparing large numbers of patients, otherwise hidden relations can be revealed, resulting in better understanding; patients with similar complications can be found, and the diagnoses and treatments can be compared so that the medical expert can make a better diagnosis. To this end, this paper introduces a search and retrieval system for brain MR databases and shows that brain iron accumulation shape provides information additional to the shape-insensitive features, such as the total brain iron load, that are commonly used in the clinic. We propose to use Kendall's correlation value to automatically compare various returns to a query. We also describe a fully automated and fast brain MR image analysis system to detect degenerative iron accumulation in the brain, as is the case in Alzheimer's and Parkinson's disease. The system is composed of several novel image processing algorithms and has been extensively tested at Leiden University Medical Center on more than 600 patients so far.
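Kendall's correlation, used above to compare the returns of different queries, is straightforward to compute; a minimal tie-free implementation (not the authors' code) is:

```python
def kendall_tau(a, b):
    """Kendall rank correlation between two equally long score lists.
    Simple O(n^2) version without tie correction (assumes no ties)."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1    # pair ordered the same way in both lists
            elif s < 0:
                discordant += 1    # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# Compare two hypothetical rankings of the same retrieved cases:
system_scores = [0.9, 0.8, 0.6, 0.4, 0.2]
expert_scores = [0.85, 0.7, 0.65, 0.5, 0.1]
print(kendall_tau(system_scores, expert_scores))  # 1.0: identical ordering
```

A value of 1 means two result lists rank the retrieved cases identically, −1 means exactly reversed.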

  18. Linear iterative near-field phase retrieval (LIPR) for dual-energy x-ray imaging and material discrimination.

    Science.gov (United States)

    Li, Heyang Thomas; Kingston, Andrew M; Myers, Glenn R; Beeching, Levi; Sheppard, Adrian P

    2018-01-01

    Near-field x-ray refraction (phase) contrast is unavoidable in many lab-based micro-CT imaging systems. Quantitative analysis of x-ray refraction (a.k.a. phase retrieval) is in general an under-constrained problem. Regularizing assumptions may not hold true for interesting samples; popular single-material methods are inappropriate for heterogeneous samples, leading to undesired blurring and/or over-sharpening. In this paper, we constrain and solve the phase-retrieval problem for heterogeneous objects, using the Alvarez-Macovski model for x-ray attenuation. Under this assumption we neglect Rayleigh scattering and pair production, considering only Compton scattering and the photoelectric effect. We formulate and test the resulting method to extract the material properties of density and atomic number from single-distance, dual-energy imaging of both strongly and weakly attenuating multi-material objects with polychromatic x-ray spectra. Simulation and experimental data are used to compare our proposed method with the Paganin single-material phase-retrieval algorithm, and an innovative interpretation of the data-constrained modeling phase-retrieval technique.

  19. AN ENSEMBLE TEMPLATE MATCHING AND CONTENT-BASED IMAGE RETRIEVAL SCHEME TOWARDS EARLY STAGE DETECTION OF MELANOMA

    Directory of Open Access Journals (Sweden)

    Spiros Kostopoulos

    2016-12-01

Malignant melanoma represents the most dangerous type of skin cancer. In this study we present an ensemble classification scheme for early-stage assessment of melanomas on plain photography images, employing mutual information, cross-correlation, and clustering based on the proximity of image features. The proposed scheme performs two main operations. First, it retrieves the image samples most similar to the unknown case from an available image database of verified benign moles and malignant melanoma cases. Second, it provides an automated estimation of the nature of the unknown image sample based on the majority of the most similar images retrieved from the database. The clinical material comprised 75 melanoma and 75 benign plain photography images collected from publicly available dermatological atlases. Results showed that the ensemble scheme outperformed all other methods tested in terms of accuracy, at 94.9±1.5%, following an external cross-validation evaluation methodology. The proposed scheme may benefit patients by providing a second-opinion consultation during self-skin examination, and physicians by providing a second-opinion estimate of the nature of suspicious moles that may assist decision making, especially for ambiguous cases, safeguarding in this way against potential diagnostic misinterpretation.
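The retrieve-then-vote operation described above can be sketched generically as a nearest-neighbour majority vote; the feature vectors and labels below are hypothetical toy values, not the paper's features:

```python
import numpy as np

def classify_by_retrieval(query_feat, db_feats, db_labels, k=5):
    """Retrieve the k most similar database images (Euclidean distance
    in feature space) and label the query by majority vote."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [db_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy 2-D feature database with verified labels (hypothetical features):
db = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.25],
               [0.8, 0.9], [0.9, 0.85], [0.85, 0.8]])
labels = ['benign', 'benign', 'benign',
          'melanoma', 'melanoma', 'melanoma']
print(classify_by_retrieval(np.array([0.82, 0.88]), db, labels, k=3))
```

In the ensemble scheme, several similarity measures would each produce such a vote, with the final estimate taken across them.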

  20. Differences of Perceived Image Generated through the Web Site: Empirical Evidence Obtained in Spanish Destinations

    Science.gov (United States)

    Blazquez-Resino, Juan J.; Muro-Rodriguez, Ana I.; Perez-Jimenez, Israel R.

    2016-01-01

    In this paper, a study of the perceived destination image created by promotional Web Pages is expounded in an attempt to identify their differences as generators of destination image in the consumers' mind. Specifically, it seeks to analyse whether the web sites of different Spanish regions improve the image that consumers have of the destination, identifying their main dimensions and analysing its effect on satisfaction and intentions of the future behavior of potential visitors. To achieve these objectives and verify the hypotheses, a laboratory experiment was performed, where it was determined what changes are produced in the tourist's previous image after browsing the tourist webs of three different regions. Moreover, it analyses the differences in the effect of the perceived image on satisfaction and potential visitors' future behavioral intentions. The results obtained enable us to identify differences in the composition of the perceived image according to the destination, while confirming the significant effect of different perceived image dimensions regarding satisfaction. The results allow managers to gain a better understanding of the effectiveness of their sites from a consumer perspective as well as suggestions to follow in order to achieve greater efficiency in their communication actions in order to improve the motivation of visitors to go to the destination. PMID:27933027

  1. Differences of perceived image generated through the Web site: Empirical Evidence Obtained in Spanish Destinations

    Directory of Open Access Journals (Sweden)

    Juan Jose Blazquez-Resino

    2016-11-01

In this paper, a study of the perceived destination image created by promotional Web pages is expounded in an attempt to identify their differences as generators of destination image in the consumers' mind. Specifically, it seeks to analyse whether the web sites of different Spanish regions improve the image that consumers have of the destination, identifying their main dimensions and analysing its effect on satisfaction and intentions of the future behaviour of potential visitors. To achieve these objectives and verify the hypotheses, a laboratory experiment was performed, where it was determined what changes are produced in the tourist's previous image after browsing the tourist webs of three different regions. Moreover, it analyses the differences in the effect of the perceived image on satisfaction and potential visitors' future behavioural intentions. The results obtained enable us to identify differences in the composition of the perceived image according to the destination, while confirming the significant effect of different perceived image dimensions regarding satisfaction. The results allow managers to gain a better understanding of the effectiveness of their sites from a consumer perspective, as well as suggestions to follow in order to achieve greater efficiency in their communication actions so as to improve the motivation of visitors to go to the destination.

  2. Retrieval of the ocean wave spectrum in open and thin ice covered ocean waters from ERS Synthetic Aperture Radar images

    International Nuclear Information System (INIS)

    De Carolis, G.

    2001-01-01

This paper concerns the task of retrieving ocean wave spectra from imagery provided by space-borne SAR systems such as that on board the ERS satellite. SAR imagery of surface wave fields travelling into the open ocean and into thin sea ice covers composed of frazil and pancake ice fields is considered. The major purpose is to gain insight into how the spectral changes can be related to sea ice properties of geophysical interest, such as thickness. Starting from SAR image cross spectra computed from Single Look Complex (SLC) SAR images, the ocean wave spectrum is retrieved using an inversion procedure based on the gradient descent algorithm. The capability of this method when applied to satellite SAR sensors is investigated. Interest in exploiting the SAR image cross spectrum is twofold: first, the directional properties of the ocean wave spectra are retained; second, the external wave information needed to initialize the inversion procedure may be greatly reduced by using only information included in the SAR image cross spectrum itself. The main drawback is that the wind-wave spectrum could be partly lost and its spectral peak wave number underestimated. An ERS SAR SLC image acquired on April 10, 1993 over the Greenland Sea was selected as the test image. A pair of windows, including open sea only and sea ice cover, respectively, were selected. The inversions were carried out using different guess wave spectra taken from SAR image cross spectra. Moreover, care was taken to properly handle negative values that may occur during the inversion runs. This results in a modification of the gradient descent technique, which is required if a non-negative solution for the wave spectrum is sought. Results are discussed in view of the potential of SAR data to detect ocean wave dispersion as a means of retrieving ice thickness
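The non-negativity modification mentioned above corresponds, in its simplest form, to projecting each gradient step onto the non-negative orthant; the toy linear inversion below (not the SAR cost function) illustrates the idea:

```python
import numpy as np

def invert_nonneg(A, b, steps=5000, lr=0.01):
    """Minimize ||A x - b||^2 by gradient descent, projecting x onto the
    non-negative orthant after each step (a wave spectrum cannot be negative)."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2 * A.T @ (A @ x - b)
        x = np.maximum(x - lr * grad, 0.0)  # clamp negative components to zero
    return x

# Toy forward model: 2x2 "transfer" matrix and a known non-negative solution.
A = np.array([[1.0, 0.5], [0.2, 1.0]])
x_true = np.array([0.3, 0.7])
b = A @ x_true
x_hat = invert_nonneg(A, b)
```

The projection step is what distinguishes this from plain gradient descent: without it, intermediate iterates (and hence the retrieved spectrum) can go negative.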

  3. Detection rates in pediatric diagnostic imaging: a picture archive and communication system compared with a web-based imaging system

    International Nuclear Information System (INIS)

    McDonald, L.; Cramer, B.; Barrett, B.

    2006-01-01

This prospective study assesses whether there are differences in accuracy of interpretation of diagnostic images between users of a picture archive and communication system (PACS) diagnostic workstation and a less costly Web-based imaging system on a personal computer (PC) with a high-resolution monitor. One hundred consecutive pediatric chest or abdomen and skeletal X-rays were selected from hospital inpatient and outpatient studies over a 5-month interval. They were classified as normal (n = 32), obviously abnormal (n = 33), or having subtle abnormal findings (n = 35) by 2 senior radiologists who reached a consensus for each individual case. Subsequently, 5 raters with varying degrees of experience independently viewed and interpreted the cases as normal or abnormal. Raters viewed each image 1 month apart on a PACS and on the Web-based PC imaging system. There was no relation between accuracy of detection and the system used to evaluate the X-ray images (P = 0.92). The total percentage of incorrect interpretations on the Web-based PC imaging system was 23.2%, compared with 23.6% on the PACS (P = 0.92). For all raters combined, the overall difference in the proportion assessed incorrectly on the PACS compared with the PC system was not significant at 0.4% (95% CI, −3.5% to 4.3%). The high-resolution Web-based imaging system on a PC is an adequate alternative to a PACS clinical workstation. Accordingly, the provision of a more extensive network of workstations throughout the hospital setting could yield significant cost savings. (author)
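A normal-approximation confidence interval for the difference between two error proportions, of the kind reported above, can be computed as follows; the per-system counts below are hypothetical (5 raters × 100 cases), and the study's tighter interval presumably reflects its paired design:

```python
import math

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Difference between two independent proportions with a
    normal-approximation 95% confidence interval."""
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, (d - z * se, d + z * se)

# Hypothetical counts: 500 readings per system
d, (lo, hi) = diff_ci(0.236, 500, 0.232, 500)
print(f"difference {d:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```

An interval that straddles zero, as here, is what "not significant" means for the difference between the two systems.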

  4. Unlocking the Potential of Web Localizers as Contributors to Image Accessibility: What Do Evaluation Tools Have to Offer?

    OpenAIRE

    Rodriguez Vazquez, Silvia

    2015-01-01

Creating appropriate text alternatives to render images accessible on the web is a shared responsibility among all actors involved in the web development cycle, including web localization professionals. However, they often lack the knowledge needed to correctly transfer image accessibility across different website language versions. In this paper, we provide insight into translators' performance as regards their accessibility achievements during text alternatives adaptation from English into ...

  5. Biomedical image representation approach using visualness and spatial information in a concept feature space for interactive region-of-interest-based retrieval.

    Science.gov (United States)

    Rahman, Md Mahmudur; Antani, Sameer K; Demner-Fushman, Dina; Thoma, George R

    2015-10-01

    This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term "concept" refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature.
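The entropy-based "visualness" measure described above can be sketched as the Shannon entropy of a patch's grey-level histogram; the bin count and toy patches below are illustrative, not the paper's settings:

```python
import numpy as np

def patch_entropy(patch, bins=256):
    """Shannon entropy (bits) of the pixel-value histogram of a patch,
    used here as a stand-in for the 'visualness' weight of a concept."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0*log 0 := 0)
    return float(-(p * np.log2(p)).sum())

flat = np.full((8, 8), 128)            # uniform patch: carries no information
textured = np.arange(64).reshape(8, 8) * 4  # wide spread of pixel values
print(patch_entropy(flat), patch_entropy(textured))
```

High-entropy patches are perceptually "busier", so weighting concept features by entropy emphasizes visually distinctive regions.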

  6. Web Based Rapid Mapping of Disaster Areas using Satellite Images, Web Processing Service, Web Mapping Service, Frequency Based Change Detection Algorithm and J-iView

    Science.gov (United States)

    Bandibas, J. C.; Takarada, S.

    2013-12-01

    Timely identification of areas affected by natural disasters is very important for a successful rescue and effective emergency relief efforts. This research focuses on the development of a cost effective and efficient system of identifying areas affected by natural disasters, and the efficient distribution of the information. The developed system is composed of 3 modules which are the Web Processing Service (WPS), Web Map Service (WMS) and the user interface provided by J-iView (fig. 1). WPS is an online system that provides computation, storage and data access services. In this study, the WPS module provides online access of the software implementing the developed frequency based change detection algorithm for the identification of areas affected by natural disasters. It also sends requests to WMS servers to get the remotely sensed data to be used in the computation. WMS is a standard protocol that provides a simple HTTP interface for requesting geo-registered map images from one or more geospatial databases. In this research, the WMS component provides remote access of the satellite images which are used as inputs for land cover change detection. The user interface in this system is provided by J-iView, which is an online mapping system developed at the Geological Survey of Japan (GSJ). The 3 modules are seamlessly integrated into a single package using J-iView, which could rapidly generate a map of disaster areas that is instantaneously viewable online. The developed system was tested using ASTER images covering the areas damaged by the March 11, 2011 tsunami in northeastern Japan. The developed system efficiently generated a map showing areas devastated by the tsunami. Based on the initial results of the study, the developed system proved to be a useful tool for emergency workers to quickly identify areas affected by natural disasters.
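
The WMS requests described above follow the standard GetMap interface: geo-registered imagery is fetched over plain HTTP with a fixed set of query parameters. A minimal sketch of building such a request (the server URL, layer name, and bounding box below are hypothetical, chosen to resemble the Tohoku study area):

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width, height, fmt="image/png"):
    """Build a WMS 1.1.1 GetMap request URL for a geo-registered map image."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base + "?" + urlencode(params)

# hypothetical server and layer name, for illustration only
url = wms_getmap_url("http://example.org/wms", "aster_tohoku",
                     (140.5, 37.5, 142.0, 39.0), 512, 512)
```

A WPS module like the one described would issue such requests to pull the before/after satellite imagery it feeds into the change detection algorithm.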

  7. Quantification of signal detection performance degradation induced by phase-retrieval in propagation-based x-ray phase-contrast imaging

    Science.gov (United States)

    Chou, Cheng-Ying; Anastasio, Mark A.

    2016-04-01

    In propagation-based X-ray phase-contrast (PB XPC) imaging, the measured image contains a mixture of absorption- and phase-contrast. To obtain separate images of the projected absorption and phase (i.e., refractive) properties of a sample, phase retrieval methods can be employed. It has been suggested that phase-retrieval can always improve image quality in PB XPC imaging. However, when objective (task-based) measures of image quality are employed, this is not necessarily true and phase retrieval can be detrimental. In this work, signal detection theory is utilized to quantify the performance of a Hotelling observer (HO) for detecting a known signal in a known background. Two cases are considered. In the first case, the HO acts directly on the measured intensity data. In the second case, the HO acts on either the retrieved phase or absorption image. We demonstrate that the performance of the HO is superior when acting on the measured intensity data. The loss of task-specific information induced by phase-retrieval is quantified by computing the efficiency of the HO as the ratio of the test statistic signal-to-noise ratio (SNR) for the two cases. The effect of the system geometry on this efficiency is systematically investigated. Our findings confirm that phase-retrieval can impair signal detection performance in XPC imaging.
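
For a known signal in known (Gaussian) background, the Hotelling observer test-statistic SNR satisfies SNR² = Δs̄ᵀ K⁻¹ Δs̄, where Δs̄ is the mean signal difference and K the noise covariance; the efficiency compared in the paper is the squared ratio of two such SNRs. A toy sketch with made-up numbers (the factor-of-2 covariance inflation stands in for noise amplification by phase retrieval and is purely illustrative):

```python
import numpy as np

def hotelling_snr(delta_s, K):
    """Hotelling observer SNR: sqrt(ds^T K^{-1} ds)."""
    return float(np.sqrt(delta_s @ np.linalg.solve(K, delta_s)))

delta_s = np.array([1.0, 0.5, 0.0])     # toy mean signal difference
K = np.eye(3)                           # toy noise covariance on intensity data
snr_intensity = hotelling_snr(delta_s, K)

# hypothetical: phase retrieval doubles the noise covariance
snr_retrieved = hotelling_snr(delta_s, 2 * K)
efficiency = (snr_retrieved / snr_intensity) ** 2
```

Any efficiency below 1 quantifies task-specific information lost by operating on retrieved images rather than on the measured intensity data.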

  8. A midas plugin to enable construction of reproducible web-based image processing pipelines.

    Science.gov (United States)

    Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A; Oguz, Ipek

    2013-01-01

    Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based User Interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.

  9. A Midas Plugin to Enable Construction of Reproducible Web-based Image Processing Pipelines

    Directory of Open Access Journals (Sweden)

    Michael eGrauer

    2013-12-01

    Full Text Available Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based UI, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.

  10. THE IMAGE OF INVESTMENT AND FINANCIAL SERVICES COMPANIES IN WWW LANDSCAPE (WORLD WIDE WEB)

    Directory of Open Access Journals (Sweden)

    Iancu Ioana Ancuta

    2011-07-01

    Full Text Available In a world where the internet and its image are becoming ever more important, this study examines the importance of Investment and Financial Services Companies' web sites. Market competition creates the need for studies focused on assessing and analyzing the websites of companies active in this sector. Our study seeks to answer several questions about Romanian Investment and Financial Services Companies' web sites through four dimensions: content, layout, handling and interactivity. Which web sites are best, and from what point of view? Where should financial services companies direct their investments to differentiate themselves and their sites? In short, we want to rank the 58 Investment and Financial Services Companies' web sites based on 127 criteria. There are numerous methods for evaluating web pages. The evaluation methods are structurally similar, and the most popular are Serqual, Sitequal, Webqual/eQual, EtailQ, Ewam, e-Serqual and WebQEM (Badulescu, 2008:58). In the paper "Assessment of Romanian Banks E-Image: A Marketing Perspective" (Catana, Catana and Constantinescu, 2006:4), the authors point out that there are at least four complex variables: accessibility, functionality, performance and usability. Each of these can be decomposed into simple ones. We used the same method and examined, from the utility point of view, 58 web sites of Investment and Financial Services Companies based on 127 criteria, following a procedure developed by Institut fur ProfNet Internet Marketing, Munster (Germany). The data collection period was 1-30 September 2010. The results show that there are very large differences between corporate sites; their creators concentrate on the information required by law and on aesthetics, neglecting other aspects such as communication and online service. In the future we want to extend this study at international level, by applying the same methods of research in 5 countries from

  11. Asymmetric double-image encryption method by using iterative phase retrieval algorithm in fractional Fourier transform domain

    Science.gov (United States)

    Sui, Liansheng; Lu, Haiwei; Ning, Xiaojuan; Wang, Yinghui

    2014-02-01

    A double-image encryption scheme is proposed based on an asymmetric technique, in which the encryption and decryption processes are different and the encryption keys are not identical to the decryption ones. First, a phase-only function (POF) of each plain image is retrieved by using an iterative process and then encoded into an interim matrix. Two interim matrices are directly modulated into a complex image by using the convolution operation in the fractional Fourier transform (FrFT) domain. Second, the complex image is encrypted into the gray scale ciphertext with stationary white-noise distribution by using the FrFT. In the encryption process, three random phase functions are used as encryption keys to retrieve the POFs of plain images. Simultaneously, two decryption keys are generated in the encryption process, which make the optical implementation of the decryption process convenient and efficient. The proposed encryption scheme has high robustness to various attacks, such as brute-force attack, known plaintext attack, cipher-only attack, and specific attack. Numerical simulations demonstrate the validity and security of the proposed method.
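
The core of the scheme above is retrieving a phase-only function (POF) whose transform magnitude matches a target. The paper works in the fractional Fourier domain; the sketch below substitutes an ordinary FFT for the FrFT purely to keep it self-contained, so it illustrates the Gerchberg-Saxton-style iteration pattern, not the paper's exact algorithm:

```python
import numpy as np

def retrieve_pof(target_amp, iters=50, seed=0):
    """Iteratively find a phase-only function whose (here, ordinary) Fourier
    magnitude approximates target_amp, alternating two constraints:
    target magnitude in the transform domain, unit magnitude in the input."""
    rng = np.random.default_rng(seed)
    pof = np.exp(1j * 2 * np.pi * rng.random(target_amp.shape))
    for _ in range(iters):
        F = np.fft.fft2(pof)
        F = target_amp * np.exp(1j * np.angle(F))   # impose target magnitude
        g = np.fft.ifft2(F)
        pof = np.exp(1j * np.angle(g))              # impose phase-only constraint
    return pof

# target magnitude derived from a random "plain image" stand-in
target = np.abs(np.fft.fft2(np.random.default_rng(1).random((16, 16))))
pof = retrieve_pof(target)
```

In the actual scheme, two such POFs (one per plain image) are combined and further encrypted, with the random phase masks serving as keys.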

  12. Information Retrieval Models

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Göker, Ayse; Davies, John

    2009-01-01

    Many applications that handle information on the internet would be completely inadequate without the support of information retrieval technology. How would we find information on the world wide web if there were no web search engines? How would we manage our email without spam filtering? Much of the

  13. A review of images of nurses and smoking on the World Wide Web.

    Science.gov (United States)

    Sarna, Linda; Bialous, Stella Aguinaga

    2012-01-01

    With the advent of the World Wide Web, historic images previously having limited distributions are now widely available. As tobacco use has evolved, so have images of nurses related to smoking. Using a systematic search, the purpose of this article is to describe types of images of nurses and smoking available on the World Wide Web. Approximately 10,000 images of nurses and smoking published over the past century were identified through search engines and digital archives. Seven major themes were identified: nurses smoking, cigarette advertisements, helping patients smoke, "naughty" nurse, teaching women to smoke, smoking in and outside of health care facilities, and antitobacco images. The use of nursing images to market cigarettes was known but the extent of the use of these images has not been reported previously. Digital archives can be used to explore the past, provide a perspective for understanding the present, and suggest directions for the future in confronting negative images of nursing. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. CellBase, a comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources.

    Science.gov (United States)

    Bleda, Marta; Tarraga, Joaquin; de Maria, Alejandro; Salavert, Francisco; Garcia-Alonso, Luz; Celma, Matilde; Martin, Ainoha; Dopazo, Joaquin; Medina, Ignacio

    2012-07-01

    During the past years, the advances in high-throughput technologies have produced an unprecedented growth in the number and size of repositories and databases storing relevant biological data. Today, there is more biological information than ever but, unfortunately, the current status of many of these repositories is far from being optimal. Some of the most common problems are that the information is spread out in many small databases; frequently there are different standards among repositories and some databases are no longer supported or contain overly specific and unconnected information. In addition, data size is increasingly becoming an obstacle when accessing or storing biological data. All these issues make it very difficult to extract and integrate information from different sources, to analyze experiments or to access and query this information in a programmatic way. CellBase provides a solution to the growing necessity of integration by easing the access to biological data. CellBase implements a set of RESTful web services that query a centralized database containing the most relevant biological data sources. The database is hosted in our servers and is regularly updated. CellBase documentation can be found at http://docs.bioinfo.cipf.es/projects/cellbase.

  15. Time-series MODIS image-based retrieval and distribution analysis of total suspended matter concentrations in Lake Taihu (China).

    Science.gov (United States)

    Zhang, Yuchao; Lin, Shan; Liu, Jianping; Qian, Xin; Ge, Yi

    2010-09-01

    Although there has been considerable effort to use remotely sensed images to provide synoptic maps of total suspended matter (TSM), there are limited studies on universal TSM retrieval models. In this paper, we have developed a TSM retrieval model for Lake Taihu using TSM concentrations measured in situ and a time series of quasi-synchronous MODIS 250 m images from 2005. After simple geometric and atmospheric correction, we found a significant relationship (R = 0.8736, N = 166) between in situ measured TSM concentrations and MODIS band normalization difference of band 3 and band 1. From this, we retrieved TSM concentrations in eight regions of Lake Taihu in 2007 and analyzed the characteristic distribution and variation of TSM. Synoptic maps of model-estimated TSM of 2007 showed clear geographical and seasonal variations. TSM in Central Lake and Southern Lakeshore were consistently higher than in other regions, while TSM in East Taihu was generally the lowest among the regions throughout the year. Furthermore, a wide range of TSM concentrations appeared from winter to summer. TSM in winter could be several times that in summer.
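
The predictor in the model above is a band normalization difference, (B3 - B1)/(B3 + B1), computed per pixel from MODIS reflectances. A minimal sketch — the linear mapping and its coefficients below are made up for illustration, since the paper's fitted model parameters are not reproduced here:

```python
import numpy as np

def band_norm_diff(b3, b1):
    """MODIS band normalization difference used as the TSM predictor."""
    return (b3 - b1) / (b3 + b1)

# toy reflectance values for three pixels
b1 = np.array([0.10, 0.12, 0.08])
b3 = np.array([0.14, 0.13, 0.15])
nd = band_norm_diff(b3, b1)

# illustrative linear retrieval model with made-up coefficients (mg/L)
A, B = 100.0, 40.0
tsm = A * nd + B
```

Applying such a per-pixel mapping to a full scene yields the synoptic TSM maps from which the seasonal and regional patterns were read off.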

  16. A Study on Retrieval Algorithm of Black Water Aggregation in Taihu Lake Based on HJ-1 Satellite Images

    International Nuclear Information System (INIS)

    Lei, Zou; Bing, Zhang; Junsheng, Li; Qian, Shen; Fangfang, Zhang; Ganlin, Wang

    2014-01-01

    The phenomenon of black water aggregation (BWA) occurs in inland water when massive algal bodies aggregate, die, and react with the toxic sludge in certain climate conditions to deprive the water of oxygen. This process results in the deterioration of water quality and damage to the ecosystem. Because charge coupled device (CCD) camera data from the Chinese HJ environmental satellite shows high potential in monitoring BWA, we acquired four HJ-CCD images of Taihu Lake captured during 2009 to 2011 to study this phenomenon. The first study site was selected near the shore of Taihu Lake. We pre-processed the HJ-CCD images and analyzed the digital number (DN) gray values in the research area and in typical BWA areas. The results show that the DN values of visible bands in BWA areas are obviously lower than those in the research areas. Moreover, we developed an empirical retrieval algorithm for BWA based on the DN mean values and variances of research areas. Finally, we tested the accuracy of this empirical algorithm. The retrieval accuracies were 89.9%, 58.1%, 73.4%, and 85.5%, respectively, which demonstrates the efficiency of the empirical algorithm in retrieving the approximate distributions of BWA
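
The key observation above is that BWA pixels show visibly lower DN values than the surrounding water, so an empirical rule can flag pixels falling well below the reference-area statistics. A sketch of that idea — the threshold rule and the factor k below are illustrative assumptions, not the paper's exact empirical formula:

```python
import numpy as np

def flag_bwa(dn, ref_mean, ref_std, k=1.5):
    """Flag pixels whose visible-band DN falls well below the reference
    water statistics (illustrative rule, not the paper's exact thresholds)."""
    return dn < ref_mean - k * ref_std

# toy 3x3 DN tile: left column darker, as over a BWA patch
dn = np.array([[60.0, 95.0, 100.0],
               [55.0, 98.0, 102.0],
               [58.0, 97.0, 99.0]])
mask = flag_bwa(dn, ref_mean=100.0, ref_std=10.0)
```

Calibrating k (or replacing the rule with the paper's fitted form) against ground observations is what produces the reported per-scene accuracies.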

  17. Combined multi-plane phase retrieval and super-resolution optical fluctuation imaging for 4D cell microscopy

    Science.gov (United States)

    Descloux, A.; Grußmayer, K. S.; Bostan, E.; Lukes, T.; Bouwens, A.; Sharipov, A.; Geissbuehler, S.; Mahul-Mellier, A.-L.; Lashuel, H. A.; Leutenegger, M.; Lasser, T.

    2018-03-01

    Super-resolution fluorescence microscopy provides unprecedented insight into cellular and subcellular structures. However, going `beyond the diffraction barrier' comes at a price, since most far-field super-resolution imaging techniques trade temporal for spatial super-resolution. We propose the combination of a novel label-free white light quantitative phase imaging with fluorescence to provide high-speed imaging and spatial super-resolution. The non-iterative phase retrieval relies on the acquisition of single images at each z-location and thus enables straightforward 3D phase imaging using a classical microscope. We realized multi-plane imaging using a customized prism for the simultaneous acquisition of eight planes. This allowed us to not only image live cells in 3D at up to 200 Hz, but also to integrate fluorescence super-resolution optical fluctuation imaging within the same optical instrument. The 4D microscope platform unifies the sensitivity and high temporal resolution of phase imaging with the specificity and high spatial resolution of fluorescence microscopy.

  18. Hierarchical multiple binary image encryption based on a chaos and phase retrieval algorithm in the Fresnel domain

    International Nuclear Information System (INIS)

    Wang, Zhipeng; Hou, Chenxia; Lv, Xiaodong; Wang, Hongjuan; Gong, Qiong; Qin, Yi

    2016-01-01

    Based on the chaos and phase retrieval algorithm, a hierarchical multiple binary image encryption is proposed. In the encryption process, each plaintext is encrypted into a diffraction intensity pattern by two chaos-generated random phase masks (RPMs). Thereafter, the captured diffraction intensity patterns are partially selected by different binary masks and then combined together to form a single intensity pattern. The combined intensity pattern is saved as ciphertext. For decryption, an iterative phase retrieval algorithm is performed, in which a support constraint in the output plane and a median filtering operation are utilized to achieve a rapid convergence rate without a stagnation problem. The proposed scheme has a simple optical setup and large encryption capacity. In particular, it is well suited for constructing a hierarchical security system. The security and robustness of the proposal are also investigated. (letter)

  19. Robust information encryption diffractive-imaging-based scheme with special phase retrieval algorithm for a customized data container

    Science.gov (United States)

    Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong; Zhou, Nanrun

    2018-06-01

    The diffractive-imaging-based encryption (DIBE) scheme has aroused wide interest due to its compact architecture and low experimental requirements. Nevertheless, the primary information can hardly be recovered exactly in real applications when considering the speckle noise and potential occlusion imposed on the ciphertext. To deal with this issue, a customized data container (CDC) is introduced into DIBE and a new phase retrieval algorithm (PRA) for plaintext retrieval is proposed. The PRA, designed according to the peculiarity of the CDC, combines two key techniques from previous approaches, i.e., input-support-constraint and median-filtering. The proposed scheme can guarantee reconstruction of the primary information despite heavy noise or occlusion, and its effectiveness and feasibility have been demonstrated with simulation results.

  20. Alaskan Auroral All-Sky Images on the World Wide Web

    Science.gov (United States)

    Stenbaek-Nielsen, H. C.

    1997-01-01

    In response to a 1995 NASA SPDS announcement of support for preservation and distribution of important data sets online, the Geophysical Institute, University of Alaska Fairbanks, Alaska, proposed to provide World Wide Web access to the Poker Flat Auroral All-sky Camera images in real time. The Poker auroral all-sky camera is located in the Davis Science Operation Center at Poker Flat Rocket Range about 30 miles north-east of Fairbanks, Alaska, and is connected, through a microwave link, with the Geophysical Institute where we maintain the data base linked to the Web. To protect the low light-level all-sky TV camera from damage due to excessive light, we only operate during the winter season when the moon is down. The camera and data acquisition is now fully computer controlled. Digital images are transmitted each minute to the Web linked data base where the data are available in a number of different presentations: (1) Individual JPEG compressed images (1 minute resolution); (2) Time lapse MPEG movie of the stored images; and (3) A meridional plot of the entire night activity.

  1. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

    On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications for giving relevant results to users. Different kinds of web recommendations are made available to users every day, including image, video, audio, query suggestion and web page recommendations. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...

  2. A Web Service for File-Level Access to Disk Images

    Directory of Open Access Journals (Sweden)

    Sunitha Misra

    2014-07-01

    Full Text Available Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches in creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries including The Sleuth Kit and libewf along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.
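
The underlying trick that makes file-level access possible without mounting is that a forensic library such as The Sleuth Kit resolves a file to byte extents (offset, length) inside the image, which can then be read directly. A stripped-down, self-contained sketch of that extraction step using an in-memory stand-in for the disk image (the offsets here are fabricated; in the real tool they come from the filesystem metadata):

```python
import io

def read_extent(image, offset, length):
    """Read one file's bytes directly from a disk image by byte extent,
    without mounting the image as a filesystem."""
    image.seek(offset)
    return image.read(length)

# a stand-in 'disk image': 512 bytes of padding, file content, more padding
disk = io.BytesIO(b"\x00" * 512 + b"hello from the image" + b"\x00" * 512)
data = read_extent(disk, 512, 20)
```

A web front end like the one described then streams just these extents to the browser, so users never download or mount the full image.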

  3. Web tools for large-scale 3D biological images and atlases

    Directory of Open Access Journals (Sweden)

    Husz Zsolt L

    2012-06-01

    Full Text Available Abstract Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data delivering compressed tiled images enabling users to browse through very-large volume data in the context of a standard web-browser. The system provides an interactive visualisation for grey-level and colour 3D images including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume.
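
Tiled delivery works because the client only ever requests the small fixed-size tiles covering the current viewport, so response time is independent of total volume size. The basic tile arithmetic is simple; a sketch (the 256-pixel tile size is a common convention, assumed here rather than taken from the paper):

```python
def tile_grid(width, height, tile=256):
    """Number of tile columns and rows covering an image at full resolution."""
    return (-(-width // tile), -(-height // tile))   # ceiling division

def tile_for_pixel(x, y, tile=256):
    """Column/row index of the tile containing pixel (x, y)."""
    return (x // tile, y // tile)

cols, rows = tile_grid(10000, 6000)
```

The server pre-computes or lazily renders each tile of a requested 2D section, so even a 135GB volume costs the client only a handful of tile fetches per view.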

  4. Towards a semantic PACS: Using Semantic Web technology to represent imaging data.

    Science.gov (United States)

    Van Soest, Johan; Lustberg, Tim; Grittner, Detlef; Marshall, M Scott; Persoon, Lucas; Nijsten, Bas; Feltens, Peter; Dekker, Andre

    2014-01-01

    The DICOM standard is ubiquitous within medicine. However, improved DICOM semantics would significantly enhance search operations. Furthermore, databases of current PACS systems are not flexible enough for the demands within image analysis research. In this paper, we investigated if we can use Semantic Web technology, to store and represent metadata of DICOM image files, as well as linking additional computational results to image metadata. Therefore, we developed a proof of concept containing two applications: one to store commonly used DICOM metadata in an RDF repository, and one to calculate imaging biomarkers based on DICOM images, and store the biomarker values in an RDF repository. This enabled us to search for all patients with a gross tumor volume calculated to be larger than 50 cc. We have shown that we can successfully store the DICOM metadata in an RDF repository and are refining our proof of concept with regards to volume naming, value representation, and the applications themselves.
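
A search like "all patients with a gross tumor volume larger than 50 cc" becomes a declarative graph query once metadata and biomarker values live in RDF. The sketch below shows the shape of such a SPARQL query (the `ex:` predicate names are hypothetical, not the paper's vocabulary) alongside an equivalent filter over an in-memory stand-in for the repository:

```python
# Hypothetical SPARQL for the GTV > 50 cc search described above
SPARQL = """
SELECT ?patient WHERE {
  ?patient ex:hasBiomarker ?b .
  ?b ex:name "GrossTumorVolume" .
  ?b ex:valueCC ?v .
  FILTER (?v > 50)
}
"""

# in-memory stand-in for the RDF repository: (patient, biomarker, value in cc)
triples = [
    ("patient1", "GrossTumorVolume", 72.4),
    ("patient2", "GrossTumorVolume", 31.0),
    ("patient3", "GrossTumorVolume", 55.9),
]
matches = [p for p, name, v in triples
           if name == "GrossTumorVolume" and v > 50]
```

The point of the semantic layer is exactly this: computed biomarkers and DICOM metadata become jointly queryable, which a conventional PACS database does not support.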

  5. Prototype Web-based continuing medical education using FlashPix images.

    OpenAIRE

    Landman, A.; Yagi, Y.; Gilbertson, J.; Dawson, R.; Marchevsky, A.; Becich, M. J.

    2000-01-01

    Continuing Medical Education (CME) is a requirement among practicing physicians to promote continuous enhancement of clinical knowledge to reflect new developments in medical care. Previous research has harnessed the Web to disseminate complete pathology CME case studies including history, images, diagnoses, and discussions to the medical community. Users submit real-time diagnoses and receive instantaneous feedback, eliminating the need for hard copies of case material and case evaluation fo...

  6. A Novel Relevance Feedback Approach Based on Similarity Measure Modification in an X-Ray Image Retrieval System Based on Fuzzy Representation Using Fuzzy Attributed Relational Graph

    Directory of Open Access Journals (Sweden)

    Hossien Pourghassem

    2011-04-01

    Full Text Available Relevance feedback approaches are used to improve the performance of content-based image retrieval systems. In this paper, a novel relevance feedback approach based on similarity measure modification in an X-ray image retrieval system based on fuzzy representation using fuzzy attributed relational graph (FARG) is presented. In this approach, the optimum weight of each feature in the feature vector is calculated using the similarity rate between the query image and the relevant and irrelevant images in user feedback. The calculated weight is used to tune the fuzzy graph matching algorithm as a modifier parameter in the similarity measure. The standard deviation of the retrieved image features is applied to calculate the optimum weight. The proposed image retrieval system uses a FARG for representation of images, a fuzzy graph matching algorithm as the similarity measure and a semantic classifier based on a merging scheme for determination of the search space in the image database. To evaluate the relevance feedback approach in the proposed system, a standard X-ray image database consisting of 10000 images in 57 classes is used. The improvement of the evaluation parameters shows the proficiency and efficiency of the proposed system.
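
The abstract says feature weights are derived from the standard deviation of the retrieved (relevant) image features. A common realization of that idea — shown here as an illustrative sketch, not the paper's exact formula — is to weight each feature inversely to its spread over the images the user marked relevant, so consistent features dominate the similarity measure:

```python
import numpy as np

def feature_weights(relevant_feats):
    """Weight each feature inversely to its standard deviation over the
    relevant images: features that agree across relevant results matter more."""
    std = relevant_feats.std(axis=0)
    w = 1.0 / (std + 1e-6)          # small epsilon guards against zero spread
    return w / w.sum()              # normalize to sum to 1

# rows = relevant images, columns = features; feature 0 is nearly constant
feats = np.array([[0.50, 0.1],
                  [0.51, 0.9],
                  [0.49, 0.4]])
w = feature_weights(feats)
```

These weights then act as the modifier parameters tuning the fuzzy graph matching similarity on the next retrieval round.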

  7. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
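
The pipeline above is a rerank: a text engine supplies candidates with text scores, then visual scores extracted from each page's images adjust the ordering. A minimal sketch of the blending step (the linear combination and the weight alpha are illustrative assumptions; the paper's reranker is learned, not a fixed blend):

```python
def rerank(candidates, alpha=0.7):
    """Rerank text-retrieved candidates by blending each document's text
    score with a visual score computed from its images."""
    scored = [(alpha * t + (1 - alpha) * v, doc) for doc, t, v in candidates]
    return [doc for _, doc in sorted(scored, reverse=True)]

# (doc, text_score, visual_score): doc "b" has mediocre text but strong images
candidates = [("a", 0.90, 0.10), ("b", 0.80, 0.95), ("c", 0.40, 0.90)]
order = rerank(candidates)
```

Because only the top candidate set is rescored, the system keeps the efficiency of the text engine while letting image content break ties and correct text-only misrankings.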

  8. Promoting Your Web Site.

    Science.gov (United States)

    Raeder, Aggi

    1997-01-01

    Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)

  9. Information retrieval based on single-pixel optical imaging with quick-response code

    Science.gov (United States)

    Xiao, Yin; Chen, Wen

    2018-04-01

    Quick-response (QR) code technology is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code, and the QR code is then treated as the input image in the input plane of a ghost imaging setup. After the measurements, the traditional correlation algorithm of ghost imaging is used to reconstruct a low-quality image (in QR code form). With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which is in effect a post-processing step. Taking advantage of the high error-correction capability of QR codes, the original information can be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming post-processing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size is, the more measurements are needed. For our method, however, images of different sizes can be converted into QR codes of the same small size by using a QR generator. Hence, for larger images, the time required to recover the original information with high quality is dramatically reduced. Our method also makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and recover each separately.
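The "traditional correlation algorithm" step can be illustrated with a toy simulation; the speckle statistics, measurement count, and array size below are arbitrary choices, and the QR conversion and Gerchberg-Saxton refinement stages are omitted:

```python
import numpy as np

def ghost_reconstruct(obj, n_meas=5000, seed=0):
    """Correlation ghost imaging, sketched for a small binary object
    (standing in for the QR code placed in the input plane).  Each
    measurement projects a random speckle pattern and records a single
    bucket intensity; the image is the pattern/bucket covariance."""
    rng = np.random.default_rng(seed)
    h, w = obj.shape
    patterns = rng.random((n_meas, h, w))
    # Bucket detector: total transmitted intensity per pattern.
    buckets = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))
    # <I * S> - <I><S>, the usual second-order correlation estimate.
    g = (buckets[:, None, None] * patterns).mean(0) \
        - buckets.mean() * patterns.mean(0)
    return g
```

The reconstruction is noisy at low measurement counts, which is exactly where the QR code's built-in error correction lets the original message survive.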

  10. User Driven Image Stacking for ODI Data and Beyond via a Highly Customizable Web Interface

    Science.gov (United States)

    Hayashi, S.; Gopu, A.; Young, M. D.; Kotulla, R.

    2015-09-01

    While some astronomical archives have begun serving standard calibrated data products, the process of producing stacked images remains a challenge left to the end-user. The benefits of astronomical image stacking are well established, and dither patterns are recommended for almost all observing targets. Some archives automatically produce stacks of limited scientific usefulness without any fine-grained user or operator configurability. In this paper, we present PPA Stack, a web based stacking framework within the ODI - Portal, Pipeline, and Archive system. PPA Stack offers a web user interface with built-in heuristics (based on pointing, filter, and other metadata information) to pre-sort images into a set of likely stacks while still allowing the user or operator complete control over the images and parameters for each of the stacks they wish to produce. The user interface, designed using AngularJS, provides multiple views of the input dataset and parameters, all of which are synchronized in real time. A backend consisting of a Python application optimized for ODI data, wrapped around the SWarp software, handles the execution of stacking workflow jobs on Indiana University's Big Red II supercomputer, and the subsequent ingestion of the combined images back into the PPA archive. PPA Stack is designed to enable seamless integration of other stacking applications in the future, so users can select the most appropriate option for their science.

  11. INFLUENCE OF THE VIEWING GEOMETRY WITHIN HYPERSPECTRAL IMAGES RETRIEVED FROM UAV SNAPSHOT CAMERAS

    OpenAIRE

    Aasen, Helge

    2016-01-01

    Hyperspectral data has great potential for vegetation parameter retrieval. However, due to angular effects resulting from different sun-surface-sensor geometries, objects might appear differently depending on the position of an object within the field of view of a sensor. Recently, lightweight snapshot cameras have been introduced, which capture hyperspectral information in two spatial and one spectral dimension and can be mounted on unmanned aerial vehicles. This study investigates th...

  12. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    Science.gov (United States)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method to collect data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images for solving real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in GRASS Open Source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are accessible transparently and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed by any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University.
The course uses the textbook "Introductory

  13. The second-order differential phase contrast and its retrieval for imaging with x-ray Talbot interferometry

    International Nuclear Information System (INIS)

    Yang Yi; Tang Xiangyang

    2012-01-01

    Purpose: The x-ray differential phase contrast imaging implemented with the Talbot interferometry has recently been reported to be capable of providing tomographic images corresponding to attenuation-contrast, phase-contrast, and dark-field contrast, simultaneously, from a single set of projection data. The authors believe that, along with small-angle x-ray scattering, the second-order phase derivative Φ″_s(x) plays a role in the generation of dark-field contrast. In this paper, the authors derive the analytic formulae to characterize the contribution made by the second-order phase derivative to the dark-field contrast (namely, second-order differential phase contrast) and validate them via computer simulation study. By proposing a practical retrieval method, the authors investigate the potential of second-order differential phase contrast imaging for extensive applications. Methods: The theoretical derivation starts at assuming that the refractive index decrement of an object can be decomposed into δ = δ_s + δ_f, where δ_f corresponds to the object's fine structures and manifests itself in the dark-field contrast via small-angle scattering. Based on the paraxial Fresnel-Kirchhoff theory, the analytic formulae to characterize the contribution made by δ_s, which corresponds to the object's smooth structures, to the dark-field contrast are derived. Through computer simulation with specially designed numerical phantoms, an x-ray differential phase contrast imaging system implemented with the Talbot interferometry is utilized to evaluate and validate the derived formulae. The same imaging system is also utilized to evaluate and verify the capability of the proposed method to retrieve the second-order differential phase contrast for imaging, as well as its robustness over the dimension of detector cell and the number of steps in grating shifting. Results: Both analytic formulae and computer simulations show that, in addition to small-angle scattering, the

  14. The retrieval of two-dimensional distribution of the earth's surface aerodynamic roughness using SAR image and TM thermal infrared image

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Renhua; WANG; Jinfeng; ZHU; Caiying; SUN; Xiaomin

    2004-01-01

    After analyzing the requirement for two-dimensional distributions of the aerodynamic surface roughness in research on the interaction between the land surface and the atmosphere, this paper presents a new way to calculate the aerodynamic roughness using the earth's surface geometric roughness retrieved from SAR (Synthetic Aperture Radar) and TM thermal infrared image data. On the one hand, the SPM (Small Perturbation Model) was used as a theoretical SAR backscattering model to relate the SAR backscattering coefficient to the earth's surface geometric roughness and its dielectric constant; the dielectric constant was retrieved from the physical model linking soil thermal inertia and soil surface moisture, using simultaneous TM thermal infrared image data and ground microclimate data. On the basis of matching the SAR image with the TM image, the non-volume-scattering surface geometric information was obtained from the SPM model at the TM image pixel scale, and each ground pixel's equivalent geometric roughness (height RMS, Root Mean Square) was derived from this geometric information by transformation of typical topographic factors. The vegetation (wheat, tree) height retrieved from a spectral model was also converted into its equivalent geometric roughness. A complete two-dimensional distribution map of the equivalent geometric roughness over the experimental area was produced by data mosaic techniques. On the other hand, according to atmospheric eddy current theory, the aerodynamic surface roughness was iterated out with an atmospheric stability correction method using wind and temperature profile data measured at several typical fields, such as bare soil and vegetation fields.
After having analyzed the effect of surface equivalent geometric roughness together with dynamic and thermodynamic factors on the aerodynamic surface roughness within the working area, this paper first establishes a scale

  15. Gender differences in autobiographical memory for everyday events: retrieval elicited by SenseCam images versus verbal cues.

    Science.gov (United States)

    St Jacques, Peggy L; Conway, Martin A; Cabeza, Roberto

    2011-10-01

    Gender differences are frequently observed in autobiographical memory (AM). However, few studies have investigated the neural basis of potential gender differences in AM. In the present functional MRI (fMRI) study we investigated gender differences in AMs elicited using dynamic visual images vs verbal cues. We used a novel technology called a SenseCam, a wearable device that automatically takes thousands of photographs. SenseCam differs considerably from other prospective methods of generating retrieval cues because it does not disrupt the ongoing experience. This allowed us to control for potential gender differences in emotional processing and elaborative rehearsal, while manipulating how the AMs were elicited. We predicted that males would retrieve more richly experienced AMs elicited by the SenseCam images vs the verbal cues, whereas females would show equal sensitivity to both cues. The behavioural results indicated that there were no gender differences in subjective ratings of reliving, importance, vividness, emotion, and uniqueness, suggesting that gender differences in brain activity were not due to differences in these measures of phenomenological experience. Consistent with our predictions, the fMRI results revealed that males showed a greater difference in functional activity associated with the rich experience of SenseCam vs verbal cues, than did females.

  16. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    Science.gov (United States)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2010-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object, such as an anatomical feature. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the anatomical feature; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
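The modulate-compose-demodulate idea can be sketched with sinusoidal carriers, one per structured light pattern. The carrier frequencies, lock-in demodulation, and one-value-per-row patterns are simplifying assumptions for illustration, not the patented method's actual waveforms (which only need to be mutually uncorrelated):

```python
import numpy as np

def compose(patterns, freqs, n_cols=256):
    """Modulate each structured-light pattern (here one value per projector
    row) onto its own sinusoidal carrier running along the columns, then
    sum everything into a single composite image."""
    x = np.arange(n_cols)
    carriers = [np.cos(2 * np.pi * f * x / n_cols) for f in freqs]
    return sum(p[:, None] * c[None, :] for p, c in zip(patterns, carriers))

def demodulate(composite, freq):
    """Lock-in demodulation: multiply by the matching carrier and average
    across columns; the other (orthogonal) carriers integrate to ~zero.
    The factor 2 compensates for the mean of cos^2 being 1/2."""
    n_cols = composite.shape[1]
    x = np.arange(n_cols)
    c = np.cos(2 * np.pi * freq * x / n_cols)
    return 2.0 * (composite * c[None, :]).mean(axis=1)
```

Because each pattern is recovered from one captured image, depth can in principle be obtained from a single snapshot rather than a sequence of projections.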

  17. Data ontology and an information system realization for web-based management of image measurements

    Directory of Open Access Journals (Sweden)

    Dimiter Prodanov

    2011-11-01

    Full Text Available Image acquisition, processing and quantification of objects (morphometry) require the integration of data inputs and outputs originating from heterogeneous sources. Managing the data exchange along this workflow in a systematic manner poses several challenges, notably the description of the heterogeneous metadata and the interoperability between the software used. The use of integrated software solutions for morphometry and management of imaging data, in combination with ontologies, can reduce metadata loss and greatly facilitate subsequent data analysis. This paper presents an integrated information system, called LabIS. The system has the objectives to automate (i) the process of storage, annotation and querying of image measurements and (ii) to provide means for data sharing with 3rd party applications consuming measurement data using open standard communication protocols. LabIS implements a 3-tier architecture with a relational database back-end and an application-logic middle tier realizing a web-based user interface for reporting and annotation and a web service communication layer. The image processing and morphometry functionality is backed by interoperability with ImageJ, a public domain image processing program, via integrated clients. Instrumental for the latter was the construction of a data ontology representing the common measurement data model. LabIS supports user profiling and can store arbitrary types of measurements, regions of interest, calibrations and ImageJ settings. Integration of the stored measurements is facilitated by atlas mapping and ontology-based markup. The system can be used as an experimental workflow management tool allowing for description and reporting of the performed experiments. LabIS can also be used as a measurement repository that can be transparently accessed by computational environments, such as Matlab. Finally, the system can be used as a data sharing tool.

  18. Coupled Retrieval of Liquid Water Cloud and Above-Cloud Aerosol Properties Using the Airborne Multiangle SpectroPolarimetric Imager (AirMSPI)

    Science.gov (United States)

    Xu, Feng; van Harten, Gerard; Diner, David J.; Davis, Anthony B.; Seidel, Felix C.; Rheingans, Brian; Tosca, Mika; Alexandrov, Mikhail D.; Cairns, Brian; Ferrare, Richard A.; Burton, Sharon P.; Fenn, Marta A.; Hostetler, Chris A.; Wood, Robert; Redemann, Jens

    2018-03-01

    An optimization algorithm is developed to retrieve liquid water cloud properties including cloud optical depth (COD), droplet size distribution and cloud top height (CTH), and above-cloud aerosol properties including aerosol optical depth (AOD), single-scattering albedo, and microphysical properties from sweep-mode observations by the Jet Propulsion Laboratory's Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) instrument. The retrieval is composed of three major steps: (1) initial estimate of the mean droplet size distribution across the entire image of 80-100 km along track by 10-25 km across track from polarimetric cloudbow observations, (2) coupled retrieval of image-scale cloud and above-cloud aerosol properties by fitting the polarimetric data at all observation angles, and (3) iterative retrieval of 1-D radiative transfer-based COD and droplet size distribution at pixel scale (25 m) by establishing relationships between COD and droplet size and fitting the total radiance measurements. Our retrieval is tested using 134 AirMSPI data sets acquired during the National Aeronautics and Space Administration (NASA) field campaign ObseRvations of Aerosols above CLouds and their intEractionS. The retrieved above-cloud AOD and CTH are compared to coincident HSRL-2 (NASA Langley Research Center) data, and COD and droplet size distribution parameters (effective radius reff and effective variance veff) are compared to coincident Research Scanning Polarimeter (RSP; NASA Goddard Institute for Space Studies) data. Mean absolute differences between AirMSPI and HSRL-2 retrievals of above-cloud AOD at 532 nm and CTH are 0.03 and mean absolute differences between RSP and AirMSPI retrievals of COD, reff, and veff in the cloudbow area are 2.33, 0.69 μm, and 0.020, respectively. Neglect of smoke aerosols above cloud leads to an underestimate of image-averaged COD by 15%.

  19. High throughput web inspection system using time-stretch real-time imaging

    Science.gov (United States)

    Kim, Chanju

    Photonic time-stretch is a novel technology that enables the capture of fast, rare and non-repetitive events. It operates in real time, with the ability to record over long periods while maintaining fine temporal resolution. The powerful properties of photonic time-stretch have already been employed in various fields of application such as analog-to-digital conversion, spectroscopy, laser scanning and microscopy. Further expanding its scope, we fully exploit the time-stretch technology to demonstrate a high-throughput web inspection system. Web inspection, namely surface inspection, is a nondestructive evaluation method which is crucial for semiconductor wafer and thin-film production. We report a dark-field web inspection system with a line scan rate of 90.9 MHz, up to 1000 times faster than conventional inspection instruments. The manufacturing of high-quality semiconductor wafers and thin films may directly benefit from this technology, as it can locate defects with an area of less than 10 μm x 10 μm while allowing a maximum web flow speed of 1.8 km/s. The thesis provides an overview of our web inspection technique, followed by a description of the photonic time-stretch technique, which is the keystone of our system. A detailed explanation of each component is included to provide a quantitative understanding of the system. Finally, imaging results from a hard-disk sample and flexible films are presented along with a performance analysis of the system. This project was the first application of time-stretch to industrial inspection, and was conducted with financial support from and close involvement by Hitachi, Ltd.

  20. Introduction to information retrieval

    CERN Document Server

    Manning, Christopher D; Schütze, Hinrich

    2008-01-01

    Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced un

  1. Mining big data sets of plankton images: a zero-shot learning approach to retrieve labels without training data

    Science.gov (United States)

    Orenstein, E. C.; Morgado, P. M.; Peacock, E.; Sosik, H. M.; Jaffe, J. S.

    2016-02-01

    Technological advances in instrumentation and computing have allowed oceanographers to develop imaging systems capable of collecting extremely large data sets. With the advent of in situ plankton imaging systems, scientists must now commonly deal with "big data" sets containing tens of millions of samples spanning hundreds of classes, making manual classification untenable. Automated annotation methods are now considered to be the bottleneck between collection and interpretation. Typically, such classifiers learn to approximate a function that predicts a predefined set of classes for which a considerable amount of labeled training data is available. The requirement that the training data span all the classes of concern is problematic for plankton imaging systems since they sample such diverse, rapidly changing populations. These data sets may contain relatively rare, sparsely distributed, taxa that will not have associated training data; a classifier trained on a limited set of classes will miss these samples. The computer vision community, leveraging advances in Convolutional Neural Networks (CNNs), has recently attempted to tackle such problems using "zero-shot" object categorization methods. Under a zero-shot framework, a classifier is trained to map samples onto a set of attributes rather than a class label. These attributes can include visual and non-visual information such as what an organism is made out of, where it is distributed globally, or how it reproduces. A second stage classifier is then used to extrapolate a class. In this work, we demonstrate a zero-shot classifier, implemented with a CNN, to retrieve out-of-training-set labels from images. This method is applied to data from two continuously imaging, moored instruments: the Scripps Plankton Camera System (SPCS) and the Imaging FlowCytobot (IFCB). Results from simulated deployment scenarios indicate zero-shot classifiers could be successful at recovering samples of rare taxa in image sets. This
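The two-stage zero-shot decision described above can be sketched as follows. The attribute names, class signatures, and nearest-signature rule are invented for illustration, and the first-stage CNN that predicts the attributes is assumed:

```python
import numpy as np

def zero_shot_label(pred_attrs, class_signatures):
    """Second stage of a zero-shot pipeline: given attribute probabilities
    predicted by the CNN for one image, return the class (possibly absent
    from the training set) whose known attribute signature is nearest."""
    names = list(class_signatures)
    sigs = np.array([class_signatures[n] for n in names], dtype=float)
    dists = np.linalg.norm(sigs - np.asarray(pred_attrs, dtype=float), axis=1)
    return names[int(dists.argmin())]
```

Because new taxa only require a new attribute signature, not new labeled images, rare classes can be added to the search space after the CNN is trained.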

  2. Image storage, cataloguing and retrieval using a personal computer database software application

    International Nuclear Information System (INIS)

    Lewis, G.; Howman-Giles, R.

    1999-01-01

    Full text: Interesting images and cases are collected and collated by most nuclear medicine practitioners throughout the world. Changing imaging technology has altered the way in which images may be presented and are reported, with less reliance on 'hard copy' for both reporting and archiving purposes. Digital image generation and storage are rapidly replacing film in both radiological and nuclear medicine practice. An interesting-case filing system based on a personal computer database is described and demonstrated. The digital image storage format allows instant access to both case information (e.g. history and examination, scan report or teaching point) and the relevant images. The database design allows rapid selection of cases and images appropriate to a particular diagnosis, scan type, age or other search criteria. Correlative X-ray, CT, MRI and ultrasound images can also be stored and accessed. The application is in use at The New Children's Hospital as an aid to postgraduate medical education, with new cases being regularly added to the database.

  3. Using the World Wide Web as a Teaching Tool: Analyzing Images of Aging and the Visual Needs of an Aging Society.

    Science.gov (United States)

    Jakobi, Patricia

    1999-01-01

    Analysis of Web site images of aging to identify positive and negative representations can help teach students about social perceptions of older adults. Another learning experience involves consideration of the needs of older adults in Web site design. (SK)

  4. Rotation Invariant Color Retrieval

    OpenAIRE

    Swapna Borde; Udhav Bhosle

    2013-01-01

    A new technique for image retrieval using color features extracted from images based on the Log Histogram is proposed. The proposed technique is compared with the Global Color Histogram and the Histogram of Corners. It has been observed that the number of histogram bins used for retrieval comparison by the proposed technique (Log Histogram) is smaller than for the Global Color Histogram and the Histogram of Corners. The experimental results on a database of 792 images with 11 classes indicate that the proposed method (L...
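The rotation invariance of histogram-based color retrieval follows from discarding spatial layout. The sketch below uses a plain global histogram with histogram-intersection similarity; the bin count and the intersection measure are illustrative choices, and the abstract's Log Histogram binning is not reproduced:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Global histogram over quantised pixel values, L1-normalised.
    Any spatial rotation of the image leaves this descriptor unchanged,
    which is the invariance histogram-based retrieval builds on."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def retrieve(query, database, bins=8):
    """Return the index of the database image with the highest
    histogram-intersection similarity to the query."""
    q = color_histogram(query, bins)
    sims = [np.minimum(q, color_histogram(d, bins)).sum() for d in database]
    return int(np.argmax(sims))
```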

  5. MilxXplore: a web-based system to explore large imaging datasets.

    Science.gov (United States)

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    As large-scale medical imaging studies become more common, there is an increasing reliance on automated software to extract quantitative information from the images. As cohort sizes keep increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. MilxXplore is an open source visualization platform which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Compared to existing software solutions that often provide an overview of the results at the subject level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time and comparison of the results against the rest of the population. MilxXplore is fast and flexible, allows remote quality checks of processed imaging data, facilitates data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important for sharing and publishing the results of imaging analysis.

  6. Retrieving atmospheric dust opacity on Mars by imaging spectroscopy at large angles

    Science.gov (United States)

    Douté, S.; Ceamanos, X.; Appéré, T.

    2013-09-01

    We propose a new method to retrieve the optical depth of Martian aerosols (AOD) from OMEGA and CRISM hyperspectral imagery at a reference wavelength of 1 μm. Our method works even if the underlying surface is completely mineral, corresponding to a low contrast between the surface and atmospheric dust, while being observed at a fixed geometry. Minimizing the effect of the surface reflectance properties on the AOD retrieval is the second principal asset of our method. The method is based on a parametrization of the radiative coupling between particles and gas, which, together with local altimetry, acquisition geometry, and the meteorological situation, determines the absorption band depth of gaseous CO2. Because the last three factors can be predicted to some extent, we can define a new parameter β that expresses specifically the strength of the gas-aerosol coupling while depending directly on the AOD. Combining estimates of β with top-of-atmosphere radiance values extracted from the observed spectra within the CO2 gas band at 2 μm, we evaluate the AOD and the surface reflectance by radiative transfer inversion. In practice, β can be estimated for a large variety of mineral or icy surfaces, with the exception of CO2 ice when its 2 μm solid band is not sufficiently saturated. Validation of the proposed method shows that it is reliable if two conditions are fulfilled: (i) the observation conditions provide large incidence and/or emergence angles; (ii) the aerosols are vertically well mixed in the atmosphere. Experiments conducted on OMEGA nadir-looking observations as well as CRISM multi-angular acquisitions, with incidence angles higher than 65° in the first case and 33° in the second, produce very satisfactory results. Finally, in a companion paper the method is applied to monitoring atmospheric dust spring activity at high southern latitudes on Mars using OMEGA.

  7. A web-based procedure for liver segmentation in CT images

    Science.gov (United States)

    Yuan, Rong; Luo, Ming; Wang, Luyao; Xie, Qingguo

    2015-03-01

    Liver segmentation in CT images has been acknowledged as a basic and indispensable part of systems for computer-aided liver surgery, for operation design and risk evaluation. In this paper, we introduce and implement a web-based procedure for liver segmentation to help radiologists and surgeons obtain an accurate result efficiently and conveniently. Several clinical datasets are used to evaluate its accessibility and accuracy. The procedure appears to be a promising approach for extracting liver volumetry for livers of various shapes. Moreover, users can access the segmentation wherever the Internet is available, without any specific machine.

  8. ALDF Data Retrieval Algorithms for Validating the Optical Transient Detector (OTD) and the Lightning Imaging Sensor (LIS)

    Science.gov (United States)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    1997-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures the field strength, magnetic bearing, and arrival time of lightning radio emissions, and solutions for the plane (i.e., no Earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated data sets, and the relative influence of bearing and arrival-time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS). We also introduce a quadratic planar solution that is useful when only three arrival-time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated data sets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 degrees.
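The linear planar retrieval can be illustrated for the arrival-time-only case: differencing the range equation (x-xi)^2 + (y-yi)^2 = c^2(t-ti)^2 against a reference station cancels the quadratic terms and leaves a system linear in (x, y, t). The station layout below is invented, and the omission of bearing and field-strength measurements is a simplification of the paper's full solution:

```python
import numpy as np

C = 3.0e8  # propagation speed (speed of light), m/s

def locate_planar(stations, times):
    """Linearised planar source retrieval from arrival times alone.
    `stations` is a list of (x, y) sensor positions in metres, `times`
    the corresponding arrival times in seconds.  With >= 4 stations the
    differenced system is solved by least squares for (x, y, t_source)."""
    s = np.asarray(stations, float)
    t = np.asarray(times, float)
    x0, y0 = s[0]
    t0 = t[0]
    # One row per non-reference station:
    # 2(xi-x0)x + 2(yi-y0)y - 2c^2(ti-t0)t = (xi^2+yi^2-x0^2-y0^2) - c^2(ti^2-t0^2)
    A = np.column_stack([2 * (s[1:, 0] - x0),
                         2 * (s[1:, 1] - y0),
                         -2 * C**2 * (t[1:] - t0)])
    b = (s[1:, 0]**2 + s[1:, 1]**2 - x0**2 - y0**2
         - C**2 * (t[1:]**2 - t0**2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # (x, y, source occurrence time)
```

With noise-free synthetic times the least-squares solution recovers the simulated strike location, which mirrors the computer-simulated validation described in the abstract.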

  9. Optimized and secure technique for multiplexing QR code images of single characters: application to noiseless messages retrieval

    International Nuclear Information System (INIS)

    Trejos, Sorayda; Barrera, John Fredy; Torroba, Roberto

    2015-01-01

    We present for the first time an optical encrypting–decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, the letters used to compose eventual messages are individually converted into QR codes, and each QR code is then divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images; this represents the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither the cross-talk nor the noise problems of other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the processing involved. Recovered QR codes can be successfully scanned thanks to their noise tolerance. Finally, scanning the recovered QR codes in the appropriate sequence yields a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack can be multiplied by a digital diffuser so as to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser. As this is a digital operation, no noise is added. Therefore, the technique is threefold robust, involving multiplexing, encryption, and the need for a sequence to retrieve the outcome. (paper)

  10. Optimized and secure technique for multiplexing QR code images of single characters: application to noiseless messages retrieval

    Science.gov (United States)

    Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto

    2015-08-01

    We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, the letters used to compose eventual messages are individually converted into QR codes, and each QR code is then divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images; this represents the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither the cross-talk nor the noise problems of other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the processing involved. Recovered QR codes can be successfully scanned thanks to their noise tolerance. Finally, scanning the recovered QR codes in the appropriate sequence yields a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack can be multiplied by a digital diffuser so as to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser. As this is a digital operation, no noise is added. Therefore, the technique is threefold robust, involving multiplexing, encryption, and the need for a sequence to retrieve the outcome.

  11. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
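
    The Normalized Discounted Cumulative Gain index used to evaluate retrieval quality above can be computed as sketched below. This is a generic NDCG implementation, not the authors' code; the input is a list of graded relevance scores in the order the system returned them:

```python
import math

def dcg(gains):
    # Discounted cumulative gain: each gain is divided by log2(rank + 1),
    # so relevant items near the top of the ranking count more.
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(gains):
    """NDCG of one ranked result list: DCG normalized by the DCG of the
    ideal (descending-gain) ordering, so a perfect ranking scores 1.0."""
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0
```

    A ranking that already places the most relevant scans first achieves NDCG = 1.0; pushing relevant scans down the list lowers the score toward 0.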

  12. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, with the least-squares estimation method employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. Color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is built on these features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The results show that the proposed technique is comparable to other existing techniques.
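
    The Canberra distance used to compare feature vectors can be sketched as follows. The helper names are our own, and skipping zero-denominator terms is one common convention rather than necessarily the paper's:

```python
import numpy as np

def canberra(a, b):
    # Canberra distance: sum_i |a_i - b_i| / (|a_i| + |b_i|).
    # Terms where both components are zero are skipped to avoid 0/0.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    denom = np.abs(a) + np.abs(b)
    mask = denom > 0
    return float(np.sum(np.abs(a - b)[mask] / denom[mask]))

def rank_by_canberra(query_vec, db_vecs):
    """Order database indices by increasing Canberra distance to the query,
    i.e. most similar feature vector first."""
    dists = [canberra(query_vec, v) for v in db_vecs]
    return sorted(range(len(db_vecs)), key=lambda i: dists[i])
```

    Because each term is normalized by the component magnitudes, the Canberra distance is sensitive to small differences near zero, which is one reason it is popular for comparing histogram-like feature vectors.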

  13. Estimation of cloud optical thickness by processing SEVIRI images and implementing a semi analytical cloud property retrieval algorithm

    Science.gov (United States)

    Pandey, P.; De Ridder, K.; van Lipzig, N.

    2009-04-01

    Clouds play a very important role in the Earth's climate system, as they form an intermediate layer between the Sun and the Earth. Satellite remote sensing systems are the only means of providing information about clouds on large scales. The geostationary satellite Meteosat Second Generation (MSG) carries an imaging radiometer, the Spinning Enhanced Visible and Infrared Imager (SEVIRI). SEVIRI is a 12-channel imager, with 11 channels observing the Earth's full disk at a temporal resolution of 15 min and a spatial resolution of 3 km at nadir, plus a high resolution visible (HRV) channel. The visible channels (0.6 µm and 0.81 µm) and the near infrared channel (1.6 µm) of SEVIRI are used to retrieve the cloud optical thickness (COT). The study domain is over Europe, covering the region between 35°N - 70°N and 10°W - 30°E. SEVIRI level 1.5 images over this domain are acquired from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) archive. The processing of this imagery involves a number of steps before estimating the COT. The pre-processing steps are as follows. First, the digital count number is acquired from the imagery. Image geo-coding is performed in order to relate the pixel positions to the corresponding longitude and latitude. The solar zenith angle is determined as a function of latitude and time. The radiometric conversion is done using the offset and slope values of each band. The radiance values obtained are then used to calculate the reflectance for channels in the visible spectrum, using the information on the solar zenith angle. An attempt is made to estimate the COT from the observed radiances. A semi-analytical algorithm [Kokhanovsky et al., 2003] is implemented for the estimation of cloud optical thickness from the visible spectrum of light intensity reflected from clouds. The asymptotical solution of the radiative transfer equation, for clouds with large optical thickness, is the basis of
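
    The radiometric pre-processing described above (linear count-to-radiance conversion, then reflectance with a solar-zenith correction) follows a standard pattern that can be sketched as below. The function names are our own and the calibration coefficients in the test are illustrative, not actual SEVIRI values:

```python
import math

def radiance_from_counts(count, slope, offset):
    # Linear radiometric conversion, as in SEVIRI level 1.5 processing:
    # radiance L = offset + slope * digital_count  (per-band coefficients).
    return offset + slope * count

def reflectance(radiance, band_solar_irradiance, sun_zenith_deg,
                sun_earth_dist_au=1.0):
    # Bidirectional reflectance factor for a solar channel:
    #   r = pi * L * d^2 / (E0 * cos(theta_s))
    # where E0 is the band solar irradiance, d the Sun-Earth distance in AU,
    # and theta_s the solar zenith angle.
    mu0 = math.cos(math.radians(sun_zenith_deg))
    return math.pi * radiance * sun_earth_dist_au**2 / (band_solar_irradiance * mu0)
```

    The reflectances produced this way for the visible channels are what feed the semi-analytical COT estimation step.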

  14. Development of a 3D WebGIS System for Retrieving and Visualizing CityGML Data Based on their Geometric and Semantic Characteristics by Using Free and Open Source Technology

    Science.gov (United States)

    Pispidikis, I.; Dimopoulou, E.

    2016-10-01

    CityGML is considered an optimal standard for representing 3D city models. However, international experience has shown that visualization of such models on the web is quite difficult to implement, due to the large size of the data and the complexity of CityGML. As a result, in the context of this paper, a 3D WebGIS application is developed in order to successfully retrieve and visualize CityGML data in accordance with their respective geometric and semantic characteristics. Furthermore, the available web technologies and the architecture of WebGIS systems are investigated, as documented by international experience, in order to be utilized in the most appropriate way for the purposes of this paper. Specifically, a PostgreSQL/PostGIS database is used, in compliance with the 3DCityDB schema. At the Server tier, Apache HTTP Server and GeoServer are utilized, and PHP is used as the server-side programming language. At the Client tier, which implements the interface of the application, the following technologies are used: JQuery, AJAX, JavaScript, HTML5, WebGL and Ol3-Cesium. Finally, it is worth mentioning that the application's primary objectives are a user-friendly interface and a fully open source development.

  15. Distributing Images and Information over the Web - a Case Study of the Pont Manuscript Maps

    Directory of Open Access Journals (Sweden)

    Christopher Fleet

    2000-07-01

    Full Text Available Technological changes in recent years have encouraged many libraries to deliver digital images of their materials over the Internet. For larger items, including maps, high resolution images of research quality can now be obtained from steadily cheaper equipment, computers can process larger images more comfortably, and newer compression formats have allowed these images to be viewed interactively online. The exponential growth of the Internet has also created a large body of users who wish to access these images from their homes or desktops. Within this context, this paper describes the scanning of the Pont manuscript maps of Scotland, the technology used to make these maps available over the Internet, and the initial stages of the construction of a website of associated information. This case study is not presented as a prescriptive example of best practice, but rather as an illustration of some of the main practical issues involved, using these to present arguments for and against current map digitisation and Web delivery work.

  16. Content-based retrieval of brain tumor in contrast-enhanced MRI images using tumor margin information and learned distance metric.

    Science.gov (United States)

    Yang, Wei; Feng, Qianjin; Yu, Mei; Lu, Zhentai; Gao, Yang; Xu, Yikai; Chen, Wufan

    2012-11-01

    A content-based image retrieval (CBIR) method for T1-weighted contrast-enhanced MRI (CE-MRI) images of brain tumors is presented as a diagnosis aid. The method is thoroughly evaluated on a large image dataset. Using the tumor region as a query, the authors' CBIR system attempts to retrieve tumors of the same pathological category. Aside from commonly used features such as intensity, texture, and shape features, the authors use a margin information descriptor (MID), which is capable of describing the characteristics of the tissue surrounding a tumor, for representing image contents. In addition, the authors designed a distance metric learning algorithm called Maximum mean average Precision Projection (MPP) to maximize the smoothly approximated mean average precision (mAP) and thus optimize retrieval performance. The effectiveness of the MID and MPP algorithms was evaluated using a brain CE-MRI dataset consisting of 3108 2D scans acquired from 235 patients with three categories of brain tumors (meningioma, glioma, and pituitary tumor). By combining MID with other features, the mAP of retrieval increased by more than 6% with the learned distance metrics. The distance metric learned by MPP significantly outperformed the two other existing distance metric learning methods in terms of mAP. The CBIR system using the proposed strategies achieved a mAP of 87.3% and a precision of 89.3% when the top 10 images were returned by the system. Compared with the scale-invariant feature transform, the MID, which uses the intensity profile as its descriptor, achieves better retrieval performance. Incorporating tumor margin information represented by MID with the distance metric learned by the MPP algorithm can substantially improve retrieval performance for brain tumors in CE-MRI.
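
    The mAP and precision-at-10 figures quoted above follow standard retrieval-evaluation formulas, which can be sketched as follows (a generic implementation with our own names, not the authors' code):

```python
def average_precision(relevant, ranked_ids):
    """AP for one query: the mean of precision@k taken at each rank k
    where a relevant item appears; mAP is this averaged over queries."""
    hits, precisions = 0, []
    for k, item in enumerate(ranked_ids, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / max(len(relevant), 1)

def precision_at_k(relevant, ranked_ids, k=10):
    """Fraction of the top-k returned items that are relevant."""
    top = ranked_ids[:k]
    return sum(1 for item in top if item in relevant) / k
```

    For example, with relevant items {a, b} and ranking [a, x, b, y], the relevant items appear at ranks 1 and 3, giving AP = (1/1 + 2/3) / 2 = 5/6.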

  17. Efficient random access high resolution region-of-interest (ROI) image retrieval using backward coding of wavelet trees (BCWT)

    Science.gov (United States)

    Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja

    2008-03-01

    Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block-based codecs because of the inherent spatial independency of the code blocks. This independency implies that the decoding order of the blocks is unimportant as long as the position of each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients. Thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images, since only a small fraction of the codestream needs to be transmitted and analyzed.

  18. Comparing image quality of print-on-demand books and photobooks from web-based vendors

    Science.gov (United States)

    Phillips, Jonathan; Bajorski, Peter; Burns, Peter; Fredericks, Erin; Rosen, Mitchell

    2010-01-01

    Because of the emergence of e-commerce and developments in print engines designed for economical output of very short runs, there are increased business opportunities and consumer options for print-on-demand books and photobooks. The current state of these printing modes allows for direct uploading of book files via the web, printing on nonoffset printers, and distribution by standard parcel or mail delivery services. The goal of this research is to assess the image quality of print-on-demand books and photobooks produced by various Web-based vendors and to identify correlations between psychophysical results and objective metrics. Six vendors were identified for one-off (single-copy) print-on-demand books, and seven vendors were identified for photobooks. Participants rank-ordered the overall quality of a subset of individual pages from each book, where the pages included text, photographs, or a combination of the two. Observers also reported overall quality ratings and price estimates for the bound books. Objective metrics of color gamut, color accuracy, accuracy of International Color Consortium profile usage, eye-weighted root mean square L*, and cascaded modulation transfer acutance were obtained and compared to the observer responses. We introduce some new methods for normalizing data as well as for strengthening the statistical significance of the results. Our approach includes the use of latent mixed-effect models. We found statistically significant correlation between overall image quality and some of the spatial metrics, but correlations between psychophysical results and other objective metrics were weak or nonexistent. Strong correlation was found between psychophysical results of overall quality assessment and estimated price associated with quality. The photobook set of vendors reached higher image-quality ratings than the set of print-on-demand vendors. However, the photobook set had higher image-quality variability.

  19. The Keck Cosmic Web Imager (KCWI): A Powerful New Integral Field Spectrograph for the Keck Observatory

    Science.gov (United States)

    Morrissey, Patrick; KCWI Team

    2013-01-01

    The Keck Cosmic Web Imager (KCWI) is a new facility instrument being developed for the W. M. Keck Observatory and funded for construction by the Telescope System Instrumentation Program (TSIP) of the National Science Foundation (NSF). KCWI is a bench-mounted spectrograph for the Keck II right Nasmyth focal station, providing integral field spectroscopy over a seeing-limited field up to 20"x33" in extent. Selectable Volume Phase Holographic (VPH) gratings provide high efficiency and spectral resolution in the range of 1000 to 20000. The dual-beam design of KCWI passed a Preliminary Design Review in summer 2011. The detailed design of the KCWI blue channel (350 to 700 nm) is now nearly complete, with the red channel (530 to 1050 nm) planned for a phased implementation contingent upon additional funding. KCWI builds on the experience of the Caltech team in implementing the Cosmic Web Imager (CWI), in operation since 2009 at Palomar Observatory. KCWI adds considerable flexibility to the CWI design, and will take full advantage of the excellent seeing and dark sky above Mauna Kea with a selectable nod-and-shuffle observing mode. The KCWI team is led by Caltech (project management, design and implementation) in partnership with the University of California at Santa Cruz (camera optical and mechanical design) and the W. M. Keck Observatory (program oversight and observatory interfaces).

  20. Physical retrieval of precipitation water contents from Special Sensor Microwave/Imager (SSM/I) data. Part 2: Retrieval method and applications (report version)

    Science.gov (United States)

    Olson, William S.

    1990-01-01

    A physical retrieval method for estimating precipitating water distributions and other geophysical parameters based upon measurements from the DMSP-F8 SSM/I is developed. Three unique features of the retrieval method are (1) sensor antenna patterns are explicitly included to accommodate varying channel resolution; (2) precipitation-brightness temperature relationships are quantified using the cloud ensemble/radiative parameterization; and (3) spatial constraints are imposed for certain background parameters, such as humidity, which vary more slowly in the horizontal than the cloud and precipitation water contents. The general framework of the method will facilitate the incorporation of measurements from the SSM/T, SSM/T-2 and geostationary infrared measurements, as well as information from conventional sources (e.g., radiosondes) or numerical forecast model fields.

  1. Designing an image retrieval interface for abstract concepts within the domain of journalism

    NARCIS (Netherlands)

    R. Besseling (Ron)

    2011-01-01

    Research has shown that users have difficulties finding images which illustrate abstract concepts. We carried out a user study that confirms the finding that the selection of search terms is perceived as difficult and that users find the subjectivity of abstract concepts problematic. In

  2. MedXViewer: an extensible web-enabled software package for medical imaging

    Science.gov (United States)

    Looney, P. T.; Young, K. C.; Mackenzie, Alistair; Halling-Brown, Mark D.

    2014-03-01

    MedXViewer (Medical eXtensible Viewer) is an application designed to allow workstation-independent, PACS-less viewing of, and interaction with, anonymised medical images (e.g. observer studies). The application was initially implemented for use in digital mammography and tomosynthesis, but the flexible software design allows it to be easily extended to other imaging modalities. Regions of interest can be identified by a user, and any associated information about a mark, an image or a study can be added. The questions and settings can be easily configured depending on the needs of the research, allowing both ROC and FROC studies to be performed. The extensible nature of the design allows other functionality and hanging protocols to be available for each study. Panning, windowing, zooming and moving through slices are all available, while modality-specific features can be easily enabled, e.g. quadrant zooming in mammographic studies. MedXViewer can integrate with a web-based image database, allowing results and images to be stored centrally. The software and images can be downloaded remotely from this centralised data-store. Alternatively, the software can run without a network connection, where the images and results can be encrypted and stored locally on a machine or external drive. Due to the advanced workstation-style functionality, the simple deployment on heterogeneous systems over the internet without a requirement for administrative access, and the ability to utilise a centralised database, MedXViewer has been used for running remote paper-less observer studies and is capable of providing a training infrastructure and co-ordinating remote collaborative viewing sessions (e.g. cancer reviews, interesting cases).

  3. Multimedia human brain database system for surgical candidacy determination in temporal lobe epilepsy with content-based image retrieval

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-01-01

    This paper presents the development of a human brain multimedia database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1- and T2-weighted MRI and FLAIR MRI, plus ictal and interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between the attribute X of the entity Y and the outcome of a temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y such as volume or average curvature. The outcome of the surgery can be any surgery assessment such as memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high signal intensity average on FLAIR images within the hippocampus. This indication is largely consistent with the surgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may discover partially invisible correlations between the contents of different modalities of data and the outcome of the surgery.

  4. Towards Intelligible Query Processing in Relevance Feedback-Based Image Retrieval Systems

    OpenAIRE

    Mohammed, Belkhatir

    2008-01-01

    We have specified within the scope of this paper a framework combining semantics and relational (spatial) characterizations within a coupled architecture in order to address the semantic gap. This framework is instantiated by an operational model based on a sound logic-based formalism, allowing the definition of a representation for image documents and a matching function to compare index and query structures. We have specified a query framework coupling keyword-based querying with a relevance feedba...

  5. An Algorithm for Surface Current Retrieval from X-band Marine Radar Images

    Directory of Open Access Journals (Sweden)

    Chengxi Shen

    2015-06-01

    Full Text Available In this paper, a novel current inversion algorithm for X-band marine radar images is proposed. The routine, for which deep water is assumed, begins with a 3-D FFT of the radar image sequence, followed by the extraction of the dispersion shell from the 3-D image spectrum. Next, the dispersion shell is converted to a polar current shell (PCS) using a polar coordinate transformation. After removing outliers along each radial direction of the PCS, a robust sinusoidal curve fitting is applied to the data points along each circumferential direction of the PCS. The angle corresponding to the maximum of the estimated sinusoid function is determined to be the current direction, and the amplitude of this sinusoidal function is the current speed. For validation, the algorithm is tested against both simulated radar images and field data collected by a vertically-polarized X-band system and ground-truthed with measurements from an acoustic Doppler current profiler (ADCP). From the field data, it is observed that when the current speed is less than 0.5 m/s, the root mean square differences between the radar-derived and the ADCP-measured current speed and direction are 7.3 cm/s and 32.7°, respectively. The results indicate that the proposed procedure, unlike most existing current inversion schemes, is not susceptible to high current speeds and circumvents the need to consider aliasing. Meanwhile, the relatively low computational cost makes it an excellent choice in practical marine applications.
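
    The circumferential sinusoid fit at the heart of the PCS step can be illustrated with a plain linear least-squares sketch. This is a simplified stand-in for the paper's robust fitting (no outlier handling), and the function name is hypothetical:

```python
import math
import numpy as np

def fit_current_from_pcs(angles_rad, speeds):
    """Least-squares fit of U*cos(theta - theta_c) to PCS samples.

    Writing U*cos(theta - theta_c) = a*cos(theta) + b*sin(theta) makes the
    fit linear in (a, b); then U = hypot(a, b) is the current speed and
    theta_c = atan2(b, a) is the current direction (the angle at which the
    fitted sinusoid attains its maximum).
    """
    th = np.asarray(angles_rad, dtype=float)
    y = np.asarray(speeds, dtype=float)
    A = np.column_stack([np.cos(th), np.sin(th)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return math.hypot(a, b), math.atan2(b, a)  # (speed, direction)
```

    Fitting in the (a, b) parameterization avoids a nonlinear optimizer entirely, which is consistent with the low computational cost the abstract emphasizes.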

  6. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol

    Directory of Open Access Journals (Sweden)

    Hui-Qun Wu

    2013-12-01

    Full Text Available AIM: To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication through the internet. METHODS: Firstly, a telemedicine-based eye care work flow was established based on the Integrating the Healthcare Enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent object (WADO) protocol, which contains three tiers. RESULTS: In any client system with a web browser installed, clinicians could log in to the eye-PACS to observe fundus images and reports. A structured report of multipurpose internet mail extensions (MIME) type, saved as pdf/html with a reference link to the relevant fundus image using the WADO syntax, could provide enough information for clinicians. Some functions provided by the open-source Oviyam could be used to query, zoom, move, measure and view DICOM fundus images. CONCLUSION: Such a web eye-PACS in compliance with the WADO protocol could be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.

  7. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol.

    Science.gov (United States)

    Wu, Hui-Qun; Lv, Zheng-Min; Geng, Xing-Yun; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng

    2013-01-01

    To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication through the internet. Firstly, a telemedicine-based eye care work flow was established based on the Integrating the Healthcare Enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent object (WADO) protocol, which contains three tiers. In any client system with a web browser installed, clinicians could log in to the eye-PACS to observe fundus images and reports. A structured report of multipurpose internet mail extensions (MIME) type, saved as pdf/html with a reference link to the relevant fundus image using the WADO syntax, could provide enough information for clinicians. Some functions provided by the open-source Oviyam could be used to query, zoom, move, measure and view DICOM fundus images. Such a web eye-PACS in compliance with the WADO protocol could be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
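
    A WADO request of the kind such an eye-PACS serves is an HTTP GET with standardized query parameters. The sketch below builds a WADO-URI request; the parameter names follow the DICOM WADO-URI convention, while the base URL and UIDs in the example are hypothetical:

```python
from urllib.parse import urlencode

def wado_uri(base_url, study_uid, series_uid, object_uid,
             content_type="image/jpeg"):
    """Build a WADO-URI GET request for a single DICOM object.

    requestType, studyUID, seriesUID, objectUID and contentType are the
    standard WADO-URI query parameters; the base URL is whatever endpoint
    the PACS deployment exposes.
    """
    params = {
        "requestType": "WADO",
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type,
    }
    return base_url + "?" + urlencode(params)
```

    Because the request is a plain URL, a browser-based client tier (as in the eye-PACS described above) can embed it directly in an `img` tag or a report link.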

  8. New nuclear data service at CNEA: retrieval of the update libraries from a local Web-Server; Nuevo servicio de datos nucleares en CNEA: obtencion de bibliotecas actualizadas desde un Servidor Local

    Energy Technology Data Exchange (ETDEWEB)

    Suarez, Patricia M [Comision Nacional de Energia Atomica, Ezeiza (Argentina). Centro Atomico Ezeiza; Pepe, Maria E [Comision Nacional de Energia Atomica, General San Martin (Argentina). Centro Atomico Constituyentes; Sbaffoni, Maria M [Comision Nacional de Energia Atomica, Buenos Aires (Argentina). Gerencia de Tecnologia

    2000-07-01

    A new on-line Nuclear Data Service was implemented at the National Atomic Energy Commission (CNEA) Web site. The information usually issued by the Nuclear Data Section of the IAEA (NDS-IAEA) on CD-ROM, as well as complementary libraries periodically downloaded from a mirror server of the NDS-IAEA Service located at IPEN, Brazil, is available on the new CNEA Web page. On the site, users can find numerical data on neutron, charged-particle, and photonuclear reactions, nuclear structure, and decay data, with related bibliographic information. This data server is permanently maintained and updated by CNEA staff members. This crew also offers assistance on the use and retrieval of nuclear data to local users. (author)

  9. Satellite image simulations for model-supervised, dynamic retrieval of crop type and land use intensity

    Science.gov (United States)

    Bach, H.; Klug, P.; Ruf, T.; Migdall, S.; Schlenz, F.; Hank, T.; Mauser, W.

    2015-04-01

    To support food security, information products about the actual cropping area per crop type, the current status of agricultural production and estimated yields, as well as the sustainability of the agricultural management are necessary. Based on this information, well-targeted land management decisions can be made. Remote sensing is in a unique position to contribute to this task, as it is globally available and provides a plethora of information about current crop status. M4Land is a comprehensive system in which a crop growth model (PROMET) and a reflectance model (SLC) are coupled in order to provide these information products by analyzing multi-temporal satellite images. SLC uses modelled surface state parameters from PROMET, such as leaf area index or phenology of different crops, to simulate spatially distributed surface reflectance spectra. This is the basis for generating artificial satellite images that account for sensor-specific configurations (spectral bands, solar and observation geometries). Ensembles of model runs are used to represent different crop types, fertilization status, soil colour and soil moisture. By multi-temporal comparison of simulated and real satellite images, the land cover/crop type can be classified in a dynamic, model-supervised way and without in-situ training data. The method is demonstrated for an agricultural test site in Bavaria. Its transferability is studied by analysing PROMET model results for the rest of Germany. In particular, the simulated phenological development can be verified at this scale in order to understand whether PROMET is able to adequately simulate spatial as well as temporal (intra- and inter-season) crop growth conditions, a prerequisite for the model-supervised approach. This sophisticated new technology allows monitoring of management decisions on the field level using high-resolution optical data (presently RapidEye and Landsat). 
The M4Land analysis system is designed to integrate multi-mission data and is

  10. An On-Demand Retrieval Method Based on Hybrid NoSQL for Multi-Layer Image Tiles in Disaster Reduction Visualization

    Directory of Open Access Journals (Sweden)

    Linyao Qiu

    2017-01-01

    Full Text Available Monitoring, response, mitigation and damage assessment of disasters place a wide variety of demands on the spatial and temporal resolutions of remote sensing images. Images are divided into tile pyramids by data source or resolution and published as independent image services for visualization. A disaster-affected area is commonly covered by multiple image layers to express hierarchical surface information, which generates a large number of identically named tiles from different layers overlaying the same location. The traditional tile retrieval method for visualization cannot distinguish between distinct layers and traverses all image datasets for each tile query. This process produces redundant queries and invalid accesses that can seriously affect the visualization performance of clients, servers and network transmission. This paper proposes an on-demand retrieval method for multi-layer images and defines semantic annotations to enrich the description of each dataset. By matching visualization demands with the semantic information of datasets, this method automatically filters out inappropriate layers and finds the most suitable layer for the final tile query. The design and implementation are based on a two-layer NoSQL database architecture that provides scheduling optimization and concurrent processing capability. The experimental results demonstrate the effectiveness and stability of the approach for multi-layer retrieval in disaster reduction visualization.
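
    The core idea of matching a tile query against per-layer semantic annotations can be sketched as below. The annotation keys (zoom range, coverage extent, acquisition date) and the "prefer the most recent layer" ranking are assumptions for illustration; the paper defines its own annotation schema:

```python
def select_layer(layers, zoom, bbox, want_latest=True):
    """Filter image layers by semantic annotations and pick the most suitable one.

    Each layer is a dict with hypothetical annotation keys:
      'min_zoom', 'max_zoom' - zoom levels the tile pyramid covers
      'bbox'                 - (minx, miny, maxx, maxy) coverage extent
      'acquired'             - acquisition date string, used to rank candidates
    """
    def covers(layer_bbox, query_bbox):
        lx0, ly0, lx1, ly1 = layer_bbox
        qx0, qy0, qx1, qy1 = query_bbox
        return lx0 <= qx0 and ly0 <= qy0 and lx1 >= qx1 and ly1 >= qy1

    # drop layers that cannot serve the requested zoom level or extent
    candidates = [l for l in layers
                  if l["min_zoom"] <= zoom <= l["max_zoom"]
                  and covers(l["bbox"], bbox)]
    if not candidates:
        return None
    # for disaster monitoring, prefer the most recent acquisition
    return max(candidates, key=lambda l: l["acquired"]) if want_latest else candidates[0]
```

    Filtering before the tile query is what removes the redundant traversal of every image dataset: only one layer's tile store is touched per request.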

  11. Use of MODIS Sensor Images Combined with Reanalysis Products to Retrieve Net Radiation in Amazonia

    Science.gov (United States)

    de Oliveira, Gabriel; Brunsell, Nathaniel A.; Moraes, Elisabete C.; Bertani, Gabriel; dos Santos, Thiago V.; Shimabukuro, Yosio E.; Aragão, Luiz E. O. C.

    2016-01-01

    In the Amazon region, the estimation of radiation fluxes through remote sensing techniques is hindered by the lack of ground measurements required as input in the models, as well as the difficulty of obtaining cloud-free images. Here, we assess an approach to estimate net radiation (Rn) and its components under all-sky conditions for the Amazon region through the Surface Energy Balance Algorithm for Land (SEBAL) model, utilizing only remote sensing and reanalysis data. The study period comprised six years, between January 2001–December 2006, and images from the MODIS sensor aboard the Terra satellite and GLDAS reanalysis products were utilized. The estimates were evaluated with flux tower measurements within the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) project. Comparison between estimates obtained by the proposed method and observations from LBA towers showed errors between 12.5% and 16.4% and 11.3% and 15.9% for instantaneous and daily Rn, respectively. Our approach was adequate to minimize the problem of strong cloudiness over the region and allowed the spatial distribution of net radiation components in Amazonia to be mapped consistently. We conclude that the integration of reanalysis products and satellite data, eliminating the need for surface measurements as model input, is a useful proposition for the spatialization of radiation fluxes in the Amazon region, which may serve as input needed by algorithms that aim to determine evapotranspiration, the most important component of the Amazon hydrological balance. PMID:27347957

  12. Use of MODIS Sensor Images Combined with Reanalysis Products to Retrieve Net Radiation in Amazonia.

    Science.gov (United States)

    de Oliveira, Gabriel; Brunsell, Nathaniel A; Moraes, Elisabete C; Bertani, Gabriel; Dos Santos, Thiago V; Shimabukuro, Yosio E; Aragão, Luiz E O C

    2016-06-24

    In the Amazon region, the estimation of radiation fluxes through remote sensing techniques is hindered by the lack of ground measurements required as input in the models, as well as the difficulty of obtaining cloud-free images. Here, we assess an approach to estimate net radiation (Rn) and its components under all-sky conditions for the Amazon region through the Surface Energy Balance Algorithm for Land (SEBAL) model, utilizing only remote sensing and reanalysis data. The study period comprised six years, between January 2001-December 2006, and images from the MODIS sensor aboard the Terra satellite and GLDAS reanalysis products were utilized. The estimates were evaluated with flux tower measurements within the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) project. Comparison between estimates obtained by the proposed method and observations from LBA towers showed errors between 12.5% and 16.4% and 11.3% and 15.9% for instantaneous and daily Rn, respectively. Our approach was adequate to minimize the problem of strong cloudiness over the region and allowed the spatial distribution of net radiation components in Amazonia to be mapped consistently. We conclude that the integration of reanalysis products and satellite data, eliminating the need for surface measurements as model input, is a useful proposition for the spatialization of radiation fluxes in the Amazon region, which may serve as input needed by algorithms that aim to determine evapotranspiration, the most important component of the Amazon hydrological balance.
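
    The net radiation budget underlying SEBAL is a short closed-form expression, and can be sketched as below. The formula is the standard SEBAL surface radiation balance; the numerical inputs in the usage example are illustrative values, not data from the study:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m-2 K-4)

def net_radiation(rs_down, albedo, t_air_k, t_surf_k, emis_atm, emis_surf):
    """Instantaneous net radiation following the standard SEBAL budget:

        Rn = (1 - albedo)*Rs_down + Rl_down - Rl_up - (1 - emis_surf)*Rl_down

    rs_down   : incoming shortwave radiation (W m-2)
    t_air_k   : near-surface air temperature (K), here from reanalysis
    t_surf_k  : land surface temperature (K), here from MODIS
    """
    rl_down = emis_atm * SIGMA * t_air_k ** 4    # incoming longwave
    rl_up = emis_surf * SIGMA * t_surf_k ** 4    # outgoing longwave
    return (1.0 - albedo) * rs_down + rl_down - rl_up - (1.0 - emis_surf) * rl_down

# Illustrative daytime values for a vegetated tropical surface
rn = net_radiation(rs_down=800.0, albedo=0.15, t_air_k=298.0,
                   t_surf_k=303.0, emis_atm=0.85, emis_surf=0.98)
```

    Replacing the ground-measured inputs (air temperature, incoming shortwave) with reanalysis fields is precisely the substitution the record describes.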

  13. Improving performance of content-based image retrieval schemes in searching for similar breast mass regions: an assessment

    International Nuclear Information System (INIS)

    Wang Xiaohui; Park, Sang Cheol; Zheng Bin

    2009-01-01

    This study aims to assess three methods commonly used in content-based image retrieval (CBIR) schemes and investigate approaches to improve scheme performance. A reference database involving 3000 regions of interest (ROIs) was established. Among them, 400 ROIs were randomly selected to form a testing dataset. Three methods, namely mutual information, Pearson's correlation and a multi-feature-based k-nearest neighbor (KNN) algorithm, were applied to search for the 15 most similar reference ROIs to each testing ROI. The clinical relevance and visual similarity of the search results were evaluated using the areas under receiver operating characteristic (ROC) curves (A_Z) and the average mean square difference (MSD) of the mass boundary spiculation level ratings between testing and selected ROIs, respectively. The results showed that the A_Z values were 0.893 ± 0.009, 0.606 ± 0.021 and 0.699 ± 0.026 for KNN, mutual information and Pearson's correlation, respectively. The A_Z values increased to 0.724 ± 0.017 and 0.787 ± 0.016 for mutual information and Pearson's correlation when using ROIs whose size was adaptively adjusted based on the actual mass size. The corresponding MSD values were 2.107 ± 0.718, 2.301 ± 0.733 and 2.298 ± 0.743. The study demonstrates that, given the diversity of medical images, CBIR schemes using multiple image features and mass-size-based ROIs can achieve significantly improved performance.
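
    The multi-feature KNN retrieval step reduces to a nearest-neighbour search in feature space. A minimal sketch, assuming Euclidean distance over precomputed per-ROI feature vectors (the paper's actual features and distance weighting are not reproduced here):

```python
import numpy as np

def knn_retrieve(query_features, reference_features, k=15):
    """Return the indices of the k reference ROIs closest to the query ROI.

    query_features     : (d,) feature vector of the testing ROI
    reference_features : (N, d) matrix, one feature vector per reference ROI
    """
    diffs = reference_features - query_features   # broadcast to (N, d)
    dists = np.sqrt((diffs ** 2).sum(axis=1))     # Euclidean distance per ROI
    return np.argsort(dists)[:k]                  # k most similar, nearest first
```

    The "15 most similar" retrieval in the study corresponds to k=15; the adaptive-ROI-size improvement affects only how the features are extracted, not this search step.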

  14. Performance of Web-based image distribution: client-oriented measurements

    International Nuclear Information System (INIS)

    Bergh, B.; Pietsch, M.; Schlaefke, A.; Vogl, T.J.

    2003-01-01

    The aim of this study was to define a clinically suitable personal computer (PC) configuration for Web-based image distribution and to assess the influence of different hardware and software configurations on the performance. Using specially developed software, the time-to-display (TTD) for various PC configurations was measured. Different processor speeds, amounts of random access memory (RAM), screen resolutions, graphics adapters, network speeds, operating systems and examination types (computed radiography, CT, MRI) were evaluated, providing more than half a million measurements. Processor speed was the most relevant factor for the TTD; doubling the speed halved the TTD. Below processor speeds of 350 MHz, the TTD mostly remained above 5 s for 1 CR or 16 CT images; here, Windows NT with lossy compression was superior. Processor speeds of 350 MHz and over delivered a TTD <5 s; in this case, Windows 2000 and lossless compression were preferable. Screen resolutions above 1280 x 1024 pixels increased the TTD, mainly for CR images. The amount of RAM, network speed and graphics adapter did not have a significant influence. The minimum threshold for clinical routine is any standard off-the-shelf PC better than a Pentium II 350 MHz with 128 MB RAM; hence, high-end PC hardware is not required. (orig.)

  15. An overview of the web-based Google Earth coincident imaging tool

    Science.gov (United States)

    Chander, Gyanesh; Kilough, B.; Gowda, S.

    2010-01-01

    The Committee on Earth Observing Satellites (CEOS) Visualization Environment (COVE) tool is a browser-based application that leverages Google Earth web to display satellite sensor coverage areas. The analysis tool can also be used to identify near simultaneous surface observation locations for two or more satellites. The National Aeronautics and Space Administration (NASA) CEOS System Engineering Office (SEO) worked with the CEOS Working Group on Calibration and Validation (WGCV) to develop the COVE tool. The CEOS member organizations are currently operating and planning hundreds of Earth Observation (EO) satellites. Standard cross-comparison exercises between multiple sensors to compare near-simultaneous surface observations and to identify corresponding image pairs are time-consuming and labor-intensive. COVE is a suite of tools that have been developed to make such tasks easier.

  16. Space Images for NASA/JPL

    Science.gov (United States)

    Boggs, Karen; Gutheinz, Sandy C.; Watanabe, Susan M.; Oks, Boris; Arca, Jeremy M.; Stanboli, Alice; Peez, Martin; Whatmore, Rebecca; Kang, Minliang; Espinoza, Luis A.

    2010-01-01

    Space Images for NASA/JPL is an Apple iPhone application that allows the general public to access featured images from the Jet Propulsion Laboratory (JPL). A back-end infrastructure stores, tracks, and retrieves space images from the JPL Photojournal Web server, and catalogs the information into a streamlined rating infrastructure.

  17. Stress distribution retrieval in granular materials: A multi-scale model and digital image correlation measurements

    Science.gov (United States)

    Bruno, Luigi; Decuzzi, Paolo; Gentile, Francesco

    2016-01-01

    The promise of nanotechnology lies in the possibility of engineering matter on the nanoscale and creating technological interfaces that, because of their small scales, may directly interact with biological objects, creating new strategies for the treatment of pathologies that are otherwise beyond the reach of conventional medicine. Nanotechnology is inherently a multiscale, multiphenomena challenge. Fundamental understanding and highly accurate predictive methods are critical to the successful manufacturing of nanostructured materials, bio/mechanical devices and systems. In biomedical engineering, and in the mechanical analysis of biological tissues, classical continuum approaches are routinely utilized, even though these disregard the discrete nature of tissues, which are an interpenetrating network of a matrix (the extracellular matrix, ECM) and a generally large but finite number of cells with a size falling in the micrometer range. Here, we introduce a nano-mechanical theory that accounts for the non-continuum nature of biological and other discrete systems. This discrete field theory, doublet mechanics (DM), is a technique to model the mechanical behavior of materials over multiple scales, ranging from some millimeters down to a few nanometers. In the paper, we use this theory to predict the response of a granular material to an externally applied load. Such a representation is extremely attractive in modeling biological tissues, which may be considered as a spatial set of a large number of particulates (cells) dispersed in an extracellular matrix. More importantly, using digital image correlation (DIC) optical methods, we provide an experimental verification of the model.

  18. Quantitative evaluation of a single-distance phase-retrieval method applied on in-line phase-contrast images of a mouse lung

    International Nuclear Information System (INIS)

    Mohammadi, Sara; Larsson, Emanuel; Alves, Frauke; Dal Monego, Simeone; Biffi, Stefania; Garrovo, Chiara; Lorenzon, Andrea; Tromba, Giuliana; Dullin, Christian

    2014-01-01

    Quantitative analysis concerning the application of a single-distance phase-retrieval algorithm on in-line phase-contrast images of a mouse lung at different sample-to-detector distances is presented. Propagation-based X-ray phase-contrast computed tomography (PBI) has already proven its potential in a great variety of soft-tissue-related applications, including lung imaging. However, the strong edge enhancement caused by the phase effects often hampers image segmentation and therefore the quantitative analysis of data sets. Here, the benefits of applying single-distance phase retrieval prior to the three-dimensional reconstruction (PhR) are discussed and quantified against three-dimensional reconstructions of conventional PBI data sets in terms of contrast-to-noise ratio (CNR) and preservation of image features. The PhR data sets show a more than tenfold higher CNR and only minor blurring of the edges when compared with PBI in a predominantly absorption-based set-up. Accordingly, phase retrieval increases the sensitivity and provides more functionality in computed tomography imaging.
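
    The CNR metric used to quantify the benefit of phase retrieval can be computed from two image regions. A minimal sketch, assuming the common definition of CNR as the absolute mean difference between a signal and a background region divided by the background noise (the paper may use a different noise estimate):

```python
import numpy as np

def cnr(signal_region, background_region):
    """Contrast-to-noise ratio between a tissue region and background:

        CNR = |mean_signal - mean_background| / std_background
    """
    signal_region = np.asarray(signal_region, dtype=float)
    background_region = np.asarray(background_region, dtype=float)
    return abs(signal_region.mean() - background_region.mean()) / background_region.std()
```

    Comparing this ratio on matched regions of the PhR and PBI reconstructions is what yields the reported tenfold improvement.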

  19. Memory retrieval of smoking-related images induce greater insula activation as revealed by an fMRI-based delayed matching to sample task.

    Science.gov (United States)

    Janes, Amy C; Ross, Robert S; Farmer, Stacey; Frederick, Blaise B; Nickerson, Lisa D; Lukas, Scott E; Stern, Chantal E

    2015-03-01

    Nicotine dependence is a chronic and difficult to treat disorder. While environmental stimuli associated with smoking precipitate craving and relapse, it is unknown whether smoking cues are cognitively processed differently than neutral stimuli. To evaluate working memory differences between smoking-related and neutral stimuli, we conducted a delay-match-to-sample (DMS) task concurrently with functional magnetic resonance imaging (fMRI) in nicotine-dependent participants. The DMS task evaluates brain activation during the encoding, maintenance and retrieval phases of working memory. Smoking images induced significantly more subjective craving, and greater midline cortical activation during encoding in comparison to neutral stimuli that were similar in content yet lacked a smoking component. The insula, which is involved in maintaining nicotine dependence, was active during the successful retrieval of previously viewed smoking versus neutral images. In contrast, neutral images required more prefrontal cortex-mediated active maintenance during the maintenance period. These findings indicate that distinct brain regions are involved in the different phases of working memory for smoking-related versus neutral images. Importantly, the results implicate the insula in the retrieval of smoking-related stimuli, which is relevant given the insula's emerging role in addiction. © 2013 Society for the Study of Addiction.

  20. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    International Nuclear Information System (INIS)

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-01-01

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns, and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. 
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying

  1. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland 20993 (United States)

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns, and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. 
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying

  2. Hera : Development of semantic web information systems

    NARCIS (Netherlands)

    Houben, G.J.P.M.; Barna, P.; Frasincar, F.; Vdovják, R.; Cuella Lovelle, J.M.; et al., xx

    2003-01-01

    As a consequence of the success of the Web, methodologies for information system development need to consider systems that use the Web paradigm. These Web Information Systems (WIS) use Web technologies to retrieve information from the Web and to deliver information in a Web presentation to the

  3. Performance management of high performance computing for medical image processing in Amazon Web Services

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  4. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
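
    The cost/benefit decision between local serial execution and cloud execution mentioned in these two records can be sketched with a toy model. The paper's actual formulae are not reproduced here; the functions below assume simple linear per-node-hour pricing and embarrassingly parallel jobs, purely for illustration:

```python
import math

def wall_clock_hours(n_jobs, hours_per_job, n_nodes=1):
    """Wall-clock time when jobs run n_nodes at a time (serial if n_nodes=1)."""
    return math.ceil(n_jobs / n_nodes) * hours_per_job

def cloud_cost(n_jobs, hours_per_job, n_nodes, usd_per_node_hour):
    """Total EC2-style cost: every node is billed for the batch's wall-clock time."""
    return wall_clock_hours(n_jobs, hours_per_job, n_nodes) * n_nodes * usd_per_node_hour
```

    For example, 100 two-hour jobs run serially take 200 wall-clock hours locally, but only 20 hours on ten nodes; whether that speed-up justifies the node-hour bill is exactly the trade-off the analysis formalizes.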

  5. Retrieval of precipitable water using near infrared channels of Global Imager/Advanced Earth Observing Satellite-II (GLI/ADEOS-II)

    International Nuclear Information System (INIS)

    Kuji, M.; Uchiyama, A.

    2002-01-01

    Retrieval of precipitable water (vertically integrated water vapor amount) is proposed using the near infrared channels of the Global Imager onboard the Advanced Earth Observing Satellite-II (GLI/ADEOS-II). The retrieval algorithm is based on the principle adopted for the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Earth Observing System (EOS) satellite series. Simulations were carried out with the GLI Signal Simulator (GSS) to calculate the radiance ratio between water vapor absorbing bands and non-absorbing bands. As a result, it is found that for the case of a high spectral reflectance background (a bright target), such as the land surface, the calibration curves are sensitive to the precipitable water variation. For the case of a low albedo background (a dark target), such as the ocean surface, on the contrary, the calibration curve is not very sensitive to its variation under conditions of large water vapor amounts. It turns out that aerosol loading has little influence on the retrieval over a bright target for aerosol optical thicknesses of less than about 1.0 at 500 nm. It is also anticipated that simultaneous retrieval of the water vapor amount using GLI data along with other channels will lead to improved accuracy of the determination of surface geophysical properties, such as vegetation, ocean color, and snow and ice, through better atmospheric correction.
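
    The radiance ratio between an absorbing and a non-absorbing band approximates the water-vapour transmittance, which can then be inverted for precipitable water. A minimal sketch, assuming the exponential-square-root transmittance form used in MODIS-style near-IR retrievals, T = exp(-(a + b*sqrt(PW))), with band-specific calibration coefficients a and b (the exact functional form and coefficients for GLI are not given in this record):

```python
import numpy as np

def precipitable_water_from_ratio(l_absorbing, l_window, a, b):
    """Invert a two-channel radiance ratio for precipitable water (PW).

    l_absorbing : radiance in the water-vapour-absorbing near-IR band
    l_window    : radiance in a nearby non-absorbing (window) band
    a, b        : calibration coefficients of T = exp(-(a + b*sqrt(PW)))
    """
    t = l_absorbing / l_window            # ratio ~ water-vapour transmittance
    return ((-np.log(t) - a) / b) ** 2    # invert for PW (e.g. g/cm^2)
```

    The record's finding that dark targets are insensitive follows directly: over a dark surface the reflected signal in both bands is small, so the ratio is dominated by noise and atmospheric scattering rather than by absorption.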

  6. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web became one of the most valuable resources for information retrieval and knowledge discovery due to the permanent increase in the amount of data available online. Taking into consideration the Web's dimension, users easily get lost in its rich hyperstructure. The application of data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to the categories of Web mining and focus on one of these categories: Web structure mining. Web structure mining, one of the three categories of Web mining, is a tool used to identify the relationships between Web pages linked by information or direct link connections. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and exploits hyperlinks for further Web applications such as Web search.

  7. Significant Benefits from Libraries in Web 3.0 Environment

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Keywords- Web 3.0, library 3.0, Web 3.0 Applications, Semantic. Web ... providing virtual information services, and other services cannot be ... web third generation, definition, beginning, and retrieve system. The study ...

  8. A web service system supporting three-dimensional post-processing of medical images based on WADO protocol.

    Science.gov (United States)

    He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian

    2015-02-01

    Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images are of great significance for image reading and diagnosis. As a part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on series of images. This paper analyzed the technical features of three-dimensional post-processing operations on volume data, and then designed and implemented a web service system for three-dimensional post-processing of medical images based on the WADO protocol. In order to improve the scalability of the proposed system, the business tasks and calculation operations were separated into two modules. The results showed that the proposed system could provide three-dimensional post-processing of medical images to multiple clients at the same time, which meets the demand for accessing three-dimensional post-processing operations on volume data over the web.

  9. Quality issues in the management of web information

    CERN Document Server

    Bordogna, Gloria; Jain, Lakhmi

    2013-01-01

    This research volume presents a sample of recent contributions related to the issue of quality assessment for Web-based information in the context of information access, retrieval, and filtering systems. The advent of the Web and the uncontrolled process of document generation have raised the problem of assessing the quality of information on the Web, by considering the nature of documents (texts, images, video, sounds, and so on), the genre of documents (news, geographic information, ontologies, medical records, product records, and so on), the reputation of information sources and sites, and, last but not least, the actions performed on documents (content indexing, retrieval and ranking, collaborative filtering, and so on). The volume constitutes a compendium of both heterogeneous approaches and sample applications focusing on specific aspects of the quality assessment for Web-based information for researchers, PhD students and practitioners carrying out their research activity in the field of W...

  10. Introduction to information retrieval

    CERN Document Server

    Manning, Christopher D; Schütze, Hinrich

    2008-01-01

    Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Slides and additional exercises (with solutions for lecturers) are also available through the book's supporting website to help course instructors prepare their lectures.
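    The classical indexing-and-search pipeline the book teaches can be illustrated with a toy inverted index and tf-idf ranking; this is a generic textbook construction, not code from the book itself.

```python
import math
from collections import Counter, defaultdict

# Toy corpus; tokenization is naive whitespace splitting.
docs = {
    0: "web search and web crawling",
    1: "text classification and clustering",
    2: "evaluating web search systems",
}

# Build the inverted index: term -> {doc_id: term frequency}.
index = defaultdict(dict)
for doc_id, text in docs.items():
    for term, tf in Counter(text.split()).items():
        index[term][doc_id] = tf

def score(query):
    """Rank documents by a simple tf-idf dot product."""
    n = len(docs)
    scores = Counter()
    for term in query.split():
        postings = index.get(term, {})
        if not postings:
            continue
        idf = math.log(n / len(postings))   # rarer terms weigh more
        for doc_id, tf in postings.items():
            scores[doc_id] += tf * idf
    return scores.most_common()

ranking = score("web search")
```

    Document 0 mentions "web" twice and so outranks document 2; document 1 matches neither query term and is absent from the ranking.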

  11. Improved optical flow velocity analysis in SO2 camera images of volcanic plumes - implications for emission-rate retrievals investigated at Mt Etna, Italy and Guallatiri, Chile

    Science.gov (United States)

    Gliß, Jonas; Stebel, Kerstin; Kylling, Arve; Sudbø, Aasmund

    2018-02-01

    Accurate gas velocity measurements in emission plumes are highly desirable for various atmospheric remote sensing applications. The imaging technique of UV SO2 cameras is commonly used to monitor SO2 emissions from volcanoes and anthropogenic sources (e.g. power plants, ships). The camera systems capture the emission plumes at high spatial and temporal resolution. This allows the gas velocities in the plume to be retrieved directly from the images. The latter can be measured at a pixel level using optical flow (OF) algorithms. This is particularly advantageous under turbulent plume conditions. However, OF algorithms intrinsically rely on contrast in the images and often fail to detect motion in low-contrast image areas. We present a new method to identify ill-constrained OF motion vectors and replace them using the local average velocity vector. The latter is derived based on histograms of the retrieved OF motion fields. The new method is applied to two example data sets recorded at Mt Etna (Italy) and Guallatiri (Chile). We show that in many cases, the uncorrected OF yields significantly underestimated SO2 emission rates. We further show that our proposed correction can account for this and that it significantly improves the reliability of optical-flow-based gas velocity retrievals. In the case of Mt Etna, the SO2 emissions of the north-eastern crater are investigated. The corrected SO2 emission rates range between 4.8 and 10.7 kg s-1 (average of 7.1 ± 1.3 kg s-1) and are in good agreement with previously reported values. For the Guallatiri data, the emissions of the central crater and a fumarolic field are investigated. The retrieved SO2 emission rates are between 0.5 and 2.9 kg s-1 (average of 1.3 ± 0.5 kg s-1) and provide the first report of SO2 emissions from this remotely located and inaccessible volcano.
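    The correction idea, identifying ill-constrained vectors and replacing them with a local average derived from histograms of the motion field, can be sketched roughly as follows. This is a simplified illustration, not the authors' published algorithm, and the reliability mask is assumed to be given.

```python
import numpy as np

def correct_flow(flow, reliable):
    """Replace ill-constrained optical-flow vectors by an average
    vector derived from a histogram of the reliable motion field.

    flow:     (H, W, 2) array of per-pixel (dx, dy) vectors
    reliable: (H, W) boolean mask of well-constrained vectors
    """
    good = flow[reliable]                      # (N, 2) reliable vectors
    angles = np.arctan2(good[:, 1], good[:, 0])
    # Find the dominant direction bin of the angle histogram.
    hist, edges = np.histogram(angles, bins=36, range=(-np.pi, np.pi))
    k = np.argmax(hist)
    in_peak = (angles >= edges[k]) & (angles < edges[k + 1])
    mean_vec = good[in_peak].mean(axis=0)      # average vector in the peak
    out = flow.copy()
    out[~reliable] = mean_vec                  # fill unreliable pixels
    return out

flow = np.zeros((4, 4, 2))
flow[:2, :, 0] = 1.0                           # top half moves right
reliable = np.zeros((4, 4), dtype=bool)
reliable[:2] = True                            # only the top half is trusted
corrected = correct_flow(flow, reliable)
```

    The untrusted lower half, originally zero motion, inherits the dominant rightward vector of the trusted region.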

  12. Nuclear expert web search and crawler algorithm

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D.

    2013-01-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based, nuclear-oriented expert system guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)
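    The crawler-plus-relevance-classifier design can be caricatured as follows: a toy sketch with a keyword test standing in for the neural network, and an in-memory page table standing in for real HTTP fetches.

```python
import re
from collections import deque

# Hypothetical pages mapped to their HTML, standing in for network
# fetches; a real crawler would download each URL instead.
PAGES = {
    "http://a.example/": '<a href="http://b.example/">reactor physics</a>',
    "http://b.example/": '<a href="http://c.example/">cooking recipes</a>',
    "http://c.example/": "nothing here",
}

KEYWORDS = ("nuclear", "reactor", "radiation")

def relevant(text):
    """Crude keyword stand-in for the neural-network relevance test."""
    return any(k in text.lower() for k in KEYWORDS)

def crawl(seed):
    """Breadth-first crawl that only follows links out of relevant pages."""
    seen, queue, hits = set(), deque([seed]), []
    while queue:
        url = queue.popleft()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        body = PAGES[url]
        if relevant(body):
            hits.append(url)
            queue.extend(re.findall(r'href="([^"]+)"', body))
    return hits

found = crawl("http://a.example/")
```

    Starting from the seed, the crawler keeps the relevant page and prunes the off-topic branch, which is the mechanism that concentrates retrieval precision on the nuclear domain.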

  13. Nuclear expert web search and crawler algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D., E-mail: thiagoreis@usp.br, E-mail: barroso@ipen.br, E-mail: bdbfilho@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based, nuclear-oriented expert system guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)

  14. Coupled Retrieval of Aerosol Properties and Surface Reflection Using the Airborne Multi-angle SpectroPolarimetric Imager (AirMSPI)

    Science.gov (United States)

    Xu, F.; van Harten, G.; Kalashnikova, O. V.; Diner, D. J.; Seidel, F. C.; Garay, M. J.; Dubovik, O.

    2016-12-01

    The Airborne Multi-angle SpectroPolarimetric Imager (AirMSPI) [1] has been flying aboard the NASA ER-2 high-altitude aircraft since October 2010. In step-and-stare operation mode, AirMSPI acquires radiance and polarization data at 355, 380, 445, 470*, 555, 660*, 865*, and 935 nm (* denotes polarimetric bands). The imaged area covers about 10 km by 10 km and is observed from 9 view angles between ±67° off of nadir. We have developed an efficient and flexible code that uses the information content of AirMSPI data for a coupled retrieval of aerosol properties and surface reflection. The retrieval was built on the multi-pixel optimization concept [2], using a hybrid radiative transfer model [3] that combines the Markov Chain [4] and adding/doubling [5] methods. The convergence and robustness of our algorithm are ensured by applying constraints on (a) the spectral variation of the Bidirectional Polarization Distribution Function (BPDF) and the angular shape of the Bidirectional Reflectance Distribution Function (BRDF); (b) the spectral variation of aerosol optical properties; and (c) the spatial variation of aerosol parameters across neighboring image pixels. Our retrieval approach has been tested using over 20 AirMSPI datasets with low to moderately high aerosol loadings.

  15. Unified Retrieval of Cloud Properties, Atmospheric Profiles, and Surface Parameters from Combined DMSP Imager and Sounder Data

    National Research Council Canada - National Science Library

    Isaacs, Ronald

    2000-01-01

    The main objective of the proposed study was to investigate the complementary information provided by microwave and infrared sensors in order to enhance both the microwave retrieval and the current cloud analysis...

  16. Web-Based Software Integration For Dissemination Of Archival Images: The Frontiers Of Science Website

    Directory of Open Access Journals (Sweden)

    Gary Browne

    2011-07-01

    Full Text Available The Frontiers of Science illustrated comic strip of 'science fact' ran from 1961 to 1982, syndicated worldwide through over 600 newspapers. The Rare Books and Special Collections Library at the University of Sydney, in association with Sydney eScholarship, digitized all 939 strips. We aimed to create a website that could disseminate these comic strips to scholars, enthusiasts and the general public. We wanted to enable users to search and browse through the images simply and effectively, with an intuitive and novel viewing platform. Time and resource constraints dictated the use of (mostly open source) code modules wherever possible and the integration and customisation of a range of web-based applications, code snippets and technologies (DSpace, eXtensible Text Framework (XTF), OmniFormat, jQuery Tools, Thickbox and Zoomify), stylistically pulled together using CSS. This approach allowed for a rapid development cycle (6 weeks) to deliver the site on time as well as provide us with a framework for similar projects.

  17. Comment on ‘A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot–Lau grating interferometry’

    International Nuclear Information System (INIS)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Kottler, Christian

    2015-01-01

    In a recent paper (Scholkmann et al 2014 Phys. Med. Biol. 59 1425–40) we presented a new image denoising, fusion and enhancement framework for combining and optimally visualizing x-ray attenuation contrast, differential phase contrast and dark-field contrast images retrieved from x-ray Talbot–Lau grating interferometry. In this comment we give additional information and report on the application of our framework to the breast cancer tissue which we presented in our paper as an example. The applied procedure is suitable for a qualitative comparison of different algorithms; for a quantitative juxtaposition, however, the original data would be needed as input. (comment and reply)

  18. Aerosol Retrieval Sensitivity and Error Analysis for the Cloud and Aerosol Polarimetric Imager on Board TanSat: The Effect of Multi-Angle Measurement

    Directory of Open Access Journals (Sweden)

    Xi Chen

    2017-02-01

    Full Text Available Aerosol scattering is an important source of error in CO2 retrievals from satellites. This paper presents an analysis of aerosol information content from the Cloud and Aerosol Polarimetric Imager (CAPI) onboard the Chinese Carbon Dioxide Observation Satellite (TanSat), to be launched in 2016. Based on optimal estimation theory, aerosol information content is quantified from the radiance and polarization observed by CAPI in terms of the degrees of freedom for signal (DFS). A linearized vector radiative transfer model is used with a linearized Mie code to simulate observations and sensitivities (Jacobians) with respect to aerosol parameters. In satellite nadir mode, the DFS for aerosol optical depth is the largest, but for mode radius it is only 0.55. Observation geometry is found to affect the aerosol DFS through the aerosol scattering phase function, based on comparisons between different viewing zenith angles and solar zenith angles. When TanSat is operated in target mode, we note that multi-angle retrieval represented by three along-track measurements provides an additional 0.31 DFS on average, mainly for mode radius. When adding another two measurements, the a posteriori error decreases by another 2%–6%. The correlation coefficients between retrieved parameters show that aerosol is strongly correlated with surface reflectance, but multi-angle retrieval can weaken this correlation.
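    The DFS metric used here comes from optimal estimation theory: it is the trace of the averaging kernel A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K, where K is the Jacobian and Se, Sa the measurement and prior covariances. A numerical sketch with illustrative values, not the paper's actual state vector or covariances:

```python
import numpy as np

def dfs(K, se_var, sa_var):
    """Degrees of freedom for signal from a Jacobian K, assuming
    diagonal measurement (se_var) and prior (sa_var) covariances.
    """
    Se_inv = np.diag(1.0 / se_var)
    Sa_inv = np.diag(1.0 / sa_var)
    KtSeK = K.T @ Se_inv @ K
    A = np.linalg.solve(KtSeK + Sa_inv, KtSeK)   # averaging kernel
    return np.trace(A)

# Two measurements, two state parameters: the first parameter is
# well constrained (large Jacobian column) and contributes a DFS
# near 1, the second is barely constrained and contributes little.
K = np.array([[1.0, 0.001],
              [0.9, 0.002]])
val = dfs(K, se_var=np.array([1e-4, 1e-4]), sa_var=np.array([1.0, 1.0]))
```

    With these numbers the total DFS comes out close to 1, showing how a weakly sensed parameter (here a stand-in for mode radius) adds almost no information.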

  19. Imaging a memory trace over half a life-time in the medial temporal lobe reveals a time-limited role of CA3 neurons in retrieval

    Science.gov (United States)

    Lux, Vanessa; Atucha, Erika; Kitsukawa, Takashi; Sauvage, Magdalena M

    2016-01-01

    Whether retrieval still depends on the hippocampus as memories age or relies then on cortical areas remains a major controversy. Despite evidence for a functional segregation between CA1, CA3 and parahippocampal areas, their specific role within this frame is unclear. Especially, the contribution of CA3 is questionable as very remote memories might be too degraded to be used for pattern completion. To identify the specific role of these areas, we imaged brain activity in mice during retrieval of recent, early remote and very remote fear memories by detecting the immediate-early gene Arc. Investigating correlates of the memory trace over an extended period allowed us to report that, in contrast to CA1, CA3 is no longer recruited in very remote retrieval. Conversely, we showed that parahippocampal areas are then maximally engaged. These results suggest a shift from a greater contribution of the trisynaptic loop to the temporoammonic pathway for retrieval. DOI: http://dx.doi.org/10.7554/eLife.11862.001 PMID:26880561

  20. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    International Nuclear Information System (INIS)

    Chai, X; Liu, L; Xing, L

    2014-01-01

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations, with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component: it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open-source dcm4chee PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests; ours was implemented in Delphi, Python and PHP, which can process data directly or via a C++ DLL. Results: This software platform is running on a 32-core CPU server that virtually hosts the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize the images and RT structures belonging to this patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. 
This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web

  1. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chai, X; Liu, L; Xing, L [Stanford UniversitySchool of Medicine, Stanford, CA (United States)

    2014-06-01

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations, with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component: it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open-source dcm4chee PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests; ours was implemented in Delphi, Python and PHP, which can process data directly or via a C++ DLL. Results: This software platform is running on a 32-core CPU server that virtually hosts the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize the images and RT structures belonging to this patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. 
This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web

  2. Engineering semantic web information systems in Hera

    NARCIS (Netherlands)

    Vdovják, R.; Frasincar, F.; Houben, G.J.P.M.; Barna, P.

    2003-01-01

    The success of the World Wide Web has caused the concept of information system to change. Web Information Systems (WIS) adopt the Web's paradigm and technologies in order to retrieve information from sources on the Web, and to present the information in terms of a Web or hypermedia

  3. Retrievals of formaldehyde from ground-based FTIR and MAX-DOAS observations at the Jungfraujoch station and comparisons with GEOS-Chem and IMAGES model simulations

    Directory of Open Access Journals (Sweden)

    B. Franco

    2015-04-01

    Full Text Available As a ubiquitous product of the oxidation of many volatile organic compounds (VOCs), formaldehyde (HCHO) plays a key role as a short-lived and reactive intermediate in the atmospheric photo-oxidation pathways leading to the formation of tropospheric ozone and secondary organic aerosols. In this study, HCHO profiles have been successfully retrieved from ground-based Fourier transform infrared (FTIR) solar spectra and UV-visible Multi-AXis Differential Optical Absorption Spectroscopy (MAX-DOAS) scans recorded during the July 2010–December 2012 time period at the Jungfraujoch station (Swiss Alps, 46.5° N, 8.0° E, 3580 m a.s.l.). Analysis of the retrieved products has revealed different vertical sensitivities of the two remote sensing techniques. Furthermore, HCHO amounts simulated by two state-of-the-art chemical transport models (CTMs), GEOS-Chem and IMAGES v2, have been compared to FTIR total columns and MAX-DOAS 3.6–8 km partial columns, accounting for the respective vertical resolution of each ground-based instrument. Using the CTM outputs as the intermediate, FTIR and MAX-DOAS retrievals have shown consistent seasonal modulations of HCHO throughout the investigated period, characterized by a summertime maximum and a wintertime minimum. Such comparisons have also highlighted that FTIR and MAX-DOAS provide complementary products for the HCHO retrieval above the Jungfraujoch station. Finally, tests have revealed that the updated IR parameters from the HITRAN 2012 database have a cumulative effect and significantly decrease the retrieved HCHO columns with respect to the use of the HITRAN 2008 compilation.
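    Comparisons that account for an instrument's vertical resolution conventionally smooth the model profile with the retrieval's averaging kernel, x_s = x_a + A (x_m - x_a). A minimal sketch with made-up numbers, not the study's actual profiles or kernels:

```python
import numpy as np

def smooth_profile(x_model, x_apriori, A):
    """Smooth a model profile with the retrieval's averaging kernel
    before comparing it to the retrieved product, following the
    standard convention x_s = x_a + A (x_m - x_a).
    """
    return x_apriori + A @ (x_model - x_apriori)

x_a = np.array([1.0, 1.0, 1.0])           # a priori profile (arbitrary units)
x_m = np.array([2.0, 1.5, 1.2])           # CTM profile on the same grid
A = np.diag([0.8, 0.5, 0.1])              # diagonal averaging-kernel sketch
x_s = smooth_profile(x_m, x_a, A)
```

    Layers where the retrieval has little sensitivity (small kernel values) are pulled back toward the a priori, which is why FTIR and MAX-DOAS columns must each be compared against a differently smoothed model.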

  4. Development of a Web Application: Recording Learners' Mouse Trajectories and Retrieving their Study Logs to Identify the Occurrence of Hesitation in Solving Word-Reordering Problems

    Directory of Open Access Journals (Sweden)

    Mitsumasa Zushi

    2014-04-01

    Full Text Available Most computer marking systems evaluate the results of the answers reached by learners without looking into the process by which the answers are produced, which will be insufficient to ascertain learners' understanding level because correct answers may well include lucky hunches, namely accidentally correct but not confident answers. In order to differentiate these lucky answers from confident correct ones, we have developed a Web application that can record mouse trajectories during the performance of tasks. Mathematical analyses of these trajectories have revealed that some parameters for mouse movements can be useful indicators to identify the occurrence of hesitation resulting from lack of knowledge or confidence in solving problems.

  5. Management of Scientific Images: an approach to the extraction, annotation and retrieval of figures in the field of High Energy Physics

    CERN Document Server

    Praczyk, Piotr Adam; Mele, Salvatore

    The information environment of the first decade of the XXIst century is unprecedented. The physical barriers limiting access to knowledge are disappearing as traditional methods of accessing information are being replaced or enhanced by computer systems. Digital systems are able to manage much larger sets of documents, confronting information users with a deluge of documents related to their topic of interest. This new situation has created an incentive for the rapid development of Data Mining techniques and for the creation of more efficient search engines capable of limiting the search results to a small subset of the most relevant ones. However, most up-to-date search engines operate using text descriptions of the documents. Those descriptions can either be extracted from the content of the document or be obtained from external sources. Retrieval based on the non-textual content of documents is a subject of ongoing research. In particular, the retrieval of images and unlocking the infor...

  6. Using centrality to rank web snippets

    NARCIS (Netherlands)

    Jijkoun, V.; de Rijke, M.; Peters, C.; Jijkoun, V.; Mandl, T.; Müller, H.; Oard, D.W.; Peñas, A.; Petras, V.; Santos, D.

    2008-01-01

    We describe our participation in the WebCLEF 2007 task, targeted at snippet retrieval from web data. Our system ranks snippets based on a simple similarity-based centrality, inspired by the web page ranking algorithms. We experimented with retrieval units (sentences and paragraphs) and with the

  7. ST Spot Detector: a web-based application for automatic spot and tissue detection for Spatial Transcriptomics image data sets.

    Science.gov (United States)

    Wong, Kim; Fernández Navarro, José; Bergenstråhle, Ludvig; Ståhl, Patrik L; Lundeberg, Joakim

    2018-01-17

    Spatial transcriptomics (ST) is a method which combines high resolution tissue imaging with high throughput transcriptome sequencing data. This data must be aligned with the images for correct visualisation, a process that involves several manual steps. Here we present ST Spot Detector, a web tool that automates and facilitates this alignment through a user friendly interface. Open source under the MIT license, available from https://github.com/SpatialTranscriptomicsResearch/st_spot_detector. jose.fernandez.navarro@scilifelab.se. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  8. A New Image Retrieval Algorithm Based on Vector Quantization

    Institute of Scientific and Technical Information of China (English)

    冀鑫; 冀小平

    2016-01-01

    To address the shortcomings of current colour-based image retrieval algorithms in colour feature extraction, we propose a new colour feature extraction algorithm. The algorithm uses the LBG algorithm to vector-quantize the colour information in HSI space, and then counts the frequency of each codeword in the image to form a colour histogram, so that distortion of the original image features is reduced as far as possible during colour feature extraction. Meanwhile, by setting a threshold value and comparing recall and precision rates over repeated experiments, a satisfactory threshold was found, making the retrieval method more complete. Experimental results showed that the new algorithm can effectively improve the accuracy of image retrieval.

  9. Teleradiology network system using the web medical image conference system with a new information security solution

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kusumoto, Masahiro; Kaneko, Masahiro; Kakinuma, Ryutaru; Moriyama, Noriyuki

    2012-02-01

    We have developed a teleradiology network system, provided with a web medical image conference system, that incorporates a new information security solution. In a teleradiology network, the security of the information network is a very important subject. We are studying the secret sharing scheme and tokenization as methods to safely store or transmit the confidential medical information used in the teleradiology network system, since this information is exposed to the risk of damage and interception. Secret sharing is a method of dividing confidential medical information into two or more tallies, such that the information cannot be decoded from any single tally. Our method includes automatic backup: if a single tally fails, redundant data has already been copied to another tally. Because no single tally can be decoded, the tallies are preserved at separate data centers connected through the internet. Therefore, even if one of the data centers is struck and its information damaged in a large-area disaster like the great earthquake of Japan, the confidential medical information can still be decoded using the tallies preserved at the data centers that escaped damage. Moreover, tokenization replaces the history information describing how the confidential medical information was divided into tallies with another character string, rendering the scattered history useless to an attacker. As a result, information is available only to those who have rightful access to it, and both the sender of a message and the message itself are verified at the receiving point. We propose a new information transmission method and a new information storage method built on this security solution.
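    The core property of secret sharing, that no single tally reveals anything, is easiest to see in a two-tally XOR construction. This is a textbook sketch; the system described in the abstract may use a different scheme and threshold.

```python
import os

def split(secret: bytes):
    """Split data into two tallies such that neither tally alone
    reveals anything: one tally is a uniformly random pad, the
    other is the secret XORed with that pad (2-of-2 sharing).
    """
    tally1 = os.urandom(len(secret))                       # random pad
    tally2 = bytes(a ^ b for a, b in zip(secret, tally1))  # secret XOR pad
    return tally1, tally2

def combine(tally1: bytes, tally2: bytes) -> bytes:
    """Recover the secret by XORing the two tallies together."""
    return bytes(a ^ b for a, b in zip(tally1, tally2))

record = b"patient-0042: CT series"    # hypothetical confidential record
t1, t2 = split(record)
recovered = combine(t1, t2)
```

    Each tally on its own is statistically indistinguishable from random bytes, which is why the tallies can be stored at geographically separate data centers.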

  10. The Effects of Surface Properties and Albedo on Methane Retrievals with the Airborne Visible/Infrared Imaging Spectrometer Next Generation (AVIRIS-NG)

    Science.gov (United States)

    Ayasse, A.; Thorpe, A. K.; Roberts, D. A.

    2017-12-01

    Atmospheric methane has increased by a factor of 2.5 since the beginning of the industrial era in response to anthropogenic emissions (Ciais et al., 2013). Although it is less abundant than carbon dioxide, it is 86 times more potent on a 20-year time scale (Myhre et al., 2013) and is therefore responsible for about 20% of the total global warming induced by anthropogenic greenhouse gases (Kirschke et al., 2013). Given the importance of methane to global climate change, monitoring and measuring methane emissions using techniques such as remote sensing is of increasing interest. Recently the Airborne Visible-Infrared Imaging Spectrometer - Next Generation (AVIRIS-NG) has proven to be a valuable instrument for quantitative mapping of methane plumes (Frankenberg et al., 2016; Thorpe et al., 2016; Thompson et al., 2015). In this study, we applied the Iterative Maximum a Posteriori Differential Optical Absorption Spectroscopy (IMAP-DOAS) methane retrieval algorithm to a synthetic image with variable methane concentrations, albedo, and land cover. This allowed for characterizing retrieval performance, including potential sensitivity to variable land cover, low-albedo surfaces, and surfaces known to cause spurious signals. We conclude that albedo had little influence on the IMAP-DOAS results except at very low radiance levels. Water (without sun glint) was found to be the most challenging surface for methane retrievals, while hydrocarbons and some green vegetation also caused error. Understanding the effect of surface properties on methane retrievals is important given the increased use of AVIRIS-NG to map gas plumes over diverse locations and methane sources. This analysis could be expanded to include additional gas species like carbon dioxide and to further investigate the gas sensitivity of proposed instruments for dedicated gas mapping from airborne and spaceborne platforms.

  11. EzMol: A Web Server Wizard for the Rapid Visualization and Image Production of Protein and Nucleic Acid Structures.

    Science.gov (United States)

    Reynolds, Christopher R; Islam, Suhail A; Sternberg, Michael J E

    2018-01-31

    EzMol is a molecular visualization Web server in the form of a software wizard, located at http://www.sbg.bio.ic.ac.uk/ezmol/. It is designed for easy and rapid image manipulation and display of protein molecules, and is intended for users who need to quickly produce high-resolution images of protein molecules but do not have the time or inclination to use a software molecular visualization system. EzMol allows the upload of molecular structure files in PDB format to generate a Web page including a representation of the structure that the user can manipulate. EzMol provides intuitive options for chain display, adjusting the color/transparency of residues, side chains and protein surfaces, and for adding labels to residues. The final adjusted protein image can then be downloaded as a high-resolution image. There are a range of applications for rapid protein display, including the illustration of specific areas of a protein structure and the rapid prototyping of images. Copyright © 2018. Published by Elsevier Ltd.

  12. From PACS to Web-based ePR system with image distribution for enterprise-level filmless healthcare delivery.

    Science.gov (United States)

    Huang, H K

    2011-07-01

    The concept of PACS (picture archiving and communication system) was initiated in 1982 during the SPIE medical imaging conference in Newport Beach, CA. Since then PACS has matured to become an everyday clinical tool for image archiving, communication, display, and review. This paper follows the continuous development of PACS technology, including Web-based PACS, PACS and the ePR (electronic patient record), and enterprise PACS to ePR with image distribution (ID). The concept of a large-scale Web-based enterprise PACS and ePR with image distribution is presented along with its implementation, clinical deployment, and operation. The Hong Kong Hospital Authority's (HKHA) integration of its home-grown clinical management system (CMS) with PACS and ePR with image distribution is used as a case study. The current concept and design criteria of the HKHA enterprise integration of the CMS, PACS, and ePR-ID for filmless healthcare delivery are discussed, followed by its work in progress and current status.

  13. A hierarchical SVG image abstraction layer for medical imaging

    Science.gov (United States)

    Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer

    2010-03-01

    As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML-based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in an SVG document and efficiently searched. Any feature extracted from the raw image, including color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high-level descriptions or classifications. Our representation can also natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a World Wide Web Consortium (W3C) standard, SVG can be displayed by most web browsers, interacted with via ECMAScript (standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open-source technologies enables straightforward integration into existing systems. From our results, we show that the flexibility and extensibility of our abstraction facilitate effective storage and retrieval of medical images.
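    The idea of an SVG layer that pairs a segmented region with high-level descriptors can be sketched with the standard library's XML tools; the element structure and `data-*` attribute names below are illustrative, not the authors' actual schema.

```python
import xml.etree.ElementTree as ET

# Build a minimal SVG "abstraction layer": a region outline from a
# segmentation, annotated with feature attributes on the shape.
SVG = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG)

root = ET.Element(f"{{{SVG}}}svg", width="512", height="512")
layer = ET.SubElement(root, f"{{{SVG}}}g", id="lesion-layer")
region = ET.SubElement(
    layer, f"{{{SVG}}}polygon",
    points="100,100 150,100 150,150 100,150",
)
# High-level descriptors attached alongside the geometry, so an
# XML query engine can search them together with the shape.
region.set("data-texture", "coarse")
region.set("data-diagnosis", "suspicious")

doc = ET.tostring(root, encoding="unicode")
```

    Because the result is plain SVG, a browser renders the outline directly, while an XQuery over the same document can filter regions by the annotated attributes.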

  14. Suppression of local haze variations in MERIS images over turbid coastal waters for retrieval of suspended sediment concentration

    NARCIS (Netherlands)

    Shen, F.; Verhoef, W.

    2010-01-01

    Atmospheric correction over turbid waters can be problematic if atmospheric haze is spatially variable. In this case the retrieval of water quality is hampered by the fact that haze variations could be partly mistaken for variations in suspended sediment concentration (SSC). In this study we propose

  15. Comparative Data Mining Analysis for Information Retrieval of MODIS Images: Monitoring Lake Turbidity Changes at Lake Okeechobee, Florida

    Science.gov (United States)

    In the remote sensing field, a frequently recurring question is: Which computational intelligence or data mining algorithms are most suitable for the retrieval of essential information given that most natural systems exhibit very high non-linearity. Among potential candidates mig...

  16. Web platform using digital image processing and geographic information system tools: a Brazilian case study on dengue.

    Science.gov (United States)

    Brasil, Lourdes M; Gomes, Marília M F; Miosso, Cristiano J; da Silva, Marlete M; Amvame-Nze, Georges D

    2015-07-16

    Dengue fever is endemic in Asia, the Americas, the East of the Mediterranean and the Western Pacific. According to the World Health Organization, it is one of the diseases of greatest impact on health, affecting millions of people each year worldwide. A fast detection of increases in populations of the transmitting vector, the Aedes aegypti mosquito, is essential to avoid dengue outbreaks. Unfortunately, in several countries, such as Brazil, the current methods for detecting population changes and disseminating this information are too slow to allow efficient allocation of resources to fight outbreaks. To reduce the delay in providing the information regarding A. aegypti population changes, we propose, develop, and evaluate a system for counting the eggs found in special traps and for providing the collected data through a web structure with geographical location resources. One of the most useful tools for the detection and surveillance of arthropods is the ovitrap, a special trap built to collect the mosquito eggs. This enables an egg-counting process that, in countries such as Brazil, is still usually performed manually. We implement and evaluate a novel system for automatically counting the eggs found in the ovitraps' cardboards. The system we propose is based on digital image processing (DIP) techniques, as well as a Web based Semi-Automatic Counting System (SCSA-WEB). All data collected are geographically referenced in a geographic information system (GIS) and made available on a Web platform. The work was developed in Gama's administrative region, in Brasília/Brazil, with the aid of the Environmental Surveillance Directory (DIVAL-Gama) and Brasília's Board of Health (SSDF), in partnership with the University of Brasília (UnB). The system was built based on a three-month field survey carried out by health professionals. These professionals provided 84 cardboards from 84 ovitraps, sized 15 × 5 cm. In developing the system, we conducted
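
    The automated counting step can be illustrated with a minimal threshold-and-label sketch in Python/NumPy; the threshold value, image sizes, and the absence of noise filtering are simplifying assumptions, and SCSA-WEB's actual DIP pipeline is not described in this record:

```python
import numpy as np
from collections import deque

def count_eggs(gray, threshold=100):
    """Count dark blobs (candidate eggs) in a grayscale cardboard image.

    Simple 4-connected component counting on a binary mask; a real
    pipeline would add noise filtering and size/shape constraints.
    """
    mask = gray < threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                     # new component found
                q = deque([(i, j)])
                seen[i, j] = True
                while q:                       # flood-fill the component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

# Synthetic cardboard: light background with three dark spots.
img = np.full((40, 40), 220, dtype=np.uint8)
img[5:8, 5:8] = 30
img[20:24, 10:13] = 40
img[30:33, 30:34] = 25
n_eggs = count_eggs(img)   # → 3
```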

  17. Interoperable Multimedia Annotation and Retrieval for the Tourism Sector

    NARCIS (Netherlands)

    Chatzitoulousis, Antonios; Efraimidis, Pavlos S.; Athanasiadis, I.N.

    2015-01-01

    The Atlas Metadata System (AMS) employs semantic web annotation techniques in order to create an interoperable information annotation and retrieval platform for the tourism sector. AMS adopts state-of-the-art metadata vocabularies, annotation techniques and semantic web technologies.

  18. Foreign Body Retrieval

    Medline Plus

    Full Text Available

  19. Remote Sensing Image Analysis Without Expert Knowledge - A Web-Based Classification Tool On Top of Taverna Workflow Management System

    Science.gov (United States)

    Selsam, Peter; Schwartze, Christian

    2016-10-01

    Providing software solutions via the internet has been practiced for quite some time and is now a growing trend marketed as "software as a service". A lot of business units accept the new methods and streamlined IT strategies by offering web-based infrastructures for external software usage - but geospatial applications featuring very specialized services or functionalities on demand are still rare. Originally applied in desktop environments, the ILMSimage tool for remote sensing image analysis and classification was modified in its communication structures and enabled to run on a high-power server, benefiting from the Taverna software. On top, a GIS-like, web-based user interface guides the user through the different steps in ILMSimage. ILMSimage combines object oriented image segmentation with pattern recognition features. Basic image elements form a construction set to model large image objects with diverse and complex appearance. There is no need for the user to set up detailed object definitions. Training is done by delineating one or more typical examples (templates) of the desired object using a simple vector polygon. The template can be large and does not need to be homogeneous. The template is completely independent of the segmentation. The object definition is done completely by the software.

  20. The Evolution of Web Searching.

    Science.gov (United States)

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  1. Development of an Operational System for the Retrieval of Aerosol and Land Surface Properties from the Terra Multi-Angle Imaging SpectroRadiometer

    Science.gov (United States)

    Crean, Kathleen A.

    2003-01-01

    An operational system to retrieve atmospheric aerosol and land surface properties using data from the Multi-angle Imaging SpectroRadiometer (MISR) instrument, currently flying onboard NASA's Terra spacecraft, has been deployed. The system is in full operation, with new data products generated daily and distributed to science users worldwide. This paper describes the evolution of the system, from initial requirements definition and prototyping through design, implementation, testing, operational deployment, checkout and maintenance activities. The current status of the system and future plans for enhancement are described. Major challenges encountered during implementation are detailed.

  2. Improved optical flow velocity analysis in SO2 camera images of volcanic plumes – implications for emission-rate retrievals investigated at Mt Etna, Italy and Guallatiri, Chile

    Directory of Open Access Journals (Sweden)

    J. Gliß

    2018-02-01

    Full Text Available Accurate gas velocity measurements in emission plumes are highly desirable for various atmospheric remote sensing applications. The imaging technique of UV SO2 cameras is commonly used to monitor SO2 emissions from volcanoes and anthropogenic sources (e.g. power plants, ships). The camera systems capture the emission plumes at high spatial and temporal resolution. This allows the gas velocities in the plume to be retrieved directly from the images. These velocities can be measured at the pixel level using optical flow (OF) algorithms. This is particularly advantageous under turbulent plume conditions. However, OF algorithms intrinsically rely on contrast in the images and often fail to detect motion in low-contrast image areas. We present a new method to identify ill-constrained OF motion vectors and replace them using the local average velocity vector. The latter is derived from histograms of the retrieved OF motion fields. The new method is applied to two example data sets recorded at Mt Etna (Italy) and Guallatiri (Chile). We show that in many cases, the uncorrected OF yields significantly underestimated SO2 emission rates. We further show that our proposed correction can account for this and that it significantly improves the reliability of optical-flow-based gas velocity retrievals. In the case of Mt Etna, the SO2 emissions of the north-eastern crater are investigated. The corrected SO2 emission rates range between 4.8 and 10.7 kg s⁻¹ (average of 7.1 ± 1.3 kg s⁻¹) and are in good agreement with previously reported values. For the Guallatiri data, the emissions of the central crater and a fumarolic field are investigated. The retrieved SO2 emission rates are between 0.5 and 2.9 kg s⁻¹ (average of 1.3 ± 0.5 kg s⁻¹) and provide the first report of SO2 emissions from this remotely located and inaccessible volcano.
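
    The correction idea, flagging ill-constrained optical-flow vectors and replacing them with a dominant velocity taken from histograms over the reliable vectors, can be sketched as follows; the reliability measure and histogram scheme here are simplified assumptions, not the authors' exact method:

```python
import numpy as np

def correct_flow(u, v, reliability, rel_thresh=0.5, nbins=36):
    """Replace ill-constrained flow vectors with the dominant local motion.

    The dominant vector is taken from a direction histogram over the
    reliable vectors (a simplified stand-in for the paper's
    histogram-based local average velocity).
    """
    ok = reliability >= rel_thresh
    mag = np.hypot(u, v)[ok]
    ang = np.arctan2(v, u)[ok]
    # Mode of the direction histogram, then mean motion in that bin.
    counts, edges = np.histogram(ang, bins=nbins, range=(-np.pi, np.pi))
    b = np.argmax(counts)
    in_bin = (ang >= edges[b]) & (ang < edges[b + 1])
    mean_mag = mag[in_bin].mean()
    mean_ang = ang[in_bin].mean()
    u_fix, v_fix = u.copy(), v.copy()
    u_fix[~ok] = mean_mag * np.cos(mean_ang)
    v_fix[~ok] = mean_mag * np.sin(mean_ang)
    return u_fix, v_fix

# Synthetic plume moving right at ~2 px/frame; 20% of pixels unreliable.
rng = np.random.default_rng(0)
u = np.full((32, 32), 2.0) + rng.normal(0, 0.05, (32, 32))
v = rng.normal(0, 0.05, (32, 32))
rel = rng.random((32, 32))
bad = rel < 0.2
u[bad], v[bad] = 0.0, 0.0   # failed optical flow: no motion detected
u2, v2 = correct_flow(u, v, rel, rel_thresh=0.2)
```

    After correction, the low-contrast pixels carry the plume's dominant velocity instead of spurious zero motion, which is what removes the emission-rate underestimation described above.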

  3. Medical high-resolution image sharing and electronic whiteboard system: A pure-web-based system for accessing and discussing lossless original images in telemedicine.

    Science.gov (United States)

    Qiao, Liang; Li, Ying; Chen, Xin; Yang, Sheng; Gao, Peng; Liu, Hongjun; Feng, Zhengquan; Nian, Yongjian; Qiu, Mingguo

    2015-09-01

    There are various medical image sharing and electronic whiteboard systems available for diagnosis and discussion purposes. However, most of these systems ask clients to install special software tools or web plug-ins to support whiteboard discussion, special medical image formats, and customized decoding algorithms for data transmission of HRIs (high-resolution images). This limits the accessibility of the software running on different devices and operating systems. In this paper, we propose a solution based on pure web pages for medical HRI lossless sharing and e-whiteboard discussion, and have set up a medical HRI sharing and e-whiteboard system with a four-layered design: (1) HRI access layer: we improved a tile-pyramid model named unbalanced ratio pyramid structure (URPS) to rapidly share lossless HRIs and to adapt to the reading habits of users; (2) format conversion layer: we designed a format conversion engine (FCE) on the server side to convert and cache, in real time, the DICOM tiles that clients request with window-level parameters, ensuring browser compatibility and maintaining server-client response efficiency; (3) business logic layer: we built an XML behavior relationship storage structure to store and share users' behavior, supporting real-time co-browsing and discussion between clients; (4) web-user-interface layer: AJAX technology and the Raphael toolkit were used to combine HTML and JavaScript into a client RIA (rich Internet application), giving clients desktop-like interaction on any pure web page. This system can be used to quickly browse lossless HRIs, and supports smooth discussion and co-browsing on any web browser in a diversified network environment. The proposed methods provide a way to share HRIs safely, and may be used in the fields of regional health, telemedicine and remote education at low cost. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
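
    The tile-pyramid arithmetic behind such an HRI access layer can be sketched with a plain power-of-two pyramid; the paper's URPS variant is deliberately "unbalanced", so this standard pyramid is only an illustration of the general addressing scheme:

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of zoom levels needed so the coarsest level fits one tile."""
    levels = 1
    while width > tile or height > tile:
        width = math.ceil(width / 2)
        height = math.ceil(height / 2)
        levels += 1
    return levels

def tiles_at_level(width, height, level, max_level, tile=256):
    """Tile-grid size (cols, rows) at a level; level max_level is full-res."""
    scale = 2 ** (max_level - level)
    w = math.ceil(width / scale)
    h = math.ceil(height / scale)
    return math.ceil(w / tile), math.ceil(h / tile)

# A 100k x 80k pathology-scale image:
n = pyramid_levels(100_000, 80_000)              # → 10
cols, rows = tiles_at_level(100_000, 80_000, n, n)
```

    A server only ever renders the few 256-pixel tiles visible in the client viewport, which is what keeps lossless browsing responsive even for gigapixel images.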

  4. Tennessee StreamStats: A Web-Enabled Geographic Information System Application for Automating the Retrieval and Calculation of Streamflow Statistics

    Science.gov (United States)

    Ladd, David E.; Law, George S.

    2007-01-01

    The U.S. Geological Survey (USGS) provides streamflow and other stream-related information needed to protect people and property from floods, to plan and manage water resources, and to protect water quality in the streams. Streamflow statistics provided by the USGS, such as the 100-year flood and the 7-day 10-year low flow, frequently are used by engineers, land managers, biologists, and many others to help guide decisions in their everyday work. In addition to streamflow statistics, resource managers often need to know the physical and climatic characteristics (basin characteristics) of the drainage basins for locations of interest to help them understand the mechanisms that control water availability and water quality at these locations. StreamStats is a Web-enabled geographic information system (GIS) application that makes it easy for users to obtain streamflow statistics, basin characteristics, and other information for USGS data-collection stations and for ungaged sites of interest. If a user selects the location of a data-collection station, StreamStats will provide previously published information for the station from a database. If a user selects a location where no data are available (an ungaged site), StreamStats will run a GIS program to delineate a drainage basin boundary, measure basin characteristics, and estimate streamflow statistics based on USGS streamflow prediction methods. A user can download a GIS feature class of the drainage basin boundary with attributes including the measured basin characteristics and streamflow estimates.

  5. Sally Ride EarthKAM - Automated Image Geo-Referencing Using Google Earth Web Plug-In

    Science.gov (United States)

    Andres, Paul M.; Lazar, Dennis K.; Thames, Robert Q.

    2013-01-01

    Sally Ride EarthKAM is an educational program funded by NASA that aims to provide the public the ability to picture Earth from the perspective of the International Space Station (ISS). A computer-controlled camera is mounted on the ISS in a nadir-pointing window; however, timing limitations in the system cause inaccurate positional metadata. Manually correcting images within an orbit allows the positional metadata to be improved using mathematical regressions. The manual correction process is time-consuming and thus, unfeasible for a large number of images. The standard Google Earth program allows for the importing of KML (keyhole markup language) files that previously were created. These KML file-based overlays could then be manually manipulated as image overlays, saved, and then uploaded to the project server where they are parsed and the metadata in the database is updated. The new interface eliminates the need to save, download, open, re-save, and upload the KML files. Everything is processed on the Web, and all manipulations go directly into the database. Administrators also have the control to discard any single correction that was made and validate a correction. This program streamlines a process that previously required several critical steps and was probably too complex for the average user to complete successfully. The new process is theoretically simple enough for members of the public to make use of and contribute to the success of the Sally Ride EarthKAM project. Using the Google Earth Web plug-in, EarthKAM images, and associated metadata, this software allows users to interactively manipulate an EarthKAM image overlay, and update and improve the associated metadata. The Web interface uses the Google Earth JavaScript API along with PHP-PostgreSQL to present the user the same interface capabilities without leaving the Web. The simpler graphical user interface will allow the public to participate directly and meaningfully with EarthKAM. The use of
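
    Generating the kind of KML overlay the workflow manipulates can be sketched with standard KML 2.2 elements; the frame name, image path, and footprint coordinates below are hypothetical:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

def ground_overlay_kml(name, href, north, south, east, west, rotation=0.0):
    """Build a KML document with one GroundOverlay (standard KML elements)."""
    def q(tag):
        return f"{{{KML_NS}}}{tag}"
    kml = ET.Element(q("kml"))
    ov = ET.SubElement(kml, q("GroundOverlay"))
    ET.SubElement(ov, q("name")).text = name
    icon = ET.SubElement(ov, q("Icon"))
    ET.SubElement(icon, q("href")).text = href
    box = ET.SubElement(ov, q("LatLonBox"))
    for tag, val in (("north", north), ("south", south),
                     ("east", east), ("west", west), ("rotation", rotation)):
        ET.SubElement(box, q(tag)).text = str(val)
    return ET.tostring(kml, encoding="unicode")

# Hypothetical EarthKAM frame footprint:
doc = ground_overlay_kml("frame-0421", "images/frame-0421.jpg",
                         north=35.2, south=33.9, east=-117.1, west=-118.6)
root = ET.fromstring(doc)
north_el = root.find(f".//{{{KML_NS}}}north")
```

    In the web workflow described above, the Google Earth plug-in would let a user drag such an overlay into place and the updated LatLonBox values would be written straight into the database.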

  6. Grating-based x-ray differential phase contrast imaging with twin peaks in phase-stepping curves—phase retrieval and dewrapping

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yi; Xie, Huiqiao; Tang, Xiangyang, E-mail: xiangyang.tang@emory.edu [Imaging and Medical Physics, Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1701 Uppergate Dr., C-5018, Atlanta, Georgia 30322 (United States); Cai, Weixing [Department of Radiation Oncology, Brigham and Women’s Hospital Harvard Medical School, 75 Francis Street, Boston, Massachusetts 02115 (United States); Mao, Hui [Laboratory of Functional and Molecular Imaging and Nanomedicine, Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road NE, Atlanta, Georgia 30329 (United States)

    2016-06-15

    Purpose: X-ray differential phase contrast CT implemented with Talbot interferometry employs phase-stepping to extract information of x-ray attenuation, phase shift, and small-angle scattering. Since inaccuracy may exist in the absorption grating G₂ due to an imperfect fabrication, the effective period of G₂ can be as large as twice the nominal period, leading to a phenomenon of twin peaks that differ remarkably in their heights. In this work, the authors investigate how to retrieve and dewrap the phase signal from the phase-stepping curve (PSC) with the feature of twin peaks for x-ray phase contrast imaging. Methods: Based on the paraxial Fresnel–Kirchhoff theory, the analytical formulae to characterize the phenomenon of twin peaks in the PSC are derived. Then an approach to dewrap the retrieved phase signal by jointly using the phases of the first- and second-order Fourier components is proposed. Through an experimental investigation using a prototype x-ray phase contrast imaging system implemented with Talbot interferometry, the authors evaluate and verify the derived analytic formulae and the proposed approach for phase retrieval and dewrapping. Results: According to theoretical analysis, the twin-peak phenomenon in the PSC is a consequence of combined effects, including the inaccuracy in absorption grating G₂, mismatch between the phase grating and the x-ray source spectrum, and the finite size of the x-ray tube's focal spot. The proposed approach is experimentally evaluated by scanning a phantom consisting of organic materials and a lab mouse. The preliminary data show that compared to scanning G₂ over only a single nominal period and correcting the measured phase signal with an intuitive phase dewrapping method that is being used in the field, stepping G₂ over twice its nominal period and dewrapping the measured phase signal with the proposed approach can significantly improve the quality of x-ray differential phase contrast imaging in both
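
    The phase-retrieval step, reading the phases of the first- and second-order Fourier components off the phase-stepping curve, can be sketched with an FFT on a synthetic PSC; the amplitudes are illustrative and the authors' joint dewrapping of the two phases is not reproduced here:

```python
import numpy as np

def psc_phases(curve):
    """Phases of the 1st- and 2nd-order Fourier components of a PSC.

    For a curve sampled at N equally spaced grating positions,
    c_k = a0 + a1*cos(2*pi*k/N + phi1) + a2*cos(4*pi*k/N + phi2),
    FFT bins 1 and 2 carry phi1 and phi2 directly.
    """
    F = np.fft.fft(curve)
    return np.angle(F[1]), np.angle(F[2])

# Synthetic twin-peak PSC: a weaker 1st harmonic (from the imperfect G2
# period) superposed on a stronger 2nd harmonic yields two unequal peaks
# per stepping period.
N = 16
k = np.arange(N)
phi1, phi2 = 0.7, -0.4
curve = 10 + 1.5 * np.cos(2 * np.pi * k / N + phi1) \
           + 4.0 * np.cos(4 * np.pi * k / N + phi2)
p1, p2 = psc_phases(curve)
```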

  7. On the Response of the Special Sensor Microwave/Imager to the Marine Environment: Implications for Atmospheric Parameter Retrievals. Ph.D. Thesis

    Science.gov (United States)

    Petty, Grant W.

    1990-01-01

    A reasonably rigorous basis for understanding and extracting the physical information content of Special Sensor Microwave/Imager (SSM/I) satellite images of the marine environment is provided. To this end, a comprehensive algebraic parameterization is developed for the response of the SSM/I to a set of nine atmospheric and ocean surface parameters. The brightness temperature model includes a closed-form approximation to microwave radiative transfer in a non-scattering atmosphere and fitted models for surface emission and scattering based on geometric optics calculations for the roughened sea surface. The combined model is empirically tuned using suitable sets of SSM/I data and coincident surface observations. The brightness temperature model is then used to examine the sensitivity of the SSM/I to realistic variations in the scene being observed and to evaluate the theoretical maximum precision of global SSM/I retrievals of integrated water vapor, integrated cloud liquid water, and surface wind speed. A general minimum-variance method for optimally retrieving geophysical parameters from multichannel brightness temperature measurements is outlined, and several global statistical constraints of the type required by this method are computed. Finally, a unified set of efficient statistical and semi-physical algorithms is presented for obtaining fields of surface wind speed, integrated water vapor, cloud liquid water, and precipitation from SSM/I brightness temperature data. Features include: a semi-physical method for retrieving integrated cloud liquid water at 15 km resolution and with rms errors as small as approximately 0.02 kg/sq
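
    In the linear Gaussian case, the general minimum-variance retrieval outlined above reduces to the standard optimal-estimation update; the toy state vector, channel count, and covariances below are invented for illustration (the real SSM/I forward model is nonlinear and empirically tuned):

```python
import numpy as np

def min_variance_retrieval(y, K, x_a, S_a, S_e):
    """Linear minimum-variance estimate of x from y = K x + noise.

    x_a, S_a: prior (climatological) mean and covariance;
    S_e: measurement-noise covariance.
    """
    G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)  # gain matrix
    return x_a + G @ (y - K @ x_a)

rng = np.random.default_rng(1)
# Toy 3-parameter state (e.g. vapor, cloud water, wind speed), 7 channels.
x_true = np.array([30.0, 0.15, 8.0])
K = rng.normal(0, 1, (7, 3))          # toy linearized forward model
x_a = np.array([25.0, 0.10, 6.0])     # prior mean
S_a = np.diag([25.0, 0.01, 9.0])      # prior covariance
S_e = 0.25 * np.eye(7)                # channel noise covariance
y = K @ x_true + rng.normal(0, 0.5, 7)
x_hat = min_variance_retrieval(y, K, x_a, S_a, S_e)
```

    The estimate weights the prior against the measurements according to their covariances, which is exactly the role the global statistical constraints mentioned in the abstract play.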