WorldWideScience

Sample records for image annotation application

  1. Ratsnake: A Versatile Image Annotation Tool with Application to Computer-Aided Diagnosis

    Directory of Open Access Journals (Sweden)

    D. K. Iakovidis

    2014-01-01

    Image segmentation and annotation are key components of image-based medical computer-aided diagnosis (CAD) systems. In this paper we present Ratsnake, a publicly available generic image annotation tool providing annotation efficiency, semantic awareness, versatility, and extensibility, features that can be exploited to transform it into an effective CAD system. To demonstrate this capability, we present its novel application to the evaluation and quantification of salient objects and structures of interest in kidney biopsy images. Accurate annotation identifying and quantifying such structures in microscopy images can provide an estimate of pathogenesis in obstructive nephropathy, a rather common disease with severe implications in children and infants. However, a tool for detecting and quantifying the disease is not yet available. A machine learning-based approach, which utilizes prior domain knowledge and textural image features, is considered for the generation of an image force field, customizing the presented tool for automatic evaluation of kidney biopsy images. The experimental evaluation of the proposed application of Ratsnake demonstrates its efficiency and effectiveness and promises wide applicability across a variety of medical imaging domains.

  2. Objective-guided image annotation.

    Science.gov (United States)

    Mao, Qi; Tsang, Ivor Wai-Hung; Gao, Shenghua

    2013-04-01

    Automatic image annotation, which is usually formulated as a multi-label classification problem, is one of the major tools used to enhance the semantic understanding of web images. Many multimedia applications (e.g., tag-based image retrieval) can greatly benefit from image annotation. However, the insufficient performance of image annotation methods prevents these applications from being practical. On the other hand, specific measures are usually designed to evaluate how well one annotation method performs for a specific objective or application, but most image annotation methods do not consider optimization of these measures, and so they are inevitably trapped in suboptimal performance on these objective-specific measures. To address this issue, we first summarize a variety of objective-guided performance measures under a unified representation. Our analysis reveals that macro-averaging measures are very sensitive to infrequent keywords, and that the Hamming measure is easily affected by skewed distributions. We then propose a unified multi-label learning framework, which directly optimizes a variety of objective-specific measures of multi-label learning tasks. Specifically, we first present a multilayer hierarchical structure of learning hypotheses for multi-label problems, based on which a variety of loss functions with respect to objective-guided measures are defined. We then formulate these loss functions as relaxed surrogate functions and optimize them with structural SVMs. Given the analysis of various measures and the high time complexity of optimizing micro-averaging measures, in this paper we focus on example-based measures that are tailor-made for image annotation tasks but are seldom explored in the literature.
Experiments show consistency with the formal analysis on two widely used multi-label datasets, and demonstrate the superior performance of our proposed method over state-of-the-art baseline methods in terms of example-based measures on four

  3. Image annotation under X Windows

    Science.gov (United States)

    Pothier, Steven

    1991-08-01

    A mechanism for attaching graphic and overlay annotation to multiple bits/pixel imagery, while providing levels of performance approaching that of native-mode graphics systems, is presented. This mechanism isolates programming complexity from the application programmer through software encapsulation under the X Window System. It ensures display accuracy throughout operations on the imagery and annotation, including zooms, pans, and modifications of the annotation. Trade-offs that affect speed of display, consumption of memory, and system functionality are explored. The use of resource files to tune the display system is discussed. The mechanism makes use of an abstraction consisting of four parts: a graphics overlay, a dithered overlay, an image overlay, and a physical display window. Data structures are maintained that retain the distinction between the four parts so that they can be modified independently, providing system flexibility. A unique technique for associating user color preferences with annotation is introduced. An interface that allows interactive modification of the mapping between image value and color is discussed. A procedure that provides for the colorization of imagery on 8-bit display systems using pixel dithering is explained. Finally, the use of these annotation mechanisms in various applications is discussed.
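The pixel-dithering colorization mentioned in this record can be illustrated with a classic ordered-dither sketch. This is not the paper's implementation; the 4x4 Bayer threshold matrix and the number of output levels are conventional choices assumed here for illustration.

```python
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither(gray, levels=4):
    """Quantize a 2-D grayscale image (values 0..255) to `levels` output
    levels, using a position-dependent Bayer threshold so that quantization
    error is spread over neighboring pixels."""
    step = 255 / (levels - 1)
    out = []
    for y, row in enumerate(gray):
        out_row = []
        for x, v in enumerate(row):
            # Threshold offset in [-0.5, 0.4375), varying with pixel position.
            offset = BAYER_4X4[y % 4][x % 4] / 16.0 - 0.5
            q = round(v / step + offset)
            out_row.append(int(min(levels - 1, max(0, q)) * step))
        out.append(out_row)
    return out

flat = [[128] * 8 for _ in range(8)]   # uniform mid-gray test patch
print(dither(flat)[0])
```

For a uniform mid-gray patch, the quantizer alternates between the two nearest output levels, which is exactly the spatial-averaging effect that makes dithered color on an 8-bit display look continuous.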

  4. Diverse Image Annotation

    KAUST Repository

    Wu, Baoyuan

    2017-11-09

    In this work we study the task of image annotation, whose goal is to describe an image using a few tags. Instead of predicting the full list of tags, we aim to provide a short list of tags under a limited budget (e.g., 3) that covers as much information about the image as possible. The tags in such a short list should be representative and diverse: they must not only correspond to the contents of the image but also differ from each other. To this end, we treat image annotation as a subset selection problem based on the conditional determinantal point process (DPP) model, which formulates representativeness and diversity jointly. We further explore the semantic hierarchy and synonyms among the candidate tags, and require that two tags in a semantic hierarchy or in a pair of synonyms not be selected simultaneously. This requirement is then embedded into the sampling algorithm according to the learned conditional DPP model. In addition, we find that traditional metrics for image annotation (e.g., precision, recall, and F1 score) consider only representativeness and ignore diversity. We therefore propose new metrics to evaluate the quality of the selected subset (i.e., the tag list) based on the semantic hierarchy and synonyms. A human study through Amazon Mechanical Turk verifies that the proposed metrics are closer to human judgment than traditional metrics. Experiments on two benchmark datasets show that the proposed method produces more representative and diverse tags than existing image annotation methods.

  5. Diverse Image Annotation

    KAUST Repository

    Wu, Baoyuan; Jia, Fan; Liu, Wei; Ghanem, Bernard

    2017-01-01

    In this work we study the task of image annotation, whose goal is to describe an image using a few tags. Instead of predicting the full list of tags, we aim to provide a short list of tags under a limited budget (e.g., 3) that covers as much information about the image as possible. The tags in such a short list should be representative and diverse: they must not only correspond to the contents of the image but also differ from each other. To this end, we treat image annotation as a subset selection problem based on the conditional determinantal point process (DPP) model, which formulates representativeness and diversity jointly. We further explore the semantic hierarchy and synonyms among the candidate tags, and require that two tags in a semantic hierarchy or in a pair of synonyms not be selected simultaneously. This requirement is then embedded into the sampling algorithm according to the learned conditional DPP model. In addition, we find that traditional metrics for image annotation (e.g., precision, recall, and F1 score) consider only representativeness and ignore diversity. We therefore propose new metrics to evaluate the quality of the selected subset (i.e., the tag list) based on the semantic hierarchy and synonyms. A human study through Amazon Mechanical Turk verifies that the proposed metrics are closer to human judgment than traditional metrics. Experiments on two benchmark datasets show that the proposed method produces more representative and diverse tags than existing image annotation methods.
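The DPP-based subset selection described in the records above can be sketched with a greedy determinant-maximization loop, a standard approximation to DPP MAP inference. The kernel construction (per-tag relevance times tag-tag similarity) and all scores below are invented for illustration; this is not the authors' learned conditional DPP.

```python
def det(m):
    """Determinant via Gaussian elimination with partial pivoting
    (fine for the tiny submatrices used here)."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def greedy_dpp(L, k):
    """Greedily add the tag whose inclusion maximizes det(L_S);
    the determinant rewards relevant yet mutually dissimilar tags."""
    selected = []
    for _ in range(k):
        best, best_gain = None, 0.0
        for j in range(len(L)):
            if j in selected:
                continue
            S = selected + [j]
            g = det([[L[a][b] for b in S] for a in S])
            if g > best_gain:
                best, best_gain = j, g
        if best is None:
            break
        selected.append(best)
    return selected

# Toy kernel: tags 0 and 1 are near-synonyms, tag 2 is distinct.
q = [1.0, 0.9, 0.8]                  # per-tag relevance (invented)
s = [[1.0, 0.95, 0.1],
     [0.95, 1.0, 0.1],
     [0.1, 0.1, 1.0]]                # tag-tag similarity (invented)
L = [[q[i] * q[j] * s[i][j] for j in range(3)] for i in range(3)]
print(greedy_dpp(L, 2))   # skips the redundant near-synonym: [0, 2]
```

Even though tag 1 is more relevant than tag 2, its similarity to the already-selected tag 0 collapses the determinant, so the diverse pair wins; this is the representativeness/diversity trade-off the records describe.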

  6. Application of whole slide image markup and annotation for pathologist knowledge capture.

    Science.gov (United States)

    Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H

    2013-01-01

    The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μm to less than 4 μm in the x-axis and from 17 μm to 6 μm in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
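The trilateration principle used here to re-anchor markups across rescans of the same slide can be sketched in two dimensions: given a markup's distances to three non-collinear fiducial points, its coordinates are recovered by subtracting circle equations to obtain a linear system. The coordinates below are arbitrary test values, not data from the study.

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Recover (x, y) from distances r_i to three non-collinear reference
    points p_i, by subtracting circle equations to get a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    d = a1 * b2 - a2 * b1   # zero iff the reference points are collinear
    return (c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d

# Re-derive a markup placed at (30, 40) from its distances to three fiducials:
refs = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
target = (30.0, 40.0)
dists = [math.dist(target, p) for p in refs]
print(trilaterate(*refs, *dists))  # recovers approximately (30.0, 40.0)
```

Because only the distances to the fixed points are stored, the same markup can be re-placed on any rescan of the slide once the fiducials are located in the new image, which is the idea behind retaining annotations without retaining the whole WSI.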

  7. Annotating images by mining image search results

    NARCIS (Netherlands)

    Wang, X.J.; Zhang, L.; Li, X.; Ma, W.Y.

    2008-01-01

    Although it has been studied for years by the computer vision and machine learning communities, image annotation is still far from practical. In this paper, we propose a novel attempt at model-free image annotation, which is a data-driven approach that annotates images by mining their search

  8. Annotating Fine Art Images

    OpenAIRE

    Isemann, Daniel

    2007-01-01

    The project's objective is to work with art galleries to help them find innovative ways of indexing images, especially through automatically created and updated thesauri. Partner institutions: National Gallery of Ireland, Douglas Hyde Gallery, Trinity Long Room Hub.

  9. Annotating images by mining image search results.

    Science.gov (United States)

    Wang, Xin-Jing; Zhang, Lei; Li, Xirong; Ma, Wei-Ying

    2008-11-01

    Although it has been studied for years by the computer vision and machine learning communities, image annotation is still far from practical. In this paper, we propose a novel attempt at model-free image annotation, a data-driven approach that annotates images by mining their search results. Some 2.4 million images with their surrounding text are collected from a few photo forums to support this approach. The entire process is formulated in a divide-and-conquer framework where a query keyword is provided along with the uncaptioned image to improve both effectiveness and efficiency. This is helpful when the collected data set is not dense everywhere. Accordingly, our approach consists of three steps: 1) the search process to discover visually and semantically similar search results, 2) the mining process to identify salient terms from textual descriptions of the search results, and 3) the annotation rejection process to filter out noisy terms yielded by Step 2. To ensure real-time annotation, two key techniques are leveraged: one is to map the high-dimensional image visual features into hash codes, and the other is to implement the approach as a distributed system, of which the search and mining processes are provided as Web services. As a typical result, the entire process finishes in less than 1 second. Since no training data set is required, our approach enables annotation with an unlimited vocabulary and is highly scalable and robust to outliers. Experimental results on both real Web images and a benchmark image data set show the effectiveness and efficiency of the proposed algorithm. It is also worth noting that, although the entire approach is illustrated within the divide-and-conquer framework, a query keyword is not crucial to our current implementation. We provide experimental results to prove this.
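One common way to realize the "high-dimensional visual features into hash codes" step is random-projection locality-sensitive hashing; the paper does not specify its exact scheme, so the sketch below is an assumed stand-in. Nearby feature vectors receive codes with small Hamming distance, which is what makes sub-second lookup over millions of images feasible.

```python
import random

def make_hasher(dim, n_bits, seed=0):
    """Return a function mapping a `dim`-dimensional feature vector to an
    `n_bits`-bit code via signed random projections (one hyperplane per bit)."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def hash_code(vec):
        # Each bit records which side of a random hyperplane the vector lies on.
        return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                     for plane in planes)
    return hash_code

def hamming(a, b):
    """Number of differing bits between two codes."""
    return sum(x != y for x, y in zip(a, b))

h = make_hasher(dim=8, n_bits=16)
base = [0.5, -1.2, 0.3, 0.9, -0.4, 0.0, 1.1, -0.7]
near = [v + 0.01 for v in base]   # slightly perturbed copy of `base`
far = [-v for v in base]          # negated copy: every projection flips sign
print(hamming(h(base), h(near)), hamming(h(base), h(far)))
```

Candidate images whose codes fall within a small Hamming radius of the query's code can then be retrieved from an index without comparing raw high-dimensional features.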

  10. Current and future trends in marine image annotation software

    Science.gov (United States)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation, the process of transposing objects or events represented in a video or still image to the semantic level, may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) has enabled over 500 publications to date. We review functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input consist essentially of a graphical user interface, with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability to integrate data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after video collection, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. Interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post processing, for stable platforms or still images

  11. An Annotated Dataset of 14 Meat Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated images of meat. Points of correspondence are placed on each image. As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.

  12. Multiview Hessian regularization for image annotation.

    Science.gov (United States)

    Liu, Weifeng; Tao, Dacheng

    2013-07-01

    The rapid development of computer hardware and Internet technology makes large-scale data-dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smooths the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it has been observed that LR biases the classification function toward a constant function, which can result in poor generalization. In addition, LR was developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address these two problems in LR-based image annotation. In particular, mHR optimally combines multiple HRs, each obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.

  13. ePNK Applications and Annotations

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2017-01-01

    new applications for the ePNK and, in particular, visualizing the result of an application in the graphical editor of the ePNK by using annotations, and interacting with the end user through these annotations. In this paper, we give an overview of the concepts of ePNK applications by discussing the implementation

  14. Using machine learning to speed up manual image annotation: application to a 3D imaging protocol for measuring single cell gene expression in the developing C. elegans embryo

    Directory of Open Access Journals (Sweden)

    Waterston Robert H

    2010-02-01

    Background: Image analysis is an essential component in many biological experiments that study gene expression, cell cycle progression, and protein localization. A protocol for tracking the expression of individual C. elegans genes was developed that collects image samples of a developing embryo by 3-D time-lapse microscopy. In this protocol, a program called StarryNite performs the automatic recognition of fluorescently labeled cells and traces their lineage. However, due to the amount of noise present in the data and the challenges introduced by the increasing number of cells in later stages of development, this program is not error free. In the current version, the error correction (i.e., editing) is performed manually using a graphical interface tool named AceTree, which was specifically developed for this task. For a single experiment, this manual annotation task takes several hours. Results: In this paper, we reduce the time required to correct errors made by StarryNite. We target one of the most frequent error types (movements annotated as divisions) and train a support vector machine (SVM) classifier to decide whether a division call made by StarryNite is correct or not. We show, via cross-validation experiments on several benchmark data sets, that the SVM reliably identifies this type of error. A new version of StarryNite that includes the trained SVM classifier is available at http://starrynite.sourceforge.net. Conclusions: We demonstrate the utility of a machine learning approach to error annotation for StarryNite. In the process, we also provide some general methodologies for developing and validating a classifier with respect to a given pattern recognition task.
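The division-call validation task lends itself to a small worked example. The sketch below uses an invented two-feature toy dataset and a plain perceptron as a dependency-free stand-in for the paper's SVM; the feature names and values are assumptions, not data from the study.

```python
def train_perceptron(xs, ys, epochs=50):
    """Train a linear classifier (ys in {-1, +1}) with the perceptron rule;
    it converges on linearly separable data such as this toy set."""
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            # Update only when the current model misclassifies (or sits on
            # the boundary of) this example.
            if y * (sum(wj * xj for wj, xj in zip(w, x)) + b) <= 0:
                w = [wj + y * xj for wj, xj in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Invented per-call features: (separation of putative daughter nuclei,
# daughter/mother intensity ratio). In this toy data, true divisions show
# larger separations than cell movements mislabeled as divisions.
true_div = [(5.0, 0.9), (6.1, 1.0), (5.5, 0.8), (6.8, 1.1)]
false_div = [(1.2, 0.4), (0.8, 0.5), (1.5, 0.3), (0.9, 0.6)]
xs = true_div + false_div
ys = [1] * 4 + [-1] * 4
w, b = train_perceptron(xs, ys)
print([predict(w, b, x) for x in xs])
```

In the paper's setting, each flagged "division" would be scored this way and only the calls the classifier rejects would be queued for manual review in AceTree, which is where the time saving comes from.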

  15. BioAnnote: a software platform for annotating biomedical documents with application in medical learning environments.

    Science.gov (United States)

    López-Fernández, H; Reboiro-Jato, M; Glez-Peña, D; Aparicio, F; Gachet, D; Buenaga, M; Fdez-Riverola, F

    2013-07-01

    Automatic term annotation from biomedical documents and external information linking are becoming a necessary prerequisite in modern computer-aided medical learning systems. In this context, this paper presents BioAnnote, a flexible and extensible open-source platform for automatically annotating biomedical resources. Apart from other valuable features, the software platform includes (i) a rich client enabling users to annotate multiple documents in a user-friendly environment, (ii) an extensible and embeddable annotation meta-server allowing for the annotation of documents with local or remote vocabularies, and (iii) a simple client/server protocol which facilitates the use of our meta-server from any third-party application. In addition, BioAnnote implements a powerful scripting engine able to perform advanced batch annotations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. The caBIG annotation and image Markup project.

    Science.gov (United States)

    Channin, David S; Mongkolwat, Pattanasak; Kleper, Vladimir; Sepukar, Kastubh; Rubin, Daniel L

    2010-04-01

    Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of metadata about whom, where, and how the image was acquired, DICOM says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the set of graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with both of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.

  17. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity in understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on manual annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and is therefore of practical significance.
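The fuzzy-membership idea can be illustrated with triangular membership functions over a single made-up "valence" axis; the paper instead derives membership degrees from Adaboost and BP-network outputs, so the class names, centers, and widths below are assumed simplifications.

```python
def triangular(x, left, center, right):
    """Triangular membership function peaking at `center`."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

# Hypothetical emotion classes over a single "valence" axis in [-2, 2];
# the (left, center, right) parameters are invented for this sketch.
CLASSES = {
    "gloomy":   (-2.0, -1.0, 0.0),
    "calm":     (-1.0,  0.0, 1.0),
    "cheerful": ( 0.0,  1.0, 2.0),
}

def memberships(valence):
    """Fuzzy membership degree of an image's valence score in each class."""
    return {name: triangular(valence, *abc) for name, abc in CLASSES.items()}

# An image scored slightly positive belongs mostly to "calm", partly to
# "cheerful", and not at all to "gloomy":
print(memberships(0.25))
```

The point of the fuzzy formulation is visible here: a single image carries graded membership in several overlapping emotional classes rather than one hard label, which mirrors human ambiguity in judging scenes.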

  18. Ontology modularization to improve semantic medical image annotation.

    Science.gov (United States)

    Wennerberg, Pinar; Schulz, Klaus; Buitelaar, Paul

    2011-02-01

    Searching for medical images and patient reports is a significant challenge in a clinical setting. The contents of such documents are often not described in sufficient detail, making it difficult to utilize the inherent wealth of information contained within them. Semantic image annotation addresses this problem by describing the contents of images and reports using medical ontologies. Medical images and patient reports are then linked to each other through common annotations. Subsequently, search algorithms can more effectively find related sets of documents on the basis of these semantic descriptions. A prerequisite to realizing such a semantic search engine is that the data contained within should have been previously annotated with concepts from medical ontologies. One major challenge in this regard is the size and complexity of medical ontologies as annotation sources. Manual annotation is particularly time-consuming and labor-intensive in a clinical environment. In this article we propose an approach to reducing the size of clinical ontologies for more efficient manual image and text annotation. More precisely, our goal is to identify smaller fragments of a large anatomy ontology that are relevant for annotating medical images from patients suffering from lymphoma. Our work is in the area of ontology modularization, which is a recent and active field of research. We describe our approach, methods and data set in detail and we discuss our results. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. Managing and Querying Image Annotation and Markup in XML

    Science.gov (United States)

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers seeking to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standards-based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of the AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and on supporting complex image and annotation queries through a native extension of the XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid. PMID:21218167
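The kind of query the AIM XML store supports can be imitated in miniature with Python's ElementTree XPath subset; the XML fragment below is a drastically simplified, made-up stand-in for the actual AIM schema, and a native XML database with XQuery would replace this in practice.

```python
import xml.etree.ElementTree as ET

# Made-up, minimal stand-in for an AIM annotation store:
doc = """
<annotations>
  <annotation id="a1" imageUID="1.2.840.1">
    <term codeMeaning="mass"/>
    <shape type="circle" cx="120" cy="88" r="14"/>
  </annotation>
  <annotation id="a2" imageUID="1.2.840.2">
    <term codeMeaning="calcification"/>
    <shape type="polyline"/>
  </annotation>
</annotations>
"""

root = ET.fromstring(doc)
# XPath predicate query: every annotation whose coded term is "mass",
# selected via the matching term's parent element ("/..").
hits = root.findall(".//annotation/term[@codeMeaning='mass']/..")
print([a.get("imageUID") for a in hits])  # ['1.2.840.1']
```

Queries over the hierarchical annotation structure (here, a predicate on a nested coded term) are exactly what becomes awkward in flat relational storage and motivates the native XML approach the paper describes.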

  1. BIOCAT: a pattern recognition platform for customizable biological image classification and annotation.

    Science.gov (United States)

    Zhou, Jie; Lamichhane, Santosh; Sterne, Gabriella; Ye, Bing; Peng, Hanchuan

    2013-10-04

    Pattern recognition algorithms are useful in bioimage informatics applications such as quantifying cellular and subcellular objects, annotating gene expressions, and classifying phenotypes. To provide effective and efficient image classification and annotation for ever-increasing collections of microscopic images, it is desirable to have tools that can combine and compare various algorithms and build customizable solutions for different biological problems. However, current tools offer limited support for generating user-friendly and extensible tools for annotating higher-dimensional images that correspond to multiple complicated categories. We developed the BIOimage Classification and Annotation Tool (BIOCAT). It is able to apply pattern recognition algorithms to two- and three-dimensional biological image sets, as well as regions of interest (ROIs) in individual images, for automatic classification and annotation. We also propose a 3D anisotropic wavelet feature extractor for extracting textural features from 3D images with xy-z resolution disparity. The extractor is one of roughly 20 built-in feature extractors, selectors and classifiers in BIOCAT. The algorithms are modularized so that they can be "chained" in a customizable way to form adaptive solutions for various problems, and the plugin-based extensibility gives the tool an open architecture for incorporating future algorithms. We have applied BIOCAT to classification and annotation of images and ROIs of different properties, with applications in cell biology and neuroscience. BIOCAT provides a user-friendly, portable platform for pattern recognition based classification of two- and three-dimensional biological images and ROIs. We show, via diverse case studies, that different algorithms and their combinations have different suitability for various problems. The customizability of BIOCAT is thus expected to be useful for providing effective and efficient solutions for a variety of biological

  2. Creating New Medical Ontologies for Image Annotation A Case Study

    CERN Document Server

    Stanescu, Liana; Brezovan, Marius; Mihai, Cristian Gabriel

    2012-01-01

    Creating New Medical Ontologies for Image Annotation focuses on the problem of the automatic annotation of medical images, which is solved in an original manner by the authors. All the steps of this process are described in detail with algorithms, experiments and results. The original algorithms proposed by the authors are compared with other efficient similar algorithms. In addition, the authors treat the problem of creating ontologies in an automatic way, starting from Medical Subject Headings (MeSH). They present some efficient and relevant annotation models as well as the basics of the annotation model used by the proposed system: Cross Media Relevance Models. Based on a text query, the system retrieves the images that contain objects described by the keywords.

  3. Image annotation based on positive-negative instances learning

    Science.gov (United States)

    Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    Automatic image annotation remains a challenging task in computer vision; its main purpose is to help manage the massive number of images on the Internet and to assist intelligent retrieval. This paper designs a new image annotation model based on a visual bag of words, using low-level features such as color and texture as well as mid-level features such as SIFT, and combining pic2pic, label2pic and label2label correlations to measure the correlation degree between labels and images. We aim to prune the specific features for each single label and formalize the annotation task as a learning process based on positive-negative instances learning. Experiments performed on the Corel5K dataset yield promising results compared with other existing methods.

  4. Tagging like Humans: Diverse and Distinct Image Annotation

    KAUST Repository

    Wu, Baoyuan

    2018-03-31

    In this work we propose a new automatic image annotation model, dubbed diverse and distinct image annotation (D2IA). The generative model D2IA is inspired by ensembles of human annotations, which create semantically relevant, yet distinct and diverse tags. In D2IA, we generate a relevant and distinct tag subset, in which the tags are relevant to the image contents and semantically distinct from each other, using sequential sampling from a determinantal point process (DPP) model. Multiple such tag subsets, covering diverse semantic aspects or diverse semantic levels of the image contents, are generated by randomly perturbing the DPP sampling process. We leverage a generative adversarial network (GAN) model to train D2IA. Extensive experiments, including quantitative and qualitative comparisons as well as human subject studies, on two benchmark datasets demonstrate that the proposed model can produce more diverse and distinct tags than the state of the art.
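    The sequential DPP sampling step can be sketched as follows. This is an illustrative greedy-stochastic heuristic assuming a per-tag relevance score and a semantic similarity matrix are given; it is not the authors' implementation.

```python
import numpy as np

def sample_distinct_tags(quality, similarity, k, rng=None):
    """Sequentially sample up to k tags favouring relevance and distinctness.

    quality:    (n,) relevance score per tag (assumed given by a tagger)
    similarity: (n, n) PSD semantic similarity matrix between tags
    At each step a tag is drawn with probability proportional to its
    conditional kernel variance, which shrinks for tags similar to
    those already chosen (a DPP-style quality-diversity trade-off).
    """
    rng = rng or np.random.default_rng()
    q = np.asarray(quality, float)
    L = q[:, None] * similarity * q[None, :]   # quality-diversity kernel
    n = L.shape[0]
    chosen = []
    for _ in range(k):
        rest = [i for i in range(n) if i not in chosen]
        if chosen:
            Lss = L[np.ix_(chosen, chosen)]
            inv = np.linalg.inv(Lss + 1e-9 * np.eye(len(chosen)))
            gains = np.array([L[i, i] - L[i, chosen] @ inv @ L[chosen, i]
                              for i in rest])
        else:
            gains = L.diagonal()[rest].copy()
        gains = np.clip(gains, 0, None)
        if gains.sum() <= 0:
            break                              # nothing distinct left
        probs = gains / gains.sum()
        chosen.append(rest[rng.choice(len(rest), p=probs)])
    return chosen
```

    Randomly re-running the sampler (different RNG streams) yields multiple tag subsets, mirroring the perturbed-sampling idea in the abstract.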

  5. Learning visual contexts for image annotation from Flickr groups

    NARCIS (Netherlands)

    Ulges, A.; Worring, M.; Breuel, T.

    2011-01-01

    We present an extension of automatic image annotation that takes the context of a picture into account. Our core assumption is that users do not only provide individual images to be tagged, but group their pictures into batches (e.g., all snapshots taken over the same holiday trip), whereas the

  6. An Annotated Dataset of 14 Cardiac MR Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated cardiac MR images. Points of correspondence are placed on each image at the left ventricle (LV). As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.

  7. iPad: Semantic annotation and markup of radiological images.

    Science.gov (United States)

    Rubin, Daniel L; Rodriguez, Cesar; Shah, Priyanka; Beaulieu, Chris

    2008-11-06

    Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.

  8. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    Science.gov (United States)

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents a novel multiple-keyword annotation method for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses a confidence score assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.
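    The center-symmetric LBP descriptor mentioned above compares the four opposing pixel pairs of each pixel's 8-neighbourhood, yielding a 4-bit code per pixel; the histogram of codes over a region is the texture feature. A minimal sketch (function name and threshold are illustrative, and the paper's wavelet stage is omitted):

```python
import numpy as np

def cs_lbp_histogram(img, threshold=0.01):
    """Center-symmetric LBP: each pixel gets a 4-bit code from the four
    opposing neighbour pairs (N/S, NE/SW, E/W, SE/NW); the normalized
    16-bin histogram of codes is a compact texture descriptor."""
    img = np.asarray(img, float)
    h, w = img.shape
    # offsets of the opposing pairs in a 3x3 neighbourhood
    pairs = [((0, 1), (2, 1)), ((0, 2), (2, 0)),
             ((1, 2), (1, 0)), ((2, 2), (0, 0))]
    code = np.zeros((h - 2, w - 2), dtype=int)
    for bit, ((r1, c1), (r2, c2)) in enumerate(pairs):
        a = img[r1:r1 + h - 2, c1:c1 + w - 2]
        b = img[r2:r2 + h - 2, c2:c2 + w - 2]
        code |= ((a - b) > threshold).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=16)
    return hist / hist.sum()
```

    Such per-region histograms would then be fed to a random forest classifier, as in the paper's annotation pipeline.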

  9. Image annotation by deep neural networks with attention shaping

    Science.gov (United States)

    Zheng, Kexin; Lv, Shaohe; Ma, Fang; Chen, Fei; Jin, Chi; Dou, Yong

    2017-07-01

    Image annotation is the task of assigning semantic labels to an image. Recently, deep neural networks with visual attention have been utilized successfully in many computer vision tasks. In this paper, we show that the conventional attention mechanism is easily misled by the salient class, i.e., the attended region always contains part of the image area describing the content of the salient class at different attention iterations. To this end, we propose a novel attention-shaping mechanism, which aims to maximize the non-overlapping area between consecutive attention processes by taking into account the history of previous attention vectors. Several weighting policies are studied to utilize the history information in different manners. On two benchmark datasets, PASCAL VOC2012 and MIRFlickr-25k, the average precision is improved by up to 10% in comparison with state-of-the-art annotation methods.
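    One simple weighting policy in the spirit of the abstract (an assumption for illustration, not the authors' exact formulation) penalizes the attention logits by the accumulated attention of earlier iterations, so later attention maps shift away from the salient region:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def shaped_attention(scores, history, strength=2.0):
    """Subtract a penalty proportional to the accumulated attention of
    earlier iterations, so the new attention map favours regions that
    have not yet been attended.

    scores:  (n,) raw attention logits over n image regions
    history: list of previous (n,) attention weight vectors
    """
    penalty = np.sum(history, axis=0) if history else np.zeros_like(scores)
    return softmax(np.asarray(scores, float) - strength * penalty)

# iterate: each step's attention is appended to the history
history = []
scores = np.array([3.0, 1.0, 0.5])        # salient region dominates
for _ in range(3):
    history.append(shaped_attention(scores, history))
```

    After a few iterations, the weight on the dominant (salient) region decays, which is the non-overlap behaviour the shaping mechanism targets.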

  10. Selected annotated bibliographies for adaptive filtering of digital image data

    Science.gov (United States)

    Mayers, Margaret; Wood, Lynnette

    1988-01-01

    Digital spatial filtering is an important tool both for enhancing the information content of satellite image data and for implementing cosmetic effects which make the imagery more interpretable and appealing to the eye. Spatial filtering is a context-dependent operation that alters the gray level of a pixel by computing a weighted average formed from the gray level values of other pixels in the immediate vicinity. Traditional spatial filtering involves passing a particular filter or set of filters over an entire image. This assumes that the filter parameter values are appropriate for the entire image, which in turn is based on the assumption that the statistics of the image are constant over the image. However, the statistics of an image may vary widely over the image, requiring an adaptive or "smart" filter whose parameters change as a function of the local statistical properties of the image. Then a pixel would be averaged only with more typical members of the same population. This annotated bibliography cites some of the work done in the area of adaptive filtering. The methods usually fall into two categories: (a) those that segment the image into subregions, each assumed to have stationary statistics, and use a different filter on each subregion; and (b) those that use a two-dimensional "sliding window" to continuously estimate the filter parameters from local statistics. These methods may operate in either the spatial or frequency domain, or may utilize both.
They may be used to deal with images degraded by space-variant noise, to suppress undesirable local radiometric statistics while enforcing desirable (user-defined) statistics, to treat problems where space-variant point spread functions are involved, to segment images into regions of constant value for classification, or to "tune" images in order to remove (nonstationary) variations in illumination, noise, contrast, shadows, or haze. Since adaptive filtering, like nonadaptive filtering, is used in image processing to accomplish various goals, this bibliography
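    The sliding-window category (b) can be illustrated with a Lee-style local-statistics filter: each pixel is pulled toward its window mean by a gain that vanishes where local variance is at the noise level and approaches one on edges. A minimal NumPy sketch (window size and noise variance are illustrative parameters):

```python
import numpy as np

def local_mean(img, k):
    """Mean over a (2k+1)x(2k+1) sliding window (edge-replicated)."""
    p = np.pad(img, k, mode='edge')
    acc = np.zeros_like(img, dtype=float)
    for dr in range(2 * k + 1):
        for dc in range(2 * k + 1):
            acc += p[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return acc / (2 * k + 1) ** 2

def lee_filter(img, k=1, noise_var=0.01):
    """Adaptive (Lee-style) smoothing: the gain is ~0 in flat regions
    (local variance ~ noise, so the pixel is replaced by the window
    mean) and ~1 on edges (variance >> noise, so detail is kept)."""
    img = np.asarray(img, float)
    mu = local_mean(img, k)
    var = local_mean(img ** 2, k) - mu ** 2
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0, 1)
    return mu + gain * (img - mu)
```

    This is the "pixel averaged only with more typical members of the same population" idea in its simplest form; segmentation-based methods of category (a) would instead fit one filter per region.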

  11. Evaluation of web-based annotation of ophthalmic images for multicentric clinical trials.

    Science.gov (United States)

    Chalam, K V; Jain, P; Shah, V A; Shah, Gaurav Y

    2006-06-01

    An Internet browser-based annotation system can be used to identify and describe features in digitalized retinal images, in multicentric clinical trials, in real time. In this web-based annotation system, the user employs a mouse to draw and create annotations on a transparent layer that encapsulates the observations and interpretations of a specific image. Multiple annotation layers may be overlaid on a single image. These layers may correspond to annotations by different users on the same image, or annotations of a temporal sequence of images of a disease process over a period of time. In addition, geometrical properties of annotated figures may be computed and measured. The annotations are stored in a central repository database on a server, from which they can be retrieved by multiple users in real time. This system facilitates objective evaluation of digital images and comparison of double-blind readings of digital photographs, with an identifiable audit trail. Annotation of ophthalmic images allowed clinically feasible and useful interpretation to track properties of an area of fundus pathology. This provided an objective method to monitor properties of pathologies over time, an essential component of multicentric clinical trials. The annotation system also allowed users to view stereoscopic images as stereo pairs. This web-based annotation system is useful and valuable in monitoring patient care, in multicentric clinical trials, telemedicine, teaching and routine clinical settings.
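    Computing geometrical properties of annotated figures, as mentioned above, typically reduces to classic planar formulas; for example, the area enclosed by a polygonal annotation can be obtained with the shoelace formula (an illustrative sketch, not the system's actual code):

```python
def polygon_area(points):
    """Area of a simple polygon drawn as an annotation layer,
    via the shoelace formula; points are (x, y) vertices in order."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

    Tracking such areas across the layers of a temporal image sequence gives the objective measure of lesion growth that multicentric trials require.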

  12. OntoVIP: an ontology for the annotation of object models used for medical image simulation.

    Science.gov (United States)

    Gibaud, Bernard; Forestier, Germain; Benoit-Cattin, Hugues; Cervenansky, Frédéric; Clarysse, Patrick; Friboulet, Denis; Gaignard, Alban; Hugonnard, Patrick; Lartizien, Carole; Liebgott, Hervé; Montagnat, Johan; Tabary, Joachim; Glatard, Tristan

    2014-12-01

    This paper describes the creation of a comprehensive conceptualization of object models used in medical image simulation, suitable for major imaging modalities and simulators. The goal is to create an application ontology that can be used to annotate the models in a repository integrated in the Virtual Imaging Platform (VIP), to facilitate their sharing and reuse. Annotations make the anatomical, physiological and pathophysiological content of the object models explicit. In such an interdisciplinary context we chose to rely on a common integration framework provided by a foundational ontology that facilitates the consistent integration of the various modules extracted from several existing ontologies, i.e., FMA, PATO, MPATH, RadLex and ChEBI. Emphasis is put on the methodology for achieving this extraction and integration. The most salient aspects of the ontology are presented, especially the organization in model layers, as well as its use to browse and query the model repository. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Supporting Keyword Search for Image Retrieval with Integration of Probabilistic Annotation

    Directory of Open Access Journals (Sweden)

    Tie Hua Zhou

    2015-05-01

    Full Text Available The ever-increasing quantities of digital photo resources are annotated with enriching vocabularies to form semantic annotations. Photo-sharing social networks have boosted the need for efficient and intuitive querying to respond to user requirements in large-scale image collections. In order to help users formulate efficient and effective image retrieval, we present a novel integration of a probabilistic model into a keyword-query architecture that models the probability distribution of image annotations, allowing users to obtain satisfactory results from image retrieval via the integration of multiple annotations. We focus on the annotation integration step in order to specify the meaning of each image annotation, thus leading to the most representative annotations of the intent of a keyword search. For this demonstration, we show how a probabilistic model has been integrated with semantic annotations to allow users to intuitively define explicit and precise keyword queries in order to retrieve satisfactory image results distributed across heterogeneous large data sources. Our experiments on the SBU database (collected by Stony Brook University) show that (i) our integrated annotation contains higher-quality representatives and semantic matches; and (ii) annotation integration can indeed improve image search result quality.

  14. Physician evaluation and acceptance of remote transmission of CT, digital subtraction angiography, and US annotated images

    International Nuclear Information System (INIS)

    Haskin, M.E.; Robbins, C.; Kohn, M.; Laffey, P.A.; Haskin, P.H.; Teplick, J.G.; Teplick, S.K.; Peyster, R.G.

    1986-01-01

    The authors have found annotated images an effective way of communicating the results of imaging studies to referring physicians. Of particular value is the collation of representative images from several modalities. Previously, hard copy of this collation was sent to the referring physician as an integrated imaging report. Recently they developed a computer-based station that transmits annotated images to remote personal computer (PC) terminals via a telephone modem, which requires 30 seconds to send each image. This annotated image report can be quickly accessed by the referring physician at the remote PC terminal. The prototype system, its utility, diagnostic fidelity, and the potential of this remote system are described.

  15. Integrated annotation and analysis of in situ hybridization images using the ImAnno system: application to the ear and sensory organs of the fetal mouse.

    Science.gov (United States)

    Romand, Raymond; Ripp, Raymond; Poidevin, Laetitia; Boeglin, Marcel; Geffers, Lars; Dollé, Pascal; Poch, Olivier

    2015-01-01

    An in situ hybridization (ISH) study was performed on 2000 murine genes representing around 10% of the protein-coding genes present in the mouse genome, using data generated by the EURExpress consortium. This study was carried out in 25 tissues of late gestation embryos (E14.5), with a special emphasis on the developing ear and on five distinct developing sensory organs, including the cochlea, the vestibular receptors, the sensory retina, the olfactory organ, and the vibrissae follicles. The results obtained from an analysis of more than 11,000 micrographs have been integrated in a newly developed knowledgebase, called ImAnno. In addition to managing the multilevel micrograph annotations performed by human experts, ImAnno provides public access to various integrated databases and tools. Thus, it facilitates the analysis of complex ISH gene expression patterns, as well as functional annotation and interaction of gene sets. It also provides direct links to human pathways and diseases. Hierarchical clustering of expression patterns in the 25 tissues revealed three main branches corresponding to tissues with common functions and/or embryonic origins. To illustrate the integrative power of ImAnno, we explored the expression, function and disease traits of the sensory epithelia of the five presumptive sensory organs. The study identified 623 genes (out of 2000) concomitantly expressed in the five embryonic epithelia, among which many (∼12%) were involved in human disorders. Finally, various multilevel interaction networks were characterized, highlighting differential functional enrichments of directly or indirectly interacting genes. These analyses revealed an under-representation of "sensory" functions in the sensory gene set, suggesting that E14.5 is a pivotal stage between the developmental phase and the functional phase that will be fully reached only after birth.

  16. Can Global Visual Features Improve Tag Recommendation for Image Annotation?

    Directory of Open Access Journals (Sweden)

    Oge Marques

    2010-08-01

    Full Text Available Recent advances in the fields of digital photography, networking and computing have made it easier than ever for users to store and share photographs. However, without sufficient metadata, e.g., in the form of tags, photos are difficult to find and organize. In this paper, we describe a system that recommends tags for image annotation. We postulate that the use of low-level global visual features can improve the quality of the tag recommendation process when compared to a baseline statistical method based on tag co-occurrence. We present results from experiments conducted using photos and metadata sourced from the Flickr photo website, which suggest that the use of visual features improves the mean average precision (MAP) of the system and increases the system's ability to suggest different tags, thereby justifying the associated increase in complexity.
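    The baseline tag co-occurrence method can be sketched as follows: count how often tag pairs appear together across photos, then rank candidate tags by their aggregated co-occurrence with the tags a user has already assigned (names are illustrative; the paper's visual-feature extension is not shown):

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(tag_lists):
    """Count symmetric tag co-occurrences over a photo collection."""
    co = Counter()
    for tags in tag_lists:
        for a, b in combinations(sorted(set(tags)), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    return co

def recommend(seed_tags, co, top_n=3):
    """Rank candidate tags by total co-occurrence with the seed tags."""
    scores = Counter()
    for s in set(seed_tags):
        for (a, b), n in co.items():
            if a == s and b not in seed_tags:
                scores[b] += n
    return [t for t, _ in scores.most_common(top_n)]
```

    The paper's contribution is then to re-rank such statistically suggested candidates using global visual features of the query photo.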

  17. Learning pathology using collaborative vs. individual annotation of whole slide images: a mixed methods trial.

    Science.gov (United States)

    Sahota, Michael; Leung, Betty; Dowdell, Stephanie; Velan, Gary M

    2016-12-12

    Students in biomedical disciplines require understanding of normal and abnormal microscopic appearances of human tissues (histology and histopathology). For this purpose, practical classes in these disciplines typically use virtual microscopy, viewing digitised whole slide images in web browsers. To enhance engagement, tools have been developed to enable individual or collaborative annotation of whole slide images within web browsers. To date, there have been no studies that have critically compared the impact on learning of individual and collaborative annotations on whole slide images. Junior and senior students engaged in Pathology practical classes within Medical Science and Medicine programs participated in cross-over trials of individual and collaborative annotation activities. Students' understanding of microscopic morphology was compared using timed online quizzes, while students' perceptions of learning were evaluated using an online questionnaire. For senior medical students, collaborative annotation of whole slide images was superior for understanding key microscopic features when compared to individual annotation; whilst being at least equivalent to individual annotation for junior medical science students. Across cohorts, students agreed that the annotation activities provided a user-friendly learning environment that met their flexible learning needs, improved efficiency, provided useful feedback, and helped them to set learning priorities. Importantly, these activities were also perceived to enhance motivation and improve understanding. Collaborative annotation improves understanding of microscopic morphology for students with sufficient background understanding of the discipline. These findings have implications for the deployment of annotation activities in biomedical curricula, and potentially for postgraduate training in Anatomical Pathology.

  18. BreakingNews: Article Annotation by Image and Text Processing.

    Science.gov (United States)

    Ramisa, Arnau; Yan, Fei; Moreno-Noguer, Francesc; Mikolajczyk, Krystian

    2018-05-01

    Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce an adaptive CNN architecture that shares most of the structure for multiple tasks including source detection, article illustration and geolocation of articles. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and user comments). We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field.

  19. Multi-Label Classification Based on Low Rank Representation for Image Annotation

    Directory of Open Access Journals (Sweden)

    Qiaoyu Tan

    2017-01-01

    Full Text Available Annotating remote sensing images is a challenging task due to its labor-demanding annotation process and requirement of expert knowledge, especially when images can be annotated with multiple semantic concepts (or labels). To automatically annotate these multi-label images, we introduce an approach called Multi-Label Classification based on Low Rank Representation (MLC-LRR). MLC-LRR first utilizes low rank representation in the feature space of images to compute the low-rank-constrained coefficient matrix; it then uses the coefficient matrix to define a feature-based graph and to capture the global relationships between images. Next, it utilizes low rank representation in the label space of labeled images to construct a semantic graph. Finally, these two graphs are exploited to train a graph-based multi-label classifier. To validate the performance of MLC-LRR against other related graph-based multi-label methods in annotating images, we conduct experiments on a publicly available multi-label remote sensing image dataset (Land Cover). We perform additional experiments on five real-world multi-label image datasets to further investigate the performance of MLC-LRR. An empirical study demonstrates that MLC-LRR achieves better performance in annotating images than the competing methods across various evaluation criteria; it can also effectively exploit the global structure and label correlations of multi-label images.
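    For noise-free data, the low rank representation subproblem min ||Z||_* s.t. X = XZ has a known closed-form solution, the shape interaction matrix Z* = Vr Vr^T obtained from the skinny SVD of X. The following simplified sketch (which ignores the noise term a full MLC-LRR formulation would handle) uses it to build the feature-based affinity graph:

```python
import numpy as np

def lrr_affinity(X, tol=1e-8):
    """Noise-free LRR: the minimiser of ||Z||_* s.t. X = X Z is the
    shape interaction matrix Z* = Vr @ Vr.T from the skinny SVD of X.
    The symmetrised |Z*| serves as a feature-based affinity graph.

    X: (d, n) data matrix, one image feature vector per column.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int((s > tol * s.max()).sum())       # numerical rank
    Vr = Vt[:r].T                            # (n, r) right singular vectors
    Z = Vr @ Vr.T                            # low-rank coefficient matrix
    return (np.abs(Z) + np.abs(Z.T)) / 2     # symmetric affinity matrix
```

    Columns drawn from independent subspaces produce zero cross-block affinities, which is why the coefficient matrix captures the global grouping structure of the image collection.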

  20. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, the multimedia annotation problem has been attracting significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Then, different image features with different magnitudes will result in different performance for automatic image annotation. To this end, a Gaussian normalization method is utilized to normalize different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.
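    The PLSA core alternates an E-step computing topic responsibilities P(z|w,d) with an M-step re-estimating P(w|z) and P(z|d). A minimal sketch on a word-by-document count matrix follows; the paper's asymmetric image-annotation variant, TSVM step, and Gaussian normalization are omitted:

```python
import numpy as np

def plsa(counts, n_topics=2, n_iter=50, seed=0):
    """Minimal PLSA via EM on a (n_words, n_docs) count matrix.

    E-step: P(z|w,d) proportional to P(w|z) * P(z|d).
    M-step: re-estimate P(w|z) and P(z|d) from the weighted counts.
    """
    rng = np.random.default_rng(seed)
    W, D = counts.shape
    p_wz = rng.random((W, n_topics)); p_wz /= p_wz.sum(0)
    p_zd = rng.random((n_topics, D)); p_zd /= p_zd.sum(0)
    for _ in range(n_iter):
        # E-step: responsibilities, shape (W, D, Z)
        joint = p_wz[:, None, :] * p_zd.T[None, :, :]
        resp = joint / np.maximum(joint.sum(-1, keepdims=True), 1e-12)
        weighted = counts[:, :, None] * resp
        # M-step: renormalize the expected counts
        p_wz = weighted.sum(1); p_wz /= np.maximum(p_wz.sum(0), 1e-12)
        p_zd = weighted.sum(0).T; p_zd /= np.maximum(p_zd.sum(0), 1e-12)
    return p_wz, p_zd
```

    In the annotation setting, one modality holds visual-region tokens and the other keywords; the learned conditionals then score candidate annotations, yielding the confidence-ranked candidate set the abstract describes.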

  1. Application of neuroanatomical ontologies for neuroimaging data annotation

    Directory of Open Access Journals (Sweden)

    Jessica A Turner

    2010-06-01

    Full Text Available The annotation of functional neuroimaging results for data sharing and reuse is particularly challenging, due to the diversity of terminologies of neuroanatomical structures and cortical parcellation schemes. To address this challenge, we extended the Foundational Model of Anatomy Ontology (FMA) to include cytoarchitectural Brodmann area labels and a morphological cortical labeling scheme (e.g., the part of Brodmann area 6 in the left precentral gyrus). This representation was also used to augment the neuroanatomical axis of RadLex, the ontology for clinical imaging. The resulting neuroanatomical ontology contains explicit relationships indicating which brain regions are "part of" which other regions, across cytoarchitectural and morphological labeling schemas. We annotated a large functional neuroimaging dataset with terms from the ontology and applied a reasoning engine to analyze this dataset in conjunction with the ontology, achieving successful inferences from the most specific level (e.g., how many subjects showed activation in a sub-part of the middle frontal gyrus?) to the more general (how many activations were found in areas connected via a known white matter tract?). In summary, we have produced a neuroanatomical ontology that harmonizes several different terminologies of neuroanatomical structures and cortical parcellation schemes. This neuroanatomical ontology is publicly available as a view of FMA at the BioPortal website at http://rest.bioontology.org/bioportal/ontologies/download/10005. The ontological encoding of anatomic knowledge can be exploited by computer reasoning engines to make inferences about neuroanatomical relationships described in imaging datasets using different terminologies. This approach could ultimately enable knowledge discovery from large, distributed fMRI studies or medical record mining.

  2. Plann: A command-line application for annotating plastome sequences.

    Science.gov (United States)

    Huang, Daisie I; Cronk, Quentin C B

    2015-08-01

    Plann automates the process of annotating a plastome sequence in GenBank format for either downstream processing or for GenBank submission by annotating a new plastome based on a similar, well-annotated plastome. Plann is a Perl script to be executed on the command line. Plann compares a new plastome sequence to the features annotated in a reference plastome and then shifts the intervals of any matching features to the locations in the new plastome. Plann's output can be used in the National Center for Biotechnology Information's tbl2asn to create a Sequin file for GenBank submission. Unlike Web-based annotation packages, Plann is a locally executable script that will accurately annotate a plastome sequence to a locally specified reference plastome. Because it executes from the command line, it is ready to use in other software pipelines and can be easily rerun as a draft plastome is improved.

  3. Protein Annotators' Assistant: A Novel Application of Information Retrieval Techniques.

    Science.gov (United States)

    Wise, Michael J.

    2000-01-01

    Protein Annotators' Assistant (PAA) is a software system which assists protein annotators in assigning functions to newly sequenced proteins. PAA employs a number of information retrieval techniques in a novel setting and is thus related to text categorization, where multiple categories may be suggested, except that in this case none of the…

  4. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel

    2014-12-01

    Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institute of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.

  5. eHistology image and annotation data from the Kaufman Atlas of Mouse Development.

    Science.gov (United States)

    Baldock, Richard A; Armit, Chris

    2017-12-20

    "The Atlas of Mouse Development" by Kaufman is a classic paper atlas that is the de facto standard for the definition of mouse embryo anatomy in the context of standard histological images. We have re-digitised the original H&E stained tissue sections used for the book at high resolution and transferred the hand-drawn annotations to digital form. We have augmented the annotations with standard ontological assignments (EMAPA anatomy) and made the data freely available via an online viewer (eHistology) and from the University of Edinburgh DataShare archive. The dataset captures and preserves the definitive anatomical knowledge of the original atlas, provides a core image set for deeper community annotation and teaching, and delivers a unique high-quality set of high-resolution histological images through mammalian development for manual and automated analysis. © The Authors 2017. Published by Oxford University Press.

  6. Annotating MYC status with 89Zr-transferrin imaging.

    Science.gov (United States)

    Holland, Jason P; Evans, Michael J; Rice, Samuel L; Wongvipat, John; Sawyers, Charles L; Lewis, Jason S

    2012-10-01

    A noninvasive technology that quantitatively measures the activity of oncogenic signaling pathways could have a broad impact on cancer diagnosis and treatment with targeted therapies. Here we describe the development of (89)Zr-desferrioxamine-labeled transferrin ((89)Zr-transferrin), a new positron emission tomography (PET) radiotracer that binds the transferrin receptor 1 (TFRC, CD71) with high avidity. The use of (89)Zr-transferrin produces high-contrast PET images that quantitatively reflect treatment-induced changes in MYC-regulated TFRC expression in a MYC-driven prostate cancer xenograft model. Moreover, (89)Zr-transferrin imaging can detect the in situ development of prostate cancer in a transgenic MYC prostate cancer model, as well as in prostatic intraepithelial neoplasia (PIN) before histological or anatomic evidence of invasive cancer. These preclinical data establish (89)Zr-transferrin as a sensitive tool for noninvasive measurement of oncogene-driven TFRC expression in prostate and potentially other cancers, with prospective near-term clinical application.

  7. Towards the VWO Annotation Service: a Success Story of the IMAGE RPI Expert Rating System

    Science.gov (United States)

    Reinisch, B. W.; Galkin, I. A.; Fung, S. F.; Benson, R. F.; Kozlov, A. V.; Khmyrov, G. M.; Garcia, L. N.

    2010-12-01

    Interpretation of Heliophysics wave data requires specialized knowledge of wave phenomena. Users of the virtual wave observatory (VWO) will greatly benefit from a data annotation service that will allow querying of data by phenomenon type, thus helping accomplish the VWO goal to make Heliophysics wave data searchable, understandable, and usable by the scientific community. Individual annotations can be sorted by phenomenon type and reduced into event lists (catalogs). However, in contrast to the event lists, annotation records allow a greater flexibility of collaborative management by more easily admitting operations of addition, revision, or deletion. They can therefore become the building blocks for an interactive Annotation Service with a suitable graphic user interface to the VWO middleware. The VWO Annotation Service vision is an interactive, collaborative sharing of domain expert knowledge with fellow scientists and students alike. An effective prototype of the VWO Annotation Service has been in operation at the University of Massachusetts Lowell since 2001. An expert rating system (ERS) was developed for annotating the IMAGE radio plasma imager (RPI) active sounding data containing 1.2 million plasmagrams. RPI data analysts can use the ERS to submit expert ratings of plasmagram features, such as the presence of echo traces resulting from RPI signals reflected from distant plasma structures. Since its inception in 2001, the RPI ERS has accumulated 7351 expert plasmagram ratings in 16 phenomenon categories, together with free-text descriptions and other metadata. In addition to human expert ratings, the system holds 225,125 ratings submitted by the CORPRAL data prospecting software, which employs a model of human pre-attentive vision to select images potentially containing interesting features. The annotation records proved to be instrumental in a number of investigations where manual data exploration would have been prohibitively tedious and expensive.

  8. Joint Probability Models of Radiology Images and Clinical Annotations

    Science.gov (United States)

    Arnold, Corey Wells

    2009-01-01

    Radiology data, in the form of images and reports, is growing at a high rate due to the introduction of new imaging modalities, new uses of existing modalities, and the growing importance of objective image information in the diagnosis and treatment of patients. This increase has resulted in an enormous set of image data that is richly annotated…

  9. Similarity maps and hierarchical clustering for annotating FT-IR spectral images.

    Science.gov (United States)

    Zhong, Qiaoyong; Yang, Chen; Großerüschkamp, Frederik; Kallenbach-Thieltges, Angela; Serocka, Peter; Gerwert, Klaus; Mosig, Axel

    2013-11-20

    Unsupervised segmentation of multi-spectral images plays an important role in annotating infrared microscopic images and is an essential step in label-free spectral histopathology. In this context, diverse clustering approaches have been utilized and evaluated in order to achieve segmentations of Fourier Transform Infrared (FT-IR) microscopic images that agree with histopathological characterization. We introduce so-called interactive similarity maps as an alternative strategy for annotating infrared microscopic images. We demonstrate that interactive similarity maps yield segmentations as accurate as those obtained from conventionally used hierarchical clustering approaches. In order to perform this comparison on quantitative grounds, we provide a scheme that allows the identification of non-horizontal cuts in dendrograms. This yields a validation scheme for hierarchical clustering approaches commonly used in infrared microscopy. We demonstrate that interactive similarity maps may identify more accurate segmentations than hierarchical clustering-based approaches, and are thus a viable and, owing to their interactive nature, attractive alternative to hierarchical clustering. Our validation scheme furthermore shows that the performance of hierarchical two-means clustering is comparable to that of the traditionally used Ward's clustering. As the former is much more efficient in time and memory, our results suggest a less resource-demanding alternative for annotating large spectral images.
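The record above notes that top-down "hierarchical two-means" clustering can match Ward's clustering at a fraction of the time and memory cost. As an illustration only (not the authors' implementation), a minimal bisecting two-means over pixel spectra might look like this, with toy 2-channel spectra standing in for full FT-IR wavenumber vectors:

```python
# Minimal "hierarchical two-means" sketch: recursively bisect the set of
# pixel spectra with 2-means until the requested number of segments is
# reached. Pure-Python stand-in for the clustering step described above;
# real FT-IR pipelines operate on thousands of wavenumber channels.

def two_means(spectra, iters=20):
    """Split spectra into two clusters with plain k-means (k=2)."""
    a, b = spectra[0], spectra[-1]          # crude initial centroids
    for _ in range(iters):
        groups = ([], [])
        for s in spectra:
            da = sum((x - y) ** 2 for x, y in zip(s, a))
            db = sum((x - y) ** 2 for x, y in zip(s, b))
            groups[0 if da <= db else 1].append(s)
        if not groups[0] or not groups[1]:
            break
        a = [sum(c) / len(groups[0]) for c in zip(*groups[0])]
        b = [sum(c) / len(groups[1]) for c in zip(*groups[1])]
    return groups

def bisecting_cluster(spectra, k):
    """Top-down hierarchy: repeatedly bisect the largest cluster."""
    clusters = [list(spectra)]
    while len(clusters) < k:
        clusters.sort(key=len, reverse=True)
        left, right = two_means(clusters.pop(0))
        clusters += [left, right]
    return clusters
```

Each bisection corresponds to one internal node of the dendrogram, which is what makes non-horizontal cuts natural to express in this scheme.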

  10. Automatically Annotated Mapping for Indoor Mobile Robot Applications

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Howard, Thomas J.

    2012-01-01

    This paper presents a new and practical method for mapping and annotating indoor environments for mobile robot use. The method makes use of 2D occupancy grid maps for metric representation, and topology maps to indicate the connectivity of the ‘places-of-interests’ in the environment. Novel use...... localization and mapping in topology space, and fuses camera and robot pose estimations to build an automatically annotated global topo-metric map. It is developed as a framework for a hospital service robot and tested in a real hospital. Experiments show that the method is capable of producing globally...... consistent, automatically annotated hybrid metric-topological maps that is needed by mobile service robots....

  11. Enabling Histopathological Annotations on Immunofluorescent Images through Virtualization of Hematoxylin and Eosin.

    Science.gov (United States)

    Lahiani, Amal; Klaiman, Eldad; Grimm, Oliver

    2018-01-01

    Medical diagnosis and clinical decisions rely heavily on the histopathological evaluation of tissue samples, especially in oncology. Historically, classical histopathology has been the gold standard for tissue evaluation and assessment by pathologists. The most widely used dyes in histopathology are hematoxylin and eosin (H&E), as the diagnosis of most malignancies is largely based on this protocol. H&E staining has been used for more than a century to identify the tissue characteristics and structural morphologies that are needed for tumor diagnosis. In many cases, as tissue is scarce in clinical studies, fluorescence imaging is necessary to allow staining of the same specimen with multiple biomarkers simultaneously. Since fluorescence imaging is a relatively new technology in the pathology landscape, histopathologists are not used to or trained in annotating or interpreting these images. To allow pathologists to annotate these images without the need for additional training, we designed an algorithm for the conversion of fluorescence images to brightfield H&E images. In this algorithm, we use fluorescent nuclei staining to reproduce the hematoxylin information and natural tissue autofluorescence to reproduce the eosin information, avoiding the necessity to specifically stain proteins or intracellular structures with an additional fluorescence stain. Our method is based on optimizing a transform function from fluorescence to H&E images using least mean square optimization. It results in high-quality virtual H&E digital images that can easily and efficiently be analyzed by pathologists. We validated our results with pathologists by asking them to annotate tumors in real and virtual H&E whole-slide images, and we obtained promising results. Hence, we provide a solution that enables pathologists to assess tissue and annotate specific structures based on multiplexed fluorescence images.
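The transform above is learned by least-squares optimization from fluorescence channels to H&E color values. A minimal, hypothetical sketch of that idea fits per-channel weights for two fluorescence inputs (nuclear stain standing in for hematoxylin, autofluorescence for eosin) via the 2x2 normal equations; the published method fits a richer transform over whole images:

```python
# Hedged sketch of the core idea: learn a per-colour-channel linear map
# from two fluorescence inputs to brightfield intensities by least
# squares. The 2x2 normal-equation solve is the minimal stand-in for
# the paper's least-mean-square optimisation.

def fit_channel(f1, f2, target):
    """Least-squares weights (w1, w2) minimising sum (w1*a + w2*b - t)^2."""
    s11 = sum(a * a for a in f1)
    s22 = sum(b * b for b in f2)
    s12 = sum(a * b for a, b in zip(f1, f2))
    t1 = sum(a * t for a, t in zip(f1, target))
    t2 = sum(b * t for b, t in zip(f2, target))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

def virtual_stain(f1, f2, weights):
    """Apply fitted weights to produce one virtual-H&E colour channel."""
    w1, w2 = weights
    return [w1 * a + w2 * b for a, b in zip(f1, f2)]
```

In practice one such fit per RGB channel, against registered real-H&E training pixels, yields the full color transform.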

  12. Enabling histopathological annotations on immunofluorescent images through virtualization of hematoxylin and eosin

    Directory of Open Access Journals (Sweden)

    Amal Lahiani

    2018-01-01

    Full Text Available Context: Medical diagnosis and clinical decisions rely heavily on the histopathological evaluation of tissue samples, especially in oncology. Historically, classical histopathology has been the gold standard for tissue evaluation and assessment by pathologists. The most widely used dyes in histopathology are hematoxylin and eosin (H&E), as the diagnosis of most malignancies is largely based on this protocol. H&E staining has been used for more than a century to identify the tissue characteristics and structural morphologies that are needed for tumor diagnosis. In many cases, as tissue is scarce in clinical studies, fluorescence imaging is necessary to allow staining of the same specimen with multiple biomarkers simultaneously. Since fluorescence imaging is a relatively new technology in the pathology landscape, histopathologists are not used to or trained in annotating or interpreting these images. Aims, Settings and Design: To allow pathologists to annotate these images without the need for additional training, we designed an algorithm for the conversion of fluorescence images to brightfield H&E images. Subjects and Methods: In this algorithm, we use fluorescent nuclei staining to reproduce the hematoxylin information and natural tissue autofluorescence to reproduce the eosin information, avoiding the necessity to specifically stain proteins or intracellular structures with an additional fluorescence stain. Statistical Analysis Used: Our method is based on optimizing a transform function from fluorescence to H&E images using least mean square optimization. Results: It results in high-quality virtual H&E digital images that can easily and efficiently be analyzed by pathologists. We validated our results with pathologists by asking them to annotate tumors in real and virtual H&E whole-slide images, and we obtained promising results. Conclusions: Hence, we provide a solution that enables pathologists to assess tissue and annotate specific structures based on multiplexed fluorescence images.

  13. Analysis and Segmentation of Face Images using Point Annotations and Linear Subspace Techniques

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This report provides an analysis of 37 annotated frontal face images. All results presented have been obtained using our freely available Active Appearance Model (AAM) implementation. To ensure the reproducibility of the presented experiments, the data set has also been made available. As such...

  14. A Methodology and Implementation for Annotating Digital Images for Context-appropriate Use in an Academic Health Care Environment

    Science.gov (United States)

    Goede, Patricia A.; Lauman, Jason R.; Cochella, Christopher; Katzman, Gregory L.; Morton, David A.; Albertine, Kurt H.

    2004-01-01

    Use of digital medical images has become common over the last several years, coincident with the release of inexpensive, mega-pixel quality digital cameras and the transition to digital radiology operation by hospitals. One problem that clinicians, medical educators, and basic scientists encounter when handling images is the difficulty of using business and graphic arts commercial-off-the-shelf (COTS) software in multicontext authoring and interactive teaching environments. The authors investigated and developed software-supported methodologies to help clinicians, medical educators, and basic scientists become more efficient and effective in their digital imaging environments. The software that the authors developed provides the ability to annotate images based on a multispecialty methodology for annotation and visual knowledge representation. This annotation methodology is designed by consensus, with contributions from the authors and physicians, medical educators, and basic scientists in the Departments of Radiology, Neurobiology and Anatomy, Dermatology, and Ophthalmology at the University of Utah. The annotation methodology functions as a foundation for creating, using, reusing, and extending dynamic annotations in a context-appropriate, interactive digital environment. The annotation methodology supports the authoring process as well as output and presentation mechanisms. The annotation methodology is the foundation for a Windows implementation that allows annotated elements to be represented as structured eXtensible Markup Language and stored separate from the image(s). PMID:14527971
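To illustrate the idea above of annotations "represented as structured eXtensible Markup Language and stored separate from the image(s)", here is a minimal, hypothetical XML serialization; the element and attribute names are invented for the example and are not the authors' actual schema:

```python
# Illustrative only: one annotation record serialised as XML and kept
# in a file separate from the image it describes, in the spirit of the
# methodology above. Element/attribute names here are hypothetical.
import xml.etree.ElementTree as ET

def make_annotation(image_ref, label, points):
    """Build an XML annotation referencing an image by name."""
    root = ET.Element("annotation", image=image_ref)
    region = ET.SubElement(root, "region", label=label)
    for x, y in points:
        ET.SubElement(region, "point", x=str(x), y=str(y))
    return ET.tostring(root, encoding="unicode")
```

Keeping the markup external like this is what allows the same image to be reused and re-annotated in different teaching or clinical contexts without modifying the pixels.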

  15. Visual Interpretation with Three-Dimensional Annotations (VITA): Three-Dimensional Image Interpretation Tool for Radiological Reporting

    OpenAIRE

    Roy, Sharmili; Brown, Michael S.; Shih, George L.

    2013-01-01

    This paper introduces a software framework called Visual Interpretation with Three-Dimensional Annotations (VITA) that is able to automatically generate three-dimensional (3D) visual summaries based on radiological annotations made during routine exam reporting. VITA summaries are in the form of rotating 3D volumes where radiological annotations are highlighted to place important clinical observations into a 3D context. The rendered volume is produced as a Digital Imaging and Communications i...

  16. Annotating images by harnessing worldwide user-tagged photos

    NARCIS (Netherlands)

    Li, X.; Snoek, C.G.M.; Worring, M.

    2009-01-01

    Automatic image tagging is important yet challenging due to the semantic gap and the lack of learning examples to model a tag's visual diversity. Meanwhile, social user tagging is creating rich multimedia content on the Web. In this paper, we propose to combine the two tagging approaches in a

  17. STRUCTURAL ANNOTATION OF EM IMAGES BY GRAPH CUT

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Hang; Auer, Manfred; Parvin, Bahram

    2009-05-08

    Biological images have the potential to reveal complex signatures that may not be amenable to morphological modeling in terms of shape, location, texture, and color. An effective analytical method is to characterize the composition of a specimen based on user-defined patterns of texture and contrast formation. However, such a simple requirement demands an improved model for stability and robustness. Here, an interactive computational model is introduced for learning patterns of interest by example. The learned patterns bound an active contour model in which the traditional gradient descent optimization is replaced by the more efficient optimization of the graph cut methods. First, the energy function is defined according to the curve evolution. Next, a graph is constructed with weighted edges on the energy function and is optimized with the graph cut algorithm. As a result, the method combines the advantages of the level set method and graph cut algorithm, i.e., "topological" invariance and computational efficiency. The technique is extended to the multi-phase segmentation problem; the method is validated on synthetic images and then applied to specimens imaged by transmission electron microscopy (TEM).
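As background for the graph-construction step described above, a toy binary segmentation by min cut can be sketched as follows. This is a generic illustration, not the paper's learned energy model: unary terms tie each pixel to object/background terminals, a pairwise term penalizes label changes between neighbours, and Edmonds-Karp max flow finds the minimum cut. A 1-D "image" keeps the sketch short:

```python
# Generic graph-cut segmentation sketch (not the paper's energy model).
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; returns the source side of the min cut."""
    for u, v in list(cap):              # ensure every edge has a reverse entry
        cap[(v, u)] += 0
    flow = defaultdict(int)
    def residual(u, v):
        return cap[(u, v)] - flow[(u, v)]
    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for a, b in cap:
                if a == u and b not in parent and residual(a, b) > 0:
                    parent[b] = u
                    if b == t:
                        return parent
                    q.append(b)
        return None
    while True:
        parent = bfs()
        if parent is None:
            break
        path, v = [], t                 # walk back from t to recover the path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual(u, w) for u, w in path)
        for u, w in path:
            flow[(u, w)] += bottleneck
            flow[(w, u)] -= bottleneck
    side, q = {s}, deque([s])           # nodes reachable in the residual graph
    while q:
        u = q.popleft()
        for a, b in cap:
            if a == u and b not in side and residual(a, b) > 0:
                side.add(b)
                q.append(b)
    return side

def segment(pixels, fg, bg, smooth=1):
    """Label each pixel 1 (object) or 0 (background) via a min cut."""
    cap = defaultdict(int)
    for i, p in enumerate(pixels):
        cap[("s", i)] += (p - bg) ** 2   # paid if i ends up background
        cap[(i, "t")] += (p - fg) ** 2   # paid if i ends up object
        if i + 1 < len(pixels):          # smoothness between neighbours
            cap[(i, i + 1)] += smooth
            cap[(i + 1, i)] += smooth
    src = max_flow(cap, "s", "t")
    return [1 if i in src else 0 for i in range(len(pixels))]
```

The paper replaces the simple intensity-difference unary terms used here with energies derived from user-taught texture patterns, but the cut machinery is the same.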

  18. An Annotated Bibliography on Silicon Nitride for Structural Applications

    Science.gov (United States)

    1977-03-01

    annotated in this bibliography with each entry under the name of the specific author. 16. Canteloup, J., and Mocellin, A., "Synthesis of...thinning. Oxidation of the Si3N4 grains started at the grain boundaries. 81. Torre, J. P., and Mocellin, A., "On the Existence of Si-Al-O-N Solid...Torre, J. P., and Mocellin, A., "Some Effects of Al and O2 on the Nitridation of Silicon Compacts", J. Mater. Sci., 11, 1725-1733 (1976). Highest final

  19. Automatically Annotated Mapping for Indoor Mobile Robot Applications

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Howard, Thomas J.

    2012-01-01

    of 2D visual tags allows encoding information physically at places-of-interest. Moreover, using physical characteristics of the visual tags (i.e. paper size) is exploited to recover relative poses of the tags in the environment using a simple camera. This method extends tag encoding to simultaneous......This paper presents a new and practical method for mapping and annotating indoor environments for mobile robot use. The method makes use of 2D occupancy grid maps for metric representation, and topology maps to indicate the connectivity of the ‘places-of-interests’ in the environment. Novel use...

  20. Annotating image ROIs with text descriptions for multimodal biomedical document retrieval

    Science.gov (United States)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Regions of interest (ROIs) that are pointed to by overlaid markers (arrows, asterisks, etc.) in biomedical images are expected to contain more important and relevant information than other regions for biomedical article indexing and retrieval. We have developed several algorithms that localize and extract the ROIs by recognizing markers on images. Cropped ROIs then need to be annotated with the content that describes them best. In most cases accurate textual descriptions of the ROIs can be found in figure captions, and these need to be combined with image ROIs for annotation. The annotated ROIs can then be used, for example, to train classifiers that separate ROIs into known categories (medical concepts), or to build visual ontologies, for indexing and retrieval of biomedical articles. We propose an algorithm that pairs visual and textual ROIs that are extracted from images and figure captions, respectively. This algorithm, based on dynamic time warping (DTW), clusters recognized pointers into groups, each of which contains pointers with identical visual properties (shape, size, color, etc.). A rule-based matching algorithm then finds the best matching group for each textual ROI mention. Our method yields a precision and recall of 96% and 79%, respectively, when ground-truth textual ROI data is used.
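The pairing algorithm above rests on DTW comparisons between pointer features. A generic DTW distance and a greedy threshold-based grouping (both illustrative; the threshold and feature representation are assumptions for the example, not taken from the paper) can be sketched as:

```python
# A minimal dynamic time warping (DTW) distance, the sequence-comparison
# primitive the pointer-grouping step above is built on. Here pointer
# outlines are 1-D feature sequences; thresholding the DTW distance
# groups pointers of similar shape.

def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[len(a)][len(b)]

def group_by_shape(curves, threshold):
    """Greedy grouping: join a curve to the first group within threshold."""
    groups = []
    for c in curves:
        for g in groups:
            if dtw(c, g[0]) <= threshold:
                g.append(c)
                break
        else:
            groups.append([c])
    return groups
```

DTW tolerates the local stretching and compression that arrow outlines of different sizes exhibit, which a plain pointwise distance would not.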

  1. Crowdsourcing image annotation for nucleus detection and segmentation in computational pathology: evaluating experts, automated methods, and the crowd.

    Science.gov (United States)

    Irshad, H; Montaser-Kouhsari, L; Waltz, G; Bucur, O; Nowak, J A; Dong, F; Knoblauch, N W; Beck, A H

    2015-01-01

    The development of tools in computational pathology to assist physicians and biomedical scientists in the diagnosis of disease requires access to high-quality annotated images for algorithm learning and evaluation. Generating high-quality expert-derived annotations is time-consuming and expensive. We explore the use of crowdsourcing for rapidly obtaining annotations for two core tasks in computational pathology: nucleus detection and nucleus segmentation. We designed and implemented crowdsourcing experiments using the CrowdFlower platform, which provides access to a large set of labor channel partners that accesses and manages millions of contributors worldwide. We obtained annotations from four types of annotators and compared concordance across these groups. We obtained: crowdsourced annotations for nucleus detection and segmentation on a total of 810 images; annotations using automated methods on 810 images; annotations from research fellows for detection and segmentation on 477 and 455 images, respectively; and expert pathologist-derived annotations for detection and segmentation on 80 and 63 images, respectively. For the crowdsourced annotations, we evaluated performance across a range of contributor skill levels (1, 2, or 3). The crowdsourced annotations (4,860 images in total) were completed in only a fraction of the time and cost required for obtaining annotations using traditional methods. For the nucleus detection task, the research fellow-derived annotations showed the strongest concordance with the expert pathologist-derived annotations (F-M = 93.68%), followed by the crowdsourced contributor levels 1, 2, and 3 and the automated method, which showed relatively similar performance (F-M = 87.84%, 88.49%, 87.26%, and 86.99%, respectively). For the nucleus segmentation task, the crowdsourced contributor level 3-derived annotations, research fellow-derived annotations, and automated method showed the strongest concordance with the expert pathologist
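The F-measure (F-M) concordance reported above can be illustrated with a small detection-matching sketch; the greedy matching strategy and the 5-pixel radius below are assumptions for the example, not the study's actual criterion:

```python
# Hedged sketch of detection concordance: greedily match detected
# nucleus centres to ground-truth centres within a pixel tolerance,
# then report the F-measure of precision and recall.
import math

def f_measure(detections, truth, radius=5.0):
    """F-measure of centre detections against ground-truth centres."""
    unmatched = list(truth)
    tp = 0
    for d in detections:
        best = min(unmatched, key=lambda t: math.dist(d, t), default=None)
        if best is not None and math.dist(d, best) <= radius:
            unmatched.remove(best)   # each truth centre matches at most once
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(truth) if truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Scoring every annotator group against the expert pathologist set with the same function is what makes the concordance figures directly comparable.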

  2. A method for increasing the accuracy of image annotating in crowd-sourcing

    OpenAIRE

    Nurmukhametov, O.R.; Baklanov, A.

    2016-01-01

    Crowdsourcing is a new approach to solving tasks in which a group of volunteers replaces experts. Recent results show that crowdsourcing is an efficient tool for annotating large datasets. Geo-Wiki is an example of a successful citizen science project. The goal of the Geo-Wiki project is to improve a global land cover map by applying crowdsourcing to image recognition. In our research, we investigate methods for increasing the reliability of data collected during The Cropland Capture Game (Geo-Wiki). In th...

  3. Informatics in radiology: An open-source and open-access cancer biomedical informatics grid annotation and image markup template builder.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Channin, David S; Kleper, Vladimir; Rubin, Daniel L

    2012-01-01

    In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and image markup (AIM), a project supported by the National Cancer Institute's cancer biomedical informatics grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.

  4. Meta4: a web application for sharing and annotating metagenomic gene predictions using web services.

    Science.gov (United States)

    Richardson, Emily J; Escalettes, Franck; Fotheringham, Ian; Wallace, Robert J; Watson, Mick

    2013-01-01

    Whole-genome shotgun metagenomics experiments produce DNA sequence data from entire ecosystems, and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins to be presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front-end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website, code is available on Github, a cloud image is available, and an example implementation can be seen at.

  5. Visual Interpretation with Three-Dimensional Annotations (VITA): three-dimensional image interpretation tool for radiological reporting.

    Science.gov (United States)

    Roy, Sharmili; Brown, Michael S; Shih, George L

    2014-02-01

    This paper introduces a software framework called Visual Interpretation with Three-Dimensional Annotations (VITA) that is able to automatically generate three-dimensional (3D) visual summaries based on radiological annotations made during routine exam reporting. VITA summaries are in the form of rotating 3D volumes where radiological annotations are highlighted to place important clinical observations into a 3D context. The rendered volume is produced as a Digital Imaging and Communications in Medicine (DICOM) object and is automatically added to the study for archival in Picture Archiving and Communication System (PACS). In addition, a video summary (e.g., MPEG4) can be generated for sharing with patients and for situations where DICOM viewers are not readily available to referring physicians. The current version of VITA is compatible with ClearCanvas; however, VITA can work with any PACS workstation that has a structured annotation implementation (e.g., Extendible Markup Language, Health Level 7, Annotation and Image Markup) and is able to seamlessly integrate into the existing reporting workflow. In a survey with referring physicians, the vast majority strongly agreed that 3D visual summaries improve the communication of the radiologists' reports and aid communication with patients.

  6. Applications of optical imaging

    International Nuclear Information System (INIS)

    Schellenberger, E.

    2005-01-01

    Optical imaging in the form of near infrared fluorescence and bioluminescence has proven useful for a wide range of applications in the field of molecular imaging. Both techniques provide a high sensitivity (in the nanomolar range), which is of particular importance for molecular imaging. Imaging with near infrared fluorescence is especially cost-effective and can be performed, in contrast to radioactivity-based methods, with fluorescence dyes that remain stable for months. The most important advantage of bioluminescence, in turn, is the lack of background signal. Although molecular imaging with these techniques is still in the experimental phase, an application of near infrared fluorescence is already foreseeable for the imaging of superficial structures. (orig.)

  7. Workflow and web application for annotating NCBI BioProject transcriptome data.

    Science.gov (United States)

    Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A; Barrero, Luz S; Landsman, David; Mariño-Ramírez, Leonardo

    2017-01-01

    The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. URL: http://www.ncbi.nlm.nih.gov/projects/physalis/. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  8. Emerging applications of read profiles towards the functional annotation of the genome

    DEFF Research Database (Denmark)

    Pundhir, Sachin; Poirazi, Panayiota; Gorodkin, Jan

    2015-01-01

    is typically a result of the protocol designed to address specific research questions. The sequencing results in reads, which when mapped to a reference genome often leads to the formation of distinct patterns (read profiles). Interpretation of these read profiles is essential for their analysis in relation...... to the research question addressed. Several strategies have been employed at varying levels of abstraction ranging from a somewhat ad hoc to a more systematic analysis of read profiles. These include methods which can compare read profiles, e.g., from direct (non-sequence based) alignments to classification...... of patterns into functional groups. In this review, we highlight the emerging applications of read profiles for the annotation of non-coding RNA and cis-regulatory elements (CREs) such as enhancers and promoters. We also discuss the biological rationale behind their formation....

  9. Genomic variant annotation workflow for clinical applications [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Thomas Thurnherr

    2016-10-01

    Full Text Available Annotation and interpretation of DNA aberrations identified through next-generation sequencing is becoming an increasingly important task, even more so in the context of data analysis pipelines for medical applications, where genomic aberrations are associated with phenotypic and clinical features. Here we describe a workflow to identify potential gene targets in aberrated genes or pathways and their corresponding drugs. To this end, we provide the R/Bioconductor package rDGIdb, an R wrapper to query the drug-gene interaction database (DGIdb). DGIdb accumulates drug-gene interaction data from 15 different resources and allows filtering on different levels. The rDGIdb package makes these resources and tools available to R users. Moreover, rDGIdb queries can be automated through incorporation of the rDGIdb package into NGS sequencing pipelines.

  10. Transcription and Annotation of a Japanese Accented Spoken Corpus of L2 Spanish for the Development of CAPT Applications

    Science.gov (United States)

    Carranza, Mario

    2016-01-01

    This paper addresses the process of transcribing and annotating spontaneous non-native speech with the aim of compiling a training corpus for the development of Computer Assisted Pronunciation Training (CAPT) applications, enhanced with Automatic Speech Recognition (ASR) technology. To better adapt ASR technology to CAPT tools, the recognition…

  11. Extending in silico mechanism-of-action analysis by annotating targets with pathways: application to cellular cytotoxicity readouts.

    Science.gov (United States)

    Liggi, Sonia; Drakakis, Georgios; Koutsoukas, Alexios; Cortes-Ciriano, Isidro; Martínez-Alonso, Patricia; Malliavin, Thérèse E; Velazquez-Campoy, Adrian; Brewerton, Suzanne C; Bodkin, Michael J; Evans, David A; Glen, Robert C; Carrodeguas, José Alberto; Bender, Andreas

    2014-01-01

    An in silico mechanism-of-action analysis protocol was developed, comprising molecule bioactivity profiling, annotation of predicted targets with pathways, and calculation of enrichment factors to highlight targets and pathways more likely to be implicated in the studied phenotype. The method was applied to a cytotoxicity phenotypic endpoint, with enriched targets/pathways found to be statistically significant when compared with 100 random datasets. Application to a smaller apoptotic set (10 molecules) did not yield statistically significant results, suggesting that the protocol requires modifications such as analysis of the most frequently predicted targets/annotated pathways. Pathway annotations improved the mechanism-of-action information gained by target prediction alone, allowing a better interpretation of the predictions and providing better mapping of targets onto pathways.
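The enrichment factors mentioned above compare how often a target or pathway is predicted for the studied molecules versus its background prediction frequency. A minimal version of that ratio (the counts in the usage note are illustrative, not the study's data):

```python
# Simple enrichment factor: how over-represented a predicted target (or
# its annotated pathway) is among the phenotype's molecules compared
# with the background rate across random molecule sets.

def enrichment_factor(hits_in_set, set_size, hits_in_background, background_size):
    """Ratio of the target's frequency in the active set vs. background."""
    set_rate = hits_in_set / set_size
    background_rate = hits_in_background / background_size
    return set_rate / background_rate
```

For example, a target predicted for 8 of 20 cytotoxic molecules but only 10 of 100 background molecules has an enrichment factor of 4, flagging it for closer inspection.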

  12. GSV Annotated Bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, Randy S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, Paul A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jiang, Ming [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Trucano, Timothy G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aragon, Cecilia R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ni, Kevin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wei, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States); Chilton, Lawrence K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bakel, Alan [Argonne National Lab. (ANL), Argonne, IL (United States)

    2010-09-14

    The following annotated bibliography was developed as part of the geospatial algorithm verification and validation (GSV) project for the Simulation, Algorithms and Modeling program of NA-22. Verification and Validation of geospatial image analysis algorithms covers a wide range of technologies. Papers in the bibliography are thus organized into the following five topic areas: Image processing and analysis, usability and validation of geospatial image analysis algorithms, image distance measures, scene modeling and image rendering, and transportation simulation models. Many other papers were also studied during the course of the investigation. The annotations for these articles can be found in the paper "On the verification and validation of geospatial image analysis algorithms".

  13. Improving knowledge management through the support of image examination and data annotation using DICOM structured reporting.

    Science.gov (United States)

    Torres, José Salavert; Damian Segrelles Quilis, J; Espert, Ignacio Blanquer; García, Vicente Hernandez

    2012-12-01

    An important effort has been invested in improving the image diagnosis process in different medical areas using information technologies. The field of medical imaging involves two main data types: medical imaging and reports. Developments based on the DICOM standard have proven to be a convenient and widespread solution among the medical community. The main objective of this work is to design a Web application prototype that will be able to improve diagnosis and follow-up of breast cancer patients. It is based on TRENCADIS middleware, which provides a knowledge-oriented storage model composed of federated repositories of DICOM image studies and DICOM-SR medical reports. The full structure and contents of the diagnosis reports are used as metadata for indexing images. The TRENCADIS infrastructure takes full advantage of Grid technologies by deploying multi-resource grid services that enable multiple views (reports schemes) of the knowledge database. The paper presents a real deployment of such Web application prototype in the Dr. Peset Hospital providing radiologists with a tool to create, store and search diagnostic reports based on breast cancer explorations (mammography, magnetic resonance, ultrasound, pre-surgery biopsy and post-surgery biopsy), improving support for diagnostics decisions. Technical details of the use cases (outlining enhanced multi-resource grid service communication and processing steps) and interactions between actors and the deployed prototype are described. As a result, information is more structured, the logic is clearer, network messages have been reduced and, in general, the system is more resistant to failures. Copyright © 2012 Elsevier Inc. All rights reserved.
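
    The indexing idea described above — using the fields of DICOM-SR diagnosis reports as metadata for retrieving image studies — can be sketched as a simple in-memory lookup. Field names and values here are hypothetical; the actual TRENCADIS system stores reports and studies in federated Grid repositories:

```python
# Hedged sketch of report-driven image retrieval. The report fields
# ("exploration", "birads") and study UIDs are invented for illustration.

reports = [
    {"study_uid": "1.2.3", "exploration": "mammography", "birads": "4"},
    {"study_uid": "1.2.4", "exploration": "ultrasound",  "birads": "2"},
    {"study_uid": "1.2.5", "exploration": "mammography", "birads": "2"},
]

def search(index, **criteria):
    """Return study UIDs whose report fields match all given criteria."""
    return [r["study_uid"] for r in index
            if all(r.get(k) == v for k, v in criteria.items())]

print(search(reports, exploration="mammography", birads="2"))  # ['1.2.5']
```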

  14. Reasoning with Annotations of Texts

    OpenAIRE

    Ma , Yue; Lévy , François; Ghimire , Sudeep

    2011-01-01

    Linguistic and semantic annotations are important features for text-based applications. However, achieving and maintaining a good quality of a set of annotations is known to be a complex task. Many ad hoc approaches have been developed to produce various types of annotations, while comparing those annotations to improve their quality is still rare. In this paper, we propose a framework in which both linguistic and domain information can cooperate to reason with annotat...

  15. Plann: A command-line application for annotating plastome sequences1

    Science.gov (United States)

    Huang, Daisie I.; Cronk, Quentin C. B.

    2015-01-01

    Premise of the study: Plann automates the process of annotating a plastome sequence in GenBank format for either downstream processing or for GenBank submission by annotating a new plastome based on a similar, well-annotated plastome. Methods and Results: Plann is a Perl script to be executed on the command line. Plann compares a new plastome sequence to the features annotated in a reference plastome and then shifts the intervals of any matching features to the locations in the new plastome. Plann’s output can be used in the National Center for Biotechnology Information’s tbl2asn to create a Sequin file for GenBank submission. Conclusions: Unlike Web-based annotation packages, Plann is a locally executable script that will accurately annotate a plastome sequence to a locally specified reference plastome. Because it executes from the command line, it is ready to use in other software pipelines and can be easily rerun as a draft plastome is improved. PMID:26312193
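
    Plann's core operation — shifting the intervals of matched reference features to their locations in the new plastome — can be sketched as follows. Plann itself is a Perl script; this Python illustration and its offset computation are simplified assumptions, not the tool's actual code:

```python
# Hedged sketch of coordinate shifting: a feature annotated on a reference
# plastome is moved into the new plastome by the offset of its matched
# location. Coordinates are hypothetical.

def shift_feature(ref_start, ref_end, match_start_in_new):
    """Move a reference interval so it starts where its match begins."""
    offset = match_start_in_new - ref_start
    return ref_start + offset, ref_end + offset

# A gene annotated at 1200..2400 in the reference, whose sequence matches
# position 1350 in the new plastome, is re-annotated at 1350..2550.
print(shift_feature(1200, 2400, 1350))  # (1350, 2550)
```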

  16. Tile-Level Annotation of Satellite Images Using Multi-Level Max-Margin Discriminative Random Field

    Directory of Open Access Journals (Sweden)

    Hong Sun

    2013-05-01

    This paper proposes a multi-level max-margin discriminative analysis (M3DA) framework, which takes both coarse and fine semantics into consideration, for the annotation of high-resolution satellite images. In order to generate more discriminative topic-level features, the M3DA uses the maximum entropy discrimination latent Dirichlet allocation (MedLDA) model. Moreover, to improve the spatial coherence of visual words neglected by M3DA, a conditional random field (CRF) is employed to optimize the soft label field composed of multiple label posteriors. The framework of M3DA enables one to combine word-level features (generated by support vector machines) and topic-level features (generated by MedLDA) via the bag-of-words representation. The experimental results on high-resolution satellite images have demonstrated that the proposed method can not only obtain suitable semantic interpretation, but also improve the annotation performance by taking into account the multi-level semantics and the contextual information.
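
    The bag-of-words representation mentioned above can be sketched as follows: each local descriptor is assigned to its nearest codeword, and a tile is summarized by a normalized word histogram. The codebook and descriptors below are toy values, not the paper's actual features:

```python
# Hedged sketch of the bag-of-words step: quantize local descriptors to
# their nearest visual word and build a normalized histogram per tile.
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each descriptor to its nearest codeword; return word frequencies."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
descriptors = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.0, 0.2]])
print(bow_histogram(descriptors, codebook))  # [0.5 0.5]
```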

  17. Update of identification and estimation of socioeconomic impacts resulting from perceived risks and changing images: An annotated bibliography

    International Nuclear Information System (INIS)

    Nieves, L.A.; Clark, D.E.; Wernette, D.

    1991-08-01

    This annotated bibliography reviews selected literature published through August 1991 on the identification of perceived risks and methods for estimating the economic impacts of risk perception. It updates the literature review found in Argonne National Laboratory report ANL/EAIS/TM-24 (February 1990). Included in this update are (1) a literature review of the risk perception process, of the relationship between risk perception and economic impacts, of economic methods and empirical applications, and of interregional market interactions and adjustments; (2) a working bibliography (that includes the documents abstracted in the 1990 report); (3) a topical index to the abstracts found in both reports; and (4) abstracts of selected articles found in this update.

  18. Update of identification and estimation of socioeconomic impacts resulting from perceived risks and changing images: An annotated bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Nieves, L.A.; Clark, D.E.; Wernette, D.

    1991-08-01

    This annotated bibliography reviews selected literature published through August 1991 on the identification of perceived risks and methods for estimating the economic impacts of risk perception. It updates the literature review found in Argonne National Laboratory report ANL/EAIS/TM-24 (February 1990). Included in this update are (1) a literature review of the risk perception process, of the relationship between risk perception and economic impacts, of economic methods and empirical applications, and of interregional market interactions and adjustments; (2) a working bibliography (that includes the documents abstracted in the 1990 report); (3) a topical index to the abstracts found in both reports; and (4) abstracts of selected articles found in this update.

  19. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based feature vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
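
    The retrieval accuracy reported above is the Normalized Discounted Cumulative Gain (NDCG). A minimal sketch, using the common gain/discount form (the Liver Annotation Task's exact variant may differ):

```python
# Hedged sketch of NDCG over a ranked list of graded relevances:
# DCG discounts each gain by log2(rank + 1), and NDCG divides by the
# DCG of the ideal (descending) ordering.
import math

def ndcg(relevances):
    """NDCG of a ranking, given the relevance grade at each rank (rank 1 first)."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(round(ndcg([3, 2, 3, 0, 1]), 3))  # 0.972
print(ndcg([3, 3, 2, 1, 0]))            # 1.0 for an ideally ordered list
```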

  20. GSV Annotated Bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, Randy S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, Paul A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jiang, Ming [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Trucano, Timothy G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aragon, Cecilia R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ni, Kevin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wei, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States); Chilton, Lawrence K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bakel, Alan [Argonne National Lab. (ANL), Argonne, IL (United States)

    2011-06-14

    The following annotated bibliography was developed as part of the Geospatial Algorithm Verification and Validation (GSV) project for the Simulation, Algorithms and Modeling program of NA-22. Verification and Validation of geospatial image analysis algorithms covers a wide range of technologies. Papers in the bibliography are thus organized into the following five topic areas: Image processing and analysis, usability and validation of geospatial image analysis algorithms, image distance measures, scene modeling and image rendering, and transportation simulation models.

  1. What do we do with all this video? Better understanding public engagement for image and video annotation

    Science.gov (United States)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions is being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that make their entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data, will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  2. Why Web Pages Annotation Tools Are Not Killer Applications? A New Approach to an Old Problem.

    Science.gov (United States)

    Ronchetti, Marco; Rizzi, Matteo

    The idea of annotating Web pages is not a new one: early proposals date back to 1994. A tool providing the ability to add notes to a Web page, and to share the notes with other users seems to be particularly well suited to an e-learning environment. Although several tools already provide such possibility, they are not widely popular. This paper…

  3. Automated annotation of functional imaging experiments via multi-label classification

    Directory of Open Access Journals (Sweden)

    Matthew D Turner

    2013-12-01

    Identifying the experimental methods in human neuroimaging papers is important for grouping meaningfully similar experiments for meta-analyses. Currently, this can only be done by human readers. We present the performance of common machine learning (text mining) methods applied to the problem of automatically classifying or labeling this literature. Labeling terms are from the Cognitive Paradigm Ontology (CogPO), the text corpora are abstracts of published functional neuroimaging papers, and the methods use the performance of a human expert as training data. We aim to replicate the expert's annotation of multiple labels per abstract identifying the experimental stimuli, cognitive paradigms, response types, and other relevant dimensions of the experiments. We use several standard machine learning methods: naive Bayes, k-nearest neighbor, and support vector machines (specifically SMO, or sequential minimal optimization). Exact match performance ranged from only 15% in the worst cases to 78% in the best cases. Naive Bayes methods combined with binary relevance transformations performed strongly and were robust to overfitting. This collection of results demonstrates what can be achieved with off-the-shelf software components and little to no pre-processing of raw text.
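
    The binary relevance transformation mentioned above splits a multi-label problem into one independent yes/no classifier per label. A minimal sketch follows; the per-label "classifier" here is a toy keyword-overlap rule standing in for the paper's naive Bayes models, and all data are invented:

```python
# Hedged sketch of binary relevance: one independent binary decision per
# label. The keyword-overlap rule and training texts are illustrative only.

def train_binary_relevance(docs, label_sets, all_labels):
    """For each label, remember the words seen in its positive documents."""
    model = {}
    for label in all_labels:
        words = set()
        for doc, labels in zip(docs, label_sets):
            if label in labels:
                words |= set(doc.split())
        model[label] = words
    return model

def predict(model, doc, min_overlap=2):
    """Assign every label whose positive vocabulary overlaps the document enough."""
    tokens = set(doc.split())
    return {lab for lab, words in model.items()
            if len(tokens & words) >= min_overlap}

docs = ["visual stimulus button press", "auditory tone passive listening"]
labels = [{"visual", "motor"}, {"auditory"}]
model = train_binary_relevance(docs, labels, {"visual", "motor", "auditory"})
print(sorted(predict(model, "visual stimulus with button response")))  # ['motor', 'visual']
```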

  4. Molecular imaging. Fundamentals and applications

    International Nuclear Information System (INIS)

    Tian, Jie

    2013-01-01

    Covers a wide range of new theory, new techniques and new applications. Contributed by many experts in China. The editor has obtained the National Science and Technology Progress Award twice. ''Molecular Imaging: Fundamentals and Applications'' is a comprehensive monograph which describes not only the theory of the underlying algorithms and key technologies but also introduces a prototype system and its applications, bringing together theory, technology and applications. By explaining the basic concepts and principles of molecular imaging, imaging techniques, as well as research and applications in detail, the book provides both detailed theoretical background information and technical methods for researchers working in medical imaging and the life sciences. Clinical doctors and graduate students will also benefit from this book.

  5. Visual Genome: Connecting language and vision using crowdsourced dense image annotations

    NARCIS (Netherlands)

    R. Krishna (Ranjay); Y. Zhu (Yuke); O. Groth (Oliver); J. Johnson (Justin); K. Hata (Kenji); J. Kravitz (Joshua); S. Chen (Stephanie); Y. Kalantidis (Yannis); L.-J. Li (Li-Jia); D.A. Shamma (David); M.S. Bernstein (Michael); L. Fei-Fei (Li)

    2017-01-01

    textabstractDespite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used

  6. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    Science.gov (United States)

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
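
    The automatic SPARQL generation step can be sketched as follows: given relational columns annotated with ontology properties, a SELECT query is assembled over the RDF view. The mapping and property URIs below are invented, and BioSemantic's actual RDF views and SAWSDL annotations are richer than this:

```python
# Hedged sketch of generating a SPARQL SELECT from a column-to-property
# mapping. Class and property URIs are hypothetical examples.

def build_sparql(subject_class, column_properties):
    """Build a SELECT query over columns mapped to ontology properties."""
    vars_ = " ".join(f"?{col}" for col in column_properties)
    patterns = " . ".join(f"?s <{prop}> ?{col}"
                          for col, prop in column_properties.items())
    return f"SELECT {vars_} WHERE {{ ?s a <{subject_class}> . {patterns} }}"

query = build_sparql(
    "http://example.org/onto#Gene",
    {"name": "http://example.org/onto#geneName",
     "chrom": "http://example.org/onto#chromosome"},
)
print(query)
```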

  7. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    Science.gov (United States)

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394

  8. Application of image editing software for forensic detection of image ...

    African Journals Online (AJOL)

    Application of image editing software for forensic detection of image. ... The image editing software available today is apt for creating visually compelling and sophisticated fake images, ...

  9. Terahertz Sensing, Imaging and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Otani, C.; Hoshing, H.; Sasaki, Y.; Maki, K.; Hayashi, A. [RIKEN Advanced Science Institute, Sendai (Japan)

    2008-11-15

    Diagnosis using terahertz (THz) waves holds great potential for applications in various fields because of their transmittance through many soft materials with good spatial resolution. In addition, the presence of specific spectral absorption features of crystalline materials is also important for many applications. Such features differ from material to material and are applicable for identifying materials inside packages that are opaque to visible light. One of the most impressive examples of such applications is the detection of illicit drugs inside envelopes. In this talk, we will present our recent topics in THz sensing, imaging and applications, including this example. We will also present cancer diagnosis, an application of the photonic crystal to high-sensitivity detection, and gas spectroscopy if we have enough time. We would also like to briefly review recent topics related to THz applications.

  10. Terahertz Sensing, Imaging and Applications

    International Nuclear Information System (INIS)

    Otani, C.; Hoshing, H.; Sasaki, Y.; Maki, K.; Hayashi, A.

    2008-01-01

    Diagnosis using terahertz (THz) waves holds great potential for applications in various fields because of their transmittance through many soft materials with good spatial resolution. In addition, the presence of specific spectral absorption features of crystalline materials is also important for many applications. Such features differ from material to material and are applicable for identifying materials inside packages that are opaque to visible light. One of the most impressive examples of such applications is the detection of illicit drugs inside envelopes. In this talk, we will present our recent topics in THz sensing, imaging and applications, including this example. We will also present cancer diagnosis, an application of the photonic crystal to high-sensitivity detection, and gas spectroscopy if we have enough time. We would also like to briefly review recent topics related to THz applications.

  11. Industrial Applications of Image Processing

    Science.gov (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of some image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are presented. Such implementations include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  12. Physics for Medical Imaging Applications

    CERN Document Server

    Caner, Alesssandra; Rahal, Ghita

    2007-01-01

    The book introduces the fundamental aspects of digital imaging and covers four main themes: Ultrasound techniques and imaging applications; Magnetic resonance and MPJ in hospital; Digital imaging with X-rays; and Emission tomography (PET and SPECT). Each of these topics is developed by analysing the underlying physics principles and their implementation, quality and safety aspects, clinical performance and recent advancements in the field. Some issues specific to the individual techniques are also treated, e.g. choice of radioisotopes or contrast agents, optimisation of data acquisition and st

  13. Biomedical Imaging Principles and Applications

    CERN Document Server

    Salzer, Reiner

    2012-01-01

    This book presents and describes imaging technologies that can be used to study chemical processes and structural interactions in dynamic systems, principally in biomedical systems. The imaging technologies, largely biomedical imaging technologies such as MRT, fluorescence mapping, Raman mapping, nanoESCA, and CARS microscopy, have been selected according to their application range and to the chemical information content of their data. These technologies allow for the analysis and evaluation of delicate biological samples, which must not be disturbed during the process. Ultimately, this may me

  14. EEG-Annotate: Automated identification and labeling of events in continuous signals with applications to EEG.

    Science.gov (United States)

    Su, Kyung-Min; Hairston, W David; Robbins, Kay

    2018-01-01

    In controlled laboratory EEG experiments, researchers carefully mark events and analyze subject responses time-locked to these events. Unfortunately, such markers may not be available or may come with poor timing resolution for experiments conducted in less-controlled naturalistic environments. We present an integrated event-identification method for identifying particular responses that occur in unlabeled continuously recorded EEG signals based on information from recordings of other subjects potentially performing related tasks. We introduce the idea of timing slack and timing-tolerant performance measures to deal with jitter inherent in such non-time-locked systems. We have developed an implementation available as an open-source MATLAB toolbox (http://github.com/VisLab/EEG-Annotate) and have made test data available in a separate data note. We applied the method to identify visual presentation events (both target and non-target) in data from an unlabeled subject using labeled data from other subjects with good sensitivity and specificity. The method also identified actual visual presentation events in the data that were not previously marked in the experiment. Although the method uses traditional classifiers for initial stages, the problem of identifying events based on the presence of stereotypical EEG responses is the converse of the traditional stimulus-response paradigm and has not been addressed in its current form. In addition to identifying potential events in unlabeled or incompletely labeled EEG, these methods also allow researchers to investigate whether particular stereotypical neural responses are present in other circumstances. Timing-tolerance has the added benefit of accommodating inter- and intra-subject timing variations. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
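
    A timing-tolerant hit criterion of the kind described above can be sketched as follows: a predicted event counts as a hit if it falls within a slack window of an unmatched true event. The slack value and the greedy matching are illustrative assumptions, not the toolbox's exact algorithm:

```python
# Hedged sketch of timing-tolerant event matching: predictions within
# +/- slack seconds of an unmatched true event count as hits. Event times
# are hypothetical.

def timing_tolerant_hits(predicted, actual, slack=0.1):
    """Greedily match predictions to true events within +/- slack seconds."""
    unmatched = sorted(actual)
    hits = 0
    for p in sorted(predicted):
        for t in unmatched:
            if abs(p - t) <= slack:
                hits += 1
                unmatched.remove(t)
                break
    return hits

preds = [1.02, 2.50, 4.00]
truth = [1.00, 2.45, 3.10]
print(timing_tolerant_hits(preds, truth, slack=0.1))  # 2 hits; 4.00 is a false alarm
```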

  15. Raman Imaging Techniques and Applications

    CERN Document Server

    2012-01-01

    Raman imaging has long been used to probe the chemical nature of a sample, providing information on molecular orientation, symmetry and structure with sub-micron spatial resolution. Recent technical developments have pushed the limits of micro-Raman microscopy, enabling the acquisition of Raman spectra with unprecedented speed, and opening a pathway to fast chemical imaging for many applications from material science and semiconductors to pharmaceutical drug development and cell biology, and even art and forensic science. The promise of tip-enhanced raman spectroscopy (TERS) and near-field techniques is pushing the envelope even further by breaking the limit of diffraction and enabling nano-Raman microscopy.

  16. Biomedical Optical Imaging Technologies Design and Applications

    CERN Document Server

    2013-01-01

    This book provides an introduction to design of biomedical optical imaging technologies and their applications. The main topics include: fluorescence imaging, confocal imaging, micro-endoscope, polarization imaging, hyperspectral imaging, OCT imaging, multimodal imaging and spectroscopic systems. Each chapter is written by the world leaders of the respective fields, and will cover: principles and limitations of optical imaging technology, system design and practical implementation for one or two specific applications, including design guidelines, system configuration, optical design, component requirements and selection, system optimization and design examples, recent advances and applications in biomedical researches and clinical imaging. This book serves as a reference for students and researchers in optics and biomedical engineering.

  17. Wavelets: Applications to Image Compression-II

    Indian Academy of Sciences (India)

    Wavelets: Applications to Image Compression-II. Sachin P ... successful application of wavelets in image com- ... b) Soft threshold: In this case, all the coefficients x ..... [8] http://www.jpeg.org Official site of the Joint Photographic Experts Group.
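
    The soft-threshold rule named in the snippet ("b) Soft threshold") is commonly written as sign(x) * max(|x| - t, 0): wavelet coefficients are shrunk toward zero by the threshold t, and those with magnitude below t are zeroed. A minimal sketch (the article's exact notation is not preserved in this search snippet):

```python
# Hedged sketch of soft thresholding of wavelet coefficients,
# the standard shrinkage rule sign(x) * max(|x| - t, 0).

def soft_threshold(x, t):
    """Shrink coefficient x toward zero by threshold t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

coeffs = [5.0, 0.4, -2.0, -0.1]
print([soft_threshold(c, 0.5) for c in coeffs])  # [4.5, 0.0, -1.5, 0.0]
```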

  18. Wavelets: Applications to Image Compression-I

    Indian Academy of Sciences (India)

    form (OWl). Digital imaging has had an enormous impact on ... Digital images have become an important source of in- ... media applications and is the focus of this article. ..... Theory and Applications, Pearson Education InC., Delhi, India, 2000.

  19. Annotated bibliography

    International Nuclear Information System (INIS)

    1997-08-01

    Under a cooperative agreement with the U.S. Department of Energy's Office of Science and Technology, Waste Policy Institute (WPI) is conducting a five-year research project to develop a research-based approach for integrating communication products in stakeholder involvement related to innovative technology. As part of the research, WPI developed this annotated bibliography, which contains almost 100 citations of articles/books/resources involving topics related to communication and public involvement aspects of deploying innovative cleanup technology. To compile the bibliography, WPI performed on-line literature searches (e.g., Dialog, International Association of Business Communicators, Public Relations Society of America, Chemical Manufacturers Association, etc.), consulted past years' proceedings of major environmental waste cleanup conferences (e.g., Waste Management), networked with professional colleagues and DOE sites to gather reports or case studies, and received input during the August 1996 Research Design Team meeting held to discuss the project's research methodology. Articles were selected for annotation based upon their perceived usefulness to the broad range of public involvement and communication practitioners.

  20. Annotation and Visualization in Android: An Application for Education and Real Time Information

    Directory of Open Access Journals (Sweden)

    Renato Barahona Neri

    2013-06-01

Full Text Available By using Augmented Reality applications, users can get more information while interacting with real objects. The popularity of smartphones and the ubiquity of Internet connectivity in modern devices offer the best combination for this kind of application, which can pull content from heterogeneous sources. The goal of this work is to show the architecture and a basic implementation of a prototype AR application that displays information (opinions about physical places) as comments overlaid on the place, left there by other users, and that also encourages in-situ content creation for collaboration. Such applications can also be used to improve the interaction between students and physical places, retrieving facts or associating quizzes with a specific location; tourism guides and product promotions are further examples.

  1. Fundamentals of thermodynamics and applications with historical annotations and many citations from Avogadro to Zermelo

    CERN Document Server

    Müller, Ingo

    2009-01-01

    Here is a systematic introduction into the fundamental ideas of thermodynamics at a somewhat advanced level. The book details many applications of the theory in the fields of engineering, physics, chemistry, physical chemistry, and materials science.

  2. ANNOTATION SUPPORTED OCCLUDED OBJECT TRACKING

    Directory of Open Access Journals (Sweden)

    Devinder Kumar

    2012-08-01

Full Text Available Tracking occluded objects at different depths has become an extremely important component of study for any video sequence, with wide applications in object tracking, scene recognition, coding, video editing and mosaicking. The paper studies the ability of annotation to track the occluded object based on pyramids with variation in depth, further establishing a threshold at which the system's ability to track the occluded object fails. Image annotation is applied to 3 similar video sequences varying in depth. In the experiment, one bike occludes the other at depths of 60 cm, 80 cm and 100 cm, respectively. Another experiment is performed on tracking humans at similar depths to authenticate the results. The paper also computes the frame-by-frame error incurred by the system, supported by detailed simulations. This system can be effectively used to analyze the error in motion tracking and further correct it, leading to flawless tracking. This can be of great interest to computer scientists designing surveillance systems.

  3. Use of annotated outlines to prepare guidance for license applications for the MRS and MGDS

    International Nuclear Information System (INIS)

    Roberts, J.; Griffin, W.R.

    1992-01-01

    This paper reports that the Office of Civilian Radioactive Waste Management (OCRWM) has embarked on an aggressive program to develop guidance for preparation of the License Applications for the Mined Geological Disposal System (MGDS) and Monitored Retrievable Storage (MRS). The endeavor is a team effort that will utilize personnel and funding from the Office of Systems and Compliance at DOE Headquarters, the MRS Project (i.e., DOE Office of Storage and Transportation) and the Yucca Mountain Project (i.e., DOE Office of Geologic Disposal). The endeavor was initiated in the Spring of 1991. It will continue via an iterative process until License Applications are completed for the MRS and MGDS projects

  4. ePNK Applications and Annotations: A Simulator for YAWL Nets

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2018-01-01

    The ePNK is an Eclipse based platform and framework for developing and integrating Petri net tools and applications. New types of Petri nets can be realized and plugged into the ePNK without any programming by simply providing a model of the concepts of the new Petri net type. Moreover, the ePNK ...

  5. Color imaging fundamentals and applications

    CERN Document Server

    Reinhard, Erik; Oguz Akyuz, Ahmet; Johnson, Garrett

    2008-01-01

    This book provides the reader with an understanding of what color is, where color comes from, and how color can be used correctly in many different applications. The authors first treat the physics of light and its interaction with matter at the atomic level, so that the origins of color can be appreciated. The intimate relationship between energy levels, orbital states, and electromagnetic waves helps to explain why diamonds shimmer, rubies are red, and the feathers of the Blue Jay are blue. Then, color theory is explained from its origin to the current state of the art, including image captu

  6. Planning applications in image analysis

    Science.gov (United States)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  7. Computer systems for annotation of single molecule fragments

    Science.gov (United States)

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.

  8. Hyperspectral imaging and its applications

    Science.gov (United States)

    Serranti, S.; Bonifazi, G.

    2016-04-01

Hyperspectral imaging (HSI) is an emerging technique that combines the imaging properties of a digital camera with the spectroscopic properties of a spectrometer, detecting the spectral attributes of each pixel in an image. These characteristics allow HSI to qualitatively and quantitatively evaluate the effects of the interactions of light with organic and/or inorganic materials. The results of this interaction are usually displayed as a spectral signature: a sequence of energy values, over a pre-defined wavelength interval, for each collected wavelength. Following this approach, it is thus possible to collect, in a fast and reliable way, spectral information strictly linked to the chemical-physical characteristics of the investigated materials and/or products. Considering that in a hyperspectral image the spectrum of each pixel can be analyzed, HSI can be considered one of the best nondestructive technologies for accurate and detailed information extraction. HSI can be applied in different wavelength regions, the most common being the visible (VIS: 400-700 nm), the near infrared (NIR: 1000-1700 nm) and the short wave infrared (SWIR: 1000-2500 nm). It can be applied to inspections from micro- to macro-scale, up to remote sensing. HSI produces a large amount of information due to the great number of contiguous spectral bands collected. Such an approach, when successful, is usually reliable, robust and characterized by lower costs than those usually associated with commonly applied off-line and/or on-line analytical approaches.
More and more applications have thus been developed and tested in recent years, especially in food inspection, with a large range of investigated products, such as fruits and vegetables, meat, fish, eggs and cereals, but also in medicine and the pharmaceutical sector, in cultural heritage, in material characterization and in
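As the abstract notes, every pixel of a hyperspectral cube carries a full spectrum that can be analyzed individually. A toy sketch of extracting one pixel's spectral signature (synthetic H x W x B cube, band-last layout assumed):

```python
# Toy hypercube as nested lists: 4 x 4 spatial pixels, 5 spectral bands (band-last layout)
H, W, B = 4, 4, 5
cube = [[[(r + c + b) / 10.0 for b in range(B)] for c in range(W)] for r in range(H)]

def pixel_spectrum(cube, row, col):
    """Return the spectral signature (one value per band) of a single pixel."""
    return cube[row][col]

sig = pixel_spectrum(cube, 2, 1)
print(len(sig))  # one value per spectral band
```

Real HSI pipelines would then compare such signatures against reference spectra to classify the material under each pixel.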

  9. Neutron imaging and applications a reference for the imaging community

    CERN Document Server

    McGreevy, Robert L; Bilheux, Hassina Z

    2009-01-01

    Offers an introduction to the basics of neutron beam production in addition to the wide scope of techniques that enhance imaging application capabilities. This title features a section that describes imaging single grains in polycrystalline materials, neutron imaging of geological materials and other materials science and engineering areas.

  10. Semantic annotation in biomedicine: the current landscape.

    Science.gov (United States)

    Jovanović, Jelena; Bagheri, Ebrahim

    2017-09-22

The abundance and unstructured nature of biomedical texts, be it clinical or research content, impose significant challenges for the effective and efficient use of information and knowledge stored in such texts. Annotation of biomedical documents with machine intelligible semantics facilitates advanced, semantics-based text management, curation, indexing, and search. This paper focuses on annotation of biomedical entity mentions with concepts from relevant biomedical knowledge bases such as UMLS. As a result, the meaning of those mentions is unambiguously and explicitly defined, and thus made readily available for automated processing. This process is widely known as semantic annotation, and the tools that perform it are known as semantic annotators. Over the last dozen years, the biomedical research community has invested significant efforts in the development of biomedical semantic annotation technology. Aiming to establish grounds for further developments in this area, we review a selected set of state-of-the-art biomedical semantic annotators, focusing particularly on general purpose annotators, that is, semantic annotation tools that can be customized to work with texts from any area of biomedicine. We also examine potential directions for further improvements of today's annotators which could make them even more capable of meeting the needs of real-world applications. To motivate and encourage further developments in this area, along the suggested and/or related directions, we review existing and potential practical applications and benefits of semantic annotators.
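At its core, the semantic annotation step described here maps textual mentions to knowledge-base concept identifiers. A deliberately naive dictionary-lookup sketch (toy lexicon with illustrative concept IDs, not any of the reviewed annotators, which use far more sophisticated matching):

```python
import re

# Toy lexicon mapping surface forms to illustrative concept identifiers
LEXICON = {"myocardial infarction": "C0027051", "aspirin": "C0004057"}

def annotate(text):
    """Return (span, surface form, concept id) for each lexicon match in the text."""
    hits = []
    for form, cui in LEXICON.items():
        for m in re.finditer(re.escape(form), text.lower()):
            hits.append((m.span(), form, cui))
    return sorted(hits)

print(annotate("Aspirin is given after myocardial infarction."))
```

Production annotators add tokenization, abbreviation expansion, and word-sense disambiguation on top of lookup, which is why their evaluation (as in this review) is nontrivial.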

  11. Luminescence imaging using radionuclides: a potential application in molecular imaging

    International Nuclear Information System (INIS)

    Park, Jeong Chan; Il An, Gwang; Park, Se-Il; Oh, Jungmin; Kim, Hong Joo; Su Ha, Yeong; Wang, Eun Kyung; Min Kim, Kyeong; Kim, Jung Young; Lee, Jaetae; Welch, Michael J.; Yoo, Jeongsoo

    2011-01-01

Introduction: Nuclear and optical imaging are complementary in many aspects, and there would be many advantages when optical imaging probes are prepared using radionuclides rather than classic fluorophores, and when nuclear and optical dual images are obtained using a single imaging probe. Methods: The luminescence intensities of various radionuclides having different decay modes were assayed using luminescence imaging and an in vitro luminometer. Radioiodinated Herceptin was injected into a tumor-bearing mouse, and luminescence and microPET images were obtained. A plant dipped in [32P]phosphate solution was scanned in luminescence mode. A radio-TLC plate was also imaged in the same mode. Results: Radionuclides emitting high-energy β+/β− particles showed higher luminescence signals. NIH3T6.7 tumors were detected in both optical and nuclear imaging. The uptake of [32P]phosphate in the plant was easily followed by luminescence imaging. The radio-TLC plate was visualized and radiochemical purity was quantified using luminescence imaging. Conclusion: Many radionuclides emitting energetic β+ or β− particles during decay were found to be imageable in luminescence mode, due mainly to Cerenkov radiation. 'Cerenkov imaging' provides a new optical imaging platform and an invaluable bridge between optical and nuclear imaging. New optical imaging probes could be easily prepared using well-established radioiodination methods. Cerenkov imaging will find more applications in plant science research and autoradiography.

  12. Creating Gaze Annotations in Head Mounted Displays

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Qvarfordt, Pernilla

    2015-01-01

To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation ...

  13. Application of phase contrast imaging to mammography

    International Nuclear Information System (INIS)

    Tohyama, Keiko; Yamada, Katsuhiko; Katafuchi, Tetsuro; Matsuo, Satoru; Morishita, Junji

    2005-01-01

    Phase contrast images were obtained experimentally by using a customized mammography unit with a nominal focal spot size of 100 μm and variable source-to-image distances of up to 1.5 m. The purpose of this study was to examine the applicability and potential usefulness of phase contrast imaging for mammography. A mammography phantom (ACR156 RMI phantom) was imaged, and its visibility was examined. The optical density of the phantom images was adjusted to approximately 1.3 for both the contact and phase contrast images. Forty-one observers (18 medical doctors and 23 radiological technologists) participated in visual evaluation of the images. Results showed that, in comparison with the images of contact mammography, the phantom images of phase contrast imaging demonstrated statistically significantly superior visibility for fibers, clustered micro-calcifications, and masses. Therefore, phase contrast imaging obtained by using the customized mammography unit would be useful for improving diagnostic accuracy in mammography. (author)

  14. Impingement: an annotated bibliography

    International Nuclear Information System (INIS)

    Uziel, M.S.; Hannon, E.H.

    1979-04-01

    This bibliography of 655 annotated references on impingement of aquatic organisms at intake structures of thermal-power-plant cooling systems was compiled from the published and unpublished literature. The bibliography includes references from 1928 to 1978 on impingement monitoring programs; impingement impact assessment; applicable law; location and design of intake structures, screens, louvers, and other barriers; fish behavior and swim speed as related to impingement susceptibility; and the effects of light, sound, bubbles, currents, and temperature on fish behavior. References are arranged alphabetically by author or corporate author. Indexes are provided for author, keywords, subject category, geographic location, taxon, and title

  15. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika

    2009-01-01

In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,

  16. Annotated chemical patent corpus: a gold standard for text mining.

    Directory of Open Access Journals (Sweden)

    Saber A Akhondi

Full Text Available Exploring the chemical and biological space covered by patent applications is crucial in early-stage medicinal chemistry activities. Patent analysis can provide understanding of compound prior art, novelty checking, validation of biological assays, and identification of new starting points for chemical exploration. Extracting chemical and biological entities from patents through manual extraction by expert curators can take a substantial amount of time and resources. Text mining methods can help to ease this process. To validate the performance of such methods, a manually annotated patent corpus is essential. In this study we have produced a large gold standard chemical patent corpus. We developed annotation guidelines and selected 200 full patents from the World Intellectual Property Organization, United States Patent and Trademark Office, and European Patent Office. The patents were pre-annotated automatically and made available to four independent annotator groups each consisting of two to ten annotators. The annotators marked chemicals in different subclasses, diseases, targets, and modes of action. Spelling mistakes and spurious line breaks due to optical character recognition errors were also annotated. A subset of 47 patents was annotated by at least three annotator groups, from which harmonized annotations and inter-annotator agreement scores were derived. One group annotated the full set. The patent corpus includes 400,125 annotations for the full set and 36,537 annotations for the harmonized set. All patents and annotated entities are publicly available at www.biosemantics.org.
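Inter-annotator agreement of the kind reported for this corpus is often summarized with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch for two annotators labeling the same items (toy labels, not the corpus's actual annotation scheme or scoring method):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two annotators' label sequences of equal length."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n          # raw agreement
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)  # chance agreement
    return (observed - expected) / (1 - expected)

print(cohens_kappa(["chem", "chem", "disease", "chem"],
                   ["chem", "disease", "disease", "chem"]))  # 0.5
```

A kappa near 1 indicates the annotation guidelines are being applied consistently; values near 0 mean agreement is no better than chance.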

  17. TU-CD-BRB-07: Identification of Associations Between Radiologist-Annotated Imaging Features and Genomic Alterations in Breast Invasive Carcinoma, a TCGA Phenotype Research Group Study

    Energy Technology Data Exchange (ETDEWEB)

    Rao, A; Net, J [University of Miami, Miami, Florida (United States); Brandt, K [Mayo Clinic, Rochester, Minnesota (United States); Huang, E [National Cancer Institute, NIH, Bethesda, MD (United States); Freymann, J; Kirby, J [Leidos Biomedical Research Inc., Frederick, MD (United States); Burnside, E [University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin (United States); Morris, E; Sutton, E [Memorial Sloan Kettering Cancer Center, New York, NY (United States); Bonaccio, E [Roswell Park Cancer Institute, Buffalo, NY (United States); Giger, M; Jaffe, C [Univ Chicago, Chicago, IL (United States); Ganott, M; Zuley, M [University of Pittsburgh Medical Center - Magee Womens Hospital, Pittsburgh, Pennsylvania (United States); Le-Petross, H [MD Anderson Cancer Center, Houston, TX (United States); Dogan, B [UT MDACC, Houston, TX (United States); Whitman, G [UTMDACC, Houston, TX (United States)

    2015-06-15

Purpose: To determine associations between radiologist-annotated MRI features and genomic measurements in breast invasive carcinoma (BRCA) from the Cancer Genome Atlas (TCGA). Methods: 98 TCGA patients with BRCA were assessed by a panel of radiologists (TCGA Breast Phenotype Research Group) based on a variety of mass and non-mass features according to the Breast Imaging Reporting and Data System (BI-RADS). Batch-corrected gene expression data were obtained from the TCGA Data Portal. The Kruskal-Wallis test was used to assess correlations between categorical image features and tumor-derived genomic features (such as gene pathway activity, copy number and mutation characteristics). Image-derived features were also correlated with estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2/neu) status. Multiple hypothesis correction was done using Benjamini-Hochberg FDR. Associations at an FDR of 0.1 were selected for interpretation. Results: ER status was associated with rim enhancement and peritumoral edema. PR status was associated with internal enhancement. Several components of the PI3K/Akt pathway were associated with rim enhancement as well as heterogeneity. In addition, several components of cell cycle regulation and cell division were associated with imaging characteristics. TP53 and GATA3 mutations were associated with lesion size. MRI features associated with TP53 mutation status were rim enhancement and peritumoral edema. Rim enhancement was associated with activity of RB1, PIK3R1, MAP3K1, AKT1, PI3K, and PIK3CA. Margin status was associated with HIF1A/ARNT, Ras/GTP/PI3K, KRAS, and GADD45A. Axillary lymphadenopathy was associated with RB1 and BCL2L1. Peritumoral edema was associated with Aurora A/GADD45A, BCL2L1, CCNE1, and FOXA1. Heterogeneous internal nonmass enhancement was associated with EGFR, PI3K, AKT1, HGF/MET, and EGFR/Erbb4/neuregulin 1. Diffuse nonmass enhancement was associated with HGF/MET/MUC20/SHIP
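The statistical pipeline in this record runs many per-feature tests and then controls the false discovery rate with Benjamini-Hochberg. The BH step can be sketched as follows (hypothetical p-values; the per-test step would use a Kruskal-Wallis test such as `scipy.stats.kruskal`):

```python
def benjamini_hochberg(pvals, fdr=0.1):
    """Return indices of hypotheses rejected at the given FDR (Benjamini-Hochberg)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices sorted by p-value
    # Find the largest rank k with p_(k) <= (k/m)*fdr; reject all hypotheses up to rank k
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * fdr:
            k_max = rank
    return sorted(order[:k_max])

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.2], fdr=0.1))  # [0, 1, 2, 3]
```

At FDR 0.1, as used in the study, the expected fraction of false positives among the rejected (reported) associations is at most 10%.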

  18. Medical imaging technology reviews and computational applications

    CERN Document Server

    Dewi, Dyah

    2015-01-01

    This book presents the latest research findings and reviews in the field of medical imaging technology, covering ultrasound diagnostics approaches for detecting osteoarthritis, breast carcinoma and cardiovascular conditions, image guided biopsy and segmentation techniques for detecting lung cancer, image fusion, and simulating fluid flows for cardiovascular applications. It offers a useful guide for students, lecturers and professional researchers in the fields of biomedical engineering and image processing.

  19. Applications of locally orderless images

    NARCIS (Netherlands)

    Ginneken, van B.; Haar Romenij, ter B.M.

    2000-01-01

In a recent work, J. J. Koenderink and A. J. Van Doorn considered a family of three intertwined scale-spaces coined the locally orderless image (LOI) (1999, J. Comput. Vision, 31 (2/3), 159–168). The LOI represents the image, observed at inner scale σ, as a local histogram with bin-width β, at each
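The locally orderless image replaces each pixel value with a local histogram: intensity bins of width β, populated with Gaussian spatial weights of scale σ. A toy 1-D sketch of one such local histogram (my simplification with hard bin assignment; the cited paper also smooths over intensity):

```python
import math

def local_histogram(signal, center, sigma, beta, n_bins):
    """Gaussian-weighted histogram of `signal` around `center`:
    spatial aperture sigma, intensity bin width beta."""
    hist = [0.0] * n_bins
    for x, v in enumerate(signal):
        w = math.exp(-(x - center) ** 2 / (2 * sigma ** 2))  # spatial weight
        b = min(int(v / beta), n_bins - 1)                   # intensity bin index
        hist[b] += w
    total = sum(hist)
    return [h / total for h in hist]

h = local_histogram([0.1, 0.4, 0.9, 0.4, 0.1], center=2, sigma=1.0, beta=0.5, n_bins=2)
print(h)  # two normalized bin weights summing to 1
```

Computing this at every location (and varying σ and β) yields the three intertwined scale-spaces the LOI framework describes.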

  20. Laser imaging for clinical applications

    Science.gov (United States)

    Van Houten, John P.; Cheong, Wai-Fung; Kermit, Eben L.; King, Richard A.; Spilman, Stanley D.; Benaron, David A.

    1995-03-01

    Medical optical imaging (MOI) uses light emitted into opaque tissues in order to determine the interior structure and chemical content. These optical techniques have been developed in an attempt to prospectively identify impending brain injuries before they become irreversible, thus allowing injury to be avoided or minimized. Optical imaging and spectroscopy center around the simple idea that light passes through the body in small amounts, and emerges bearing clues about tissues through which it passed. Images can be reconstructed from such data, and this is the basis of optical tomography. Over the past few years, techniques have been developed to allow construction of images from such optical data at the bedside. We have used a time-of-flight system reported earlier to monitor oxygenation and image hemorrhage in neonatal brain. This article summarizes the problems that we believe can be addressed by such techniques, and reports on some of our early results.

  1. Medical image informatics infrastructure design and applications.

    Science.gov (United States)

    Huang, H K; Wong, S T; Pietka, E

    1997-01-01

A picture archiving and communication system (PACS) integrates multimodality images and health information systems to improve the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous repository of images and related data files. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with added value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including the PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application-oriented software. This paper describes these components and their logical connections, and illustrates some applications based on the concept of the MIII.

  2. Miscellaneous applications of radionuclide imaging

    International Nuclear Information System (INIS)

    Mishkin, F.S.; Freeman, L.M.

    1984-01-01

    The procedures discussed in this chapter are either developmental, in limited clinical use, or frankly moribund. A number of radionuclide imaging techniques have proved disappointing when approached from a purely anatomic point of view. This is particularly evident to our colleagues with the explosive growth of the noninvasive imaging procedures, magnetic resonance imaging (NMR), CT, and ultrasound, and the introduction of the less invasive digital radiographic approach to vascular opacification, all of which are capable of providing exquisite anatomic or tissue detail beyond the reach of current or reasonably priced nuclear medicine imaging systems. Yet, most nuclear medicine procedures possess the unique advantage of portraying a physiologic function without interfering with that function. Moreover, the procedures can be employed under conditions of stress, which are likely to bring out pathophysiologic abnormalities that remain masked when unchallenged. Information concerning form without functional data has less meaning than both together. The physiologic information inherent in nuclear medicine imaging may often provide not only key diagnostic information but also illuminate a therapeutic trail. Yet, it is often slighted in favor of the anatomic quest. While mastery of the nuances of imaging details remains critical, radionuclide image interpretation must rest upon a firm physiologic foundation. For this reason, this chapter emphasizes the physiologic approach

  3. Metadata requirements for results of diagnostic imaging procedures: a BIIF profile to support user applications

    Science.gov (United States)

    Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.

    2002-05-01

    A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored as either: (a) pre-rendered format, corresponding to a photographic print, or (b) un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depends on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end user applications, such as simple report text viewing and display of a selected image, are not so demanding and generic image formats such as JPEG are sometimes used. However, these are lacking some basic identification requirements. In this paper we make specific proposals for minimal extensions to generic image metadata of value in various domains, which enable safe use in the case of two simple healthcare end user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).

  4. Imaging gaseous detectors and their applications

    CERN Document Server

    Nappi, Eugenio

    2013-01-01

    Covers the detector and imaging technology and their numerous applications in nuclear and high energy physics, astrophysics, medicine and radiation measurements Foreword from G. Charpak, awarded the Nobel Prize in Physics for this invention.

  5. Computer vision for biomedical image applications. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yanxi [Carnegie Mellon Univ., Pittsburgh, PA (United States). School of Computer Science, The Robotics Institute; Jiang, Tianzi [Chinese Academy of Sciences, Beijing (China). National Lab. of Pattern Recognition, Inst. of Automation; Zhang, Changshui (eds.) [Tsinghua Univ., Beijing, BJ (China). Dept. of Automation

    2005-07-01

    This book constitutes the refereed proceedings of the First International Workshop on Computer Vision for Biomedical Image Applications: Current Techniques and Future Trends, CVBIA 2005, held in Beijing, China, in October 2005 within the scope of ICCV 20. (orig.)

  6. Applications of VLSI circuits to medical imaging

    International Nuclear Information System (INIS)

    O'Donnell, M.

    1988-01-01

    In this paper the application of advanced VLSI circuits to medical imaging is explored. The relationship of both general purpose signal processing chips and custom devices to medical imaging is discussed using examples of fabricated chips. In addition, advanced CAD tools for silicon compilation are presented. Devices built with these tools represent a possible alternative to custom devices and general purpose signal processors for the next generation of medical imaging systems

  7. New approaches in intelligent image analysis techniques, methodologies and applications

    CERN Document Server

    Nakamatsu, Kazumi

    2016-01-01

    This book presents an Introduction and 11 independent chapters, which are devoted to various new approaches of intelligent image processing and analysis. The book also presents new methods, algorithms and applied systems for intelligent image processing, on the following basic topics: Methods for Hierarchical Image Decomposition; Intelligent Digital Signal Processing and Feature Extraction; Data Clustering and Visualization via Echo State Networks; Clustering of Natural Images in Automatic Image Annotation Systems; Control System for Remote Sensing Image Processing; Tissue Segmentation of MR Brain Images Sequence; Kidney Cysts Segmentation in CT Images; Audio Visual Attention Models in Mobile Robots Navigation; Local Adaptive Image Processing; Learning Techniques for Intelligent Access Control; Resolution Improvement in Acoustic Maps. Each chapter is self-contained with its own references. Some of the chapters are devoted to the theoretical aspects while the others are presenting the practical aspects and the...

  8. Prior image constrained image reconstruction in emerging computed tomography applications

    Science.gov (United States)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. 
Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation
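The dissertation's exact formulation is not reproduced in this record; as a sketch, PICCS is usually stated in the literature as a prior-regularized compressed sensing problem:

```latex
x^{*} = \arg\min_{x}\;
  \alpha \,\bigl\| \Psi_{1}\,(x - x_{P}) \bigr\|_{1}
  + (1-\alpha)\,\bigl\| \Psi_{2}\, x \bigr\|_{1}
  \quad \text{subject to} \quad A x = y
```

where \(x_{P}\) is the prior image, \(\Psi_{1}\) and \(\Psi_{2}\) are sparsifying transforms (e.g. discrete gradients), \(A\) is the system matrix, \(y\) the measured projections, and \(\alpha \in [0,1]\) weights prior-image fidelity against conventional compressed sensing regularization.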

  9. Ultraviolet light imaging technology and applications

    Science.gov (United States)

    Yokoi, Takane; Suzuki, Kenji; Oba, Koichiro

    1991-06-01

    Demand for high-quality imaging in the ultraviolet (UV) region has been increasing recently, especially in fields such as forensic investigation, laser experiments, and spent fuel identification. Important requirements for UV imaging devices in such applications are high sensitivity, excellent solar blindness, and small image distortion, since the imaging of very weak UV signals is usually carried out under natural sunlight or room illumination and the image data have to be processed to produce useful two-dimensional quantitative data. A new photocathode has been developed to meet these requirements. It is made of RbTe on a sapphire window, and its quantum efficiency is as high as 20% with a solar blindness of 10,000. The tube is specially designed for UV optics and to minimize image distortion. It has an inverter-type image intensifier structure and intensifies the incident UV light up to approximately 10,000 times. Distortion of the output image is suppressed to less than 1.8% thanks to a specially designed electron-optic lens system. The device has shown excellent results in the observation of objects such as fingerprints and footprints in forensic investigations, the Cherenkov light produced by spent fuel stored in a cooling pool at a nuclear power station, and the UV laser beam path in excimer laser experiments. Furthermore, many other applications of UV imaging are expected in fields such as semiconductors, cosmetics, and electrical power.

  10. CMASA: an accurate algorithm for detecting local protein structural similarity and its application to enzyme catalytic site annotation

    Directory of Open Access Journals (Sweden)

    Li Gong-Hua

    2010-08-01

    Full Text Available Abstract Background The rapid development of structural genomics has resulted in many "unknown function" proteins being deposited in the Protein Data Bank (PDB); the functional prediction of these proteins has thus become a challenge for structural bioinformatics. Several sequence-based and structure-based methods have been developed to predict protein function, but these methods need further improvement in accuracy, sensitivity, and computational speed. Here, an accurate algorithm, CMASA (Contact MAtrix based local Structural Alignment algorithm), has been developed to predict unknown functions of proteins based on local protein structural similarity. The algorithm has been evaluated on a test set of 164 enzyme families and compared to other methods. Results The evaluation shows that CMASA is highly accurate (0.96), sensitive (0.86), and fast enough to be used in large-scale functional annotation. Compared with both sequence-based and global structure-based methods, CMASA can not only find remote homologous proteins but also detect active-site convergence. Compared with other local structure comparison methods, CMASA performs better than both FFF (a method using geometry to predict protein function) and SPASM (a local structure alignment method); it is more sensitive than PINTS and more accurate than JESS (both local structure alignment methods). CMASA was applied to annotate the enzyme catalytic sites of the non-redundant PDB, and at least 166 putative catalytic sites were suggested that are not recorded in the Catalytic Site Atlas (CSA). Conclusions CMASA is an accurate algorithm for detecting local protein structural similarity, and it holds several advantages in predicting enzyme active sites. CMASA can be used in large-scale enzyme active site annotation. 
The CMASA can be available by the
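The record does not give CMASA's actual scoring function; purely to illustrate the general idea of comparing local sites via contact matrices, here is a minimal sketch (the distance cutoff and the agreement-based similarity score are assumptions, not CMASA's definitions):

```python
import numpy as np

def contact_matrix(coords, cutoff=8.0):
    """Binary residue-residue contact map from an (n, 3) array of coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean distances
    return (dist < cutoff).astype(int)

def contact_similarity(site_a, site_b, cutoff=8.0):
    """Fraction of agreeing entries between two equal-sized contact maps."""
    ca = contact_matrix(site_a, cutoff)
    cb = contact_matrix(site_b, cutoff)
    return float((ca == cb).mean())
```

Identical sites score 1.0; structurally divergent sites score lower, which is the intuition behind matching a query site against a library of annotated catalytic sites.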

  11. Magnetic imaging and its applications to materials

    CERN Document Server

    De Graef, Marc

    2000-01-01

    Volume 36 provides an extensive introduction to magnetic imaging, including theory and practice, utilizing a wide range of magnetically sensitive imaging methods. It also illustrates the applications of these modern experimental techniques, together with imaging calculations, to today's advanced magnetic materials. This book is geared towards upper-level undergraduate students and entry-level graduate students majoring in physics or materials science who are interested in magnetic structure and magnetic imaging. Researchers involved in studying magnetic materials should also find the book useful.

  12. Clinical applications of choroidal imaging technologies

    Directory of Open Access Journals (Sweden)

    Jay Chhablani

    2015-01-01

    Full Text Available The choroid provides the major blood supply to the eye, especially to the outer retinal structures. Our understanding of it has significantly improved with the advent of advanced imaging modalities such as the enhanced depth imaging technique and the newer swept-source optical coherence tomography. Recent literature reports quantitative as well as qualitative choroidal changes in various chorioretinal disorders. This review article describes applications of choroidal imaging in the management of common diseases such as age-related macular degeneration, high myopia, central serous chorioretinopathy, chorioretinal inflammatory diseases, and tumors. It also briefly discusses future directions in choroidal imaging, including angiography.

  13. Pediatric Electrocardiographic Imaging (ECGI) Applications

    Science.gov (United States)

    Silva, Jennifer N. A.

    2014-01-01

    Summary Noninvasive electrocardiographic imaging (ECGI) has been used in pediatric and congenital heart patients to better understand their electrophysiologic substrates. In this article we focus on four topics related to pediatric ECGI: 1) ECGI in patients with congenital heart disease and Wolff-Parkinson-White syndrome, 2) ECGI in patients with hypertrophic cardiomyopathy and pre-excitation, 3) ECGI in pediatric patients with Wolff-Parkinson-White syndrome, and 4) ECGI for pediatric cardiac resynchronization therapy. PMID:25722754

  14. Computer applications in diagnostic imaging.

    Science.gov (United States)

    Horii, S C

    1991-03-01

    This article has introduced the nature, generation, use, and future of digital imaging. As digital technology has transformed other aspects of our lives--has the reader tried to buy a conventional record album recently? almost all music store stock is now compact disks--it is sure to continue to transform medicine as well. Whether that transformation will be to our liking as physicians or a source of frustration and disappointment is dependent on understanding the issues involved.

  15. Rotation Covariant Image Processing for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Henrik Skibbe

    2013-01-01

    Full Text Available With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial descriptions to fulfill this demand. This paper proposes a general framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.

  16. Teaching and Learning Communities through Online Annotation

    Science.gov (United States)

    van der Pluijm, B.

    2016-12-01

    What do colleagues do with your assigned textbook? What do they say or think about the material? Do you want students to be more engaged in their learning experience? If so, online materials that complement the standard lecture format provide a new opportunity through managed, online group annotation that leverages the ubiquity of internet access while personalizing learning. The concept is illustrated with the new online textbook "Processes in Structural Geology and Tectonics", by Ben van der Pluijm and Stephen Marshak, which offers a platform for sharing experiences, supplementary materials, and approaches, including readings, mathematical applications, exercises, challenge questions, quizzes, alternative explanations, and more. The annotation framework used is Hypothes.is, which offers a free, open-platform markup environment for annotation of websites and PDF postings. The annotations can be public, grouped, or individualized, as desired, including export access and download of annotations. A teacher group, hosted by a moderator/owner, limits access to members of a user group of teachers, so that its members can use, copy, or transcribe annotations for their own lesson material. Likewise, an instructor can host a student group that encourages sharing of observations, questions, and answers among students and instructor. The instructor can also create one or more closed groups that offer study help and hints to students. Options galore, all of which aim to engage students and to promote greater responsibility for their learning experience. Beyond new capacity, the ability to analyze student annotations supports individual learners and their needs. For example, student notes can be analyzed for key phrases and concepts to identify misunderstandings, omissions, and problems. Example annotations can also be shared to enhance notetaking skills and to help with studying. 
Lastly, online annotation allows active application to posted lecture slides, supporting real-time notetaking
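The key-phrase analysis of student annotations mentioned above can be sketched with a simple term tally. This is an illustration only; the stopword list and tokenization are assumptions, not Hypothes.is functionality:

```python
import re
from collections import Counter

# illustrative stopword list; a real analysis would use a fuller one
STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "are"}

def key_phrases(annotations, top_n=3):
    """Tally non-stopword terms across a group's annotation texts.

    Frequently repeated terms hint at shared questions or misunderstandings.
    """
    words = []
    for text in annotations:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(top_n)
```

For instance, running this over a set of notes that repeatedly mention "strain" would surface that term first, flagging a concept the class is wrestling with.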

  17. Ubiquitous Annotation Systems

    DEFF Research Database (Denmark)

    Hansen, Frank Allan

    2006-01-01

    Ubiquitous annotation systems allow users to annotate physical places, objects, and persons with digital information. Especially in the field of location-based information systems much work has been done to implement adaptive and context-aware systems, but few efforts have focused on the general requirements for linking information to objects in both physical and digital space. This paper surveys annotation techniques from open hypermedia systems, Web based annotation systems, and mobile and augmented reality systems to illustrate different approaches to four central challenges ubiquitous annotation systems have to deal with: anchoring, structuring, presentation, and authoring. Through a number of examples each challenge is discussed and HyCon, a context-aware hypermedia framework developed at the University of Aarhus, Denmark, is used to illustrate an integrated approach to ubiquitous annotations...

  18. [Prescription annotations in Welfare Pharmacy].

    Science.gov (United States)

    Han, Yi

    2018-03-01

    Welfare Pharmacy contains medical formulas documented by the government and official prescriptions used by the official pharmacy in the pharmaceutical process. In the last years of the Southern Song Dynasty, anonymous authors added many prescription annotations, carried out textual research on the names, sources, composition, and origins of the prescriptions, and supplemented them with important historical data on medical cases and historical facts. The annotations of Welfare Pharmacy gathered the essence of medical theory and can be used as precious material for correctly understanding the syndrome differentiation, compatibility regularity, and clinical application of the prescriptions. This article investigates in depth the style and form of the prescription annotations in Welfare Pharmacy; the names of prescriptions and the evolution of terminology; the major functions of the prescriptions; processing methods, instructions for taking medicine, and taboos; the medical cases and clinical efficacy of the prescriptions; and the backgrounds, sources, composition, and cultural meanings of the prescriptions. It proposes that the prescription annotations played an active role in the textual dissemination, patent medicine production, and clinical diagnosis and treatment of Welfare Pharmacy. This not only helps in understanding the changes in the names and terms of traditional Chinese medicines in Welfare Pharmacy, but also provides a basis for understanding the knowledge sources, compatibility regularity, important drug innovations, and clinical medications of its prescriptions. Copyright© by the Chinese Pharmaceutical Association.

  19. Scattered Radiation Emission Imaging: Principles and Applications

    Directory of Open Access Journals (Sweden)

    M. K. Nguyen

    2011-01-01

    Full Text Available Imaging processes built on the Compton scattering effect have been under continuing investigation since they were first suggested in the 1950s. However, despite many innovative contributions, there are still formidable theoretical and technical challenges to overcome. In this paper, we review the state-of-the-art principles of so-called scattered radiation emission imaging. Basically, it consists of using the cleverly collected scattered radiation from a radiating object to reconstruct its inner structure. Image formation is based on the mathematical concept of compounded conical projection. It entails a Radon transform defined on circular cone surfaces in order to express the scattered radiation flux density on a detecting pixel. We discuss in particular invertible cases of such conical Radon transforms, which form a mathematical basis for image reconstruction methods. Numerical simulations performed in two and three space dimensions speak in favor of the viability of this imaging principle and its potential applications in various fields.
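Schematically, the conical Radon transform underlying this image formation model integrates the emission density over cone surfaces. In one common notation (details vary between formulations, so this is a sketch rather than the paper's definition):

```latex
\mathcal{C}f(\mathbf{d}, \boldsymbol{\Omega}, \omega)
  = \int_{\mathcal{K}(\mathbf{d}, \boldsymbol{\Omega}, \omega)}
      f(\mathbf{x})\, \mathrm{d}\sigma(\mathbf{x})
```

where \(\mathcal{K}(\mathbf{d}, \boldsymbol{\Omega}, \omega)\) is the cone with apex at the detecting pixel \(\mathbf{d}\), axis direction \(\boldsymbol{\Omega}\), and half-opening angle \(\omega\), the latter fixed by the measured photon energy through the Compton scattering relation. Inversion of \(\mathcal{C}\) recovers the emission density \(f\).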

  20. Document Examination: Applications of Image Processing Systems.

    Science.gov (United States)

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.

  1. Quality measures in applications of image restoration.

    Science.gov (United States)

    Kriete, A; Naim, M; Schafer, L

    2001-01-01

    We describe a new method for the estimation of image quality in image restoration applications. We demonstrate this technique on a simulated data set of fluorescent beads, in comparison with restoration by three different deconvolution methods. Both the number of iterations and a regularisation factor are varied to enforce changes in the resulting image quality. First, the data sets are directly compared by an accuracy measure. These values serve to validate the image quality descriptor, which is developed on the basis of optical information theory. This most general measure takes into account the spectral energies and the noise, weighted in a logarithmic fashion. It is demonstrated that this method is particularly helpful as a user-oriented method to control the output of iterative image restorations and to eliminate the guesswork in choosing a suitable number of iterations.
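The paper's exact descriptor is not reproduced in this record; the following sketch merely illustrates a log-weighted spectral measure in the same spirit (the noise level is an assumed parameter, and the capacity-style weighting is an illustrative choice):

```python
import numpy as np

def spectral_quality(image, noise_power=1e-3):
    """Log-weighted spectral energy-to-noise measure of a 2D image.

    Each spectral component contributes log2(1 + S/N), in the spirit of
    information-theoretic capacity; higher values indicate more
    recoverable spectral content relative to the assumed noise floor.
    """
    spec = np.abs(np.fft.fft2(image)) ** 2   # power spectrum
    spec /= spec.size                        # normalize by image size
    return float(np.sum(np.log2(1.0 + spec / noise_power)) / spec.size)
```

Such a measure can be tracked across deconvolution iterations: when it plateaus or degrades, further iterations add noise rather than restored signal, which is the kind of user-oriented stopping criterion the abstract describes.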

  2. Snapshot hyperspectral imaging and practical applications

    International Nuclear Information System (INIS)

    Wong, G

    2009-01-01

    Traditional broadband imaging involves the digital representation of a remote scene within a reduced colour space. Hyperspectral imaging exploits the full spectral dimension, which better reflects the continuous nature of actual spectra. Conventional techniques are all time-delayed whereby spatial or spectral scanning is required for hypercube generation. An innovative and patented technique developed at Heriot-Watt University offers significant potential as a snapshot sensor, to enable benefits for the wider public beyond aerospace imaging. This student-authored paper seeks to promote awareness of this field within the photonic community and its potential advantages for real-time practical applications.

  3. Application of image guidance in pituitary surgery

    Science.gov (United States)

    de Lara, Danielle; Filho, Leo F. S. Ditzel; Prevedello, Daniel M.; Otto, Bradley A.; Carrau, Ricardo L.

    2012-01-01

    Background: Surgical treatment of pituitary pathologies has evolved over the years, adding safety and decreasing procedure-related morbidity. Advances in the field of radiology, coupled with stereotactic technology and computer modeling, have culminated in the contemporary and widespread use of image guidance systems as we know them today. Image-guided navigation has become a frequently used technology that provides continuous three-dimensional information for the accurate performance of neurosurgical procedures. We present a discussion of the application of image guidance in pituitary surgery. Methods: Major indications for image-guided neuronavigation in pituitary surgery are presented and demonstrated with illustrative cases. Limitations of this technology are also presented. Results: A history of previous transsphenoidal surgery, anatomical variants of the sphenoid sinus, tumors with a close relation to the internal carotid arteries, and extrasellar tumors are the most important indications for image guidance in pituitary surgery. The high cost of the equipment, increased operative time due to setup and registration, and the need for specific training of operating room personnel can be pointed to as limitations of this technology. Conclusion: Intraoperative image guidance systems provide real-time images, increasing surgical accuracy and enabling safe, minimally invasive interventions. However, the use of intraoperative navigation is not a replacement for surgical experience and systematic knowledge of regional anatomy. It must be recognized as a tool by which the neurosurgeon can reduce the risk associated with the surgical approach and treatment of pituitary pathologies. PMID:22826819

  4. Oximetry using multispectral imaging: theory and application

    Science.gov (United States)

    MacKenzie, Lewis E.; Harvey, Andrew R.

    2018-06-01

    Multispectral imaging (MSI) is a technique for measurement of blood oxygen saturation in vivo that can be applied using various imaging modalities to provide new insights into physiology and disease development. This tutorial aims to provide a thorough introduction to the theory and application of MSI oximetry for researchers new to the field, whilst also providing detailed information for more experienced researchers. The optical theory underlying two-wavelength oximetry, three-wavelength oximetry, pulse oximetry, and multispectral oximetry algorithms are described in detail. The varied challenges of applying MSI oximetry to in vivo applications are outlined and discussed, covering: the optical properties of blood and tissue, optical paths in blood vessels, tissue auto-fluorescence, oxygen diffusion, and common oximetry artefacts. Essential image processing techniques for MSI are discussed, in particular, image acquisition, image registration strategies, and blood vessel line profile fitting. Calibration and validation strategies for MSI are discussed, including comparison techniques, physiological interventions, and phantoms. The optical principles and unique imaging capabilities of various cutting-edge MSI oximetry techniques are discussed, including photoacoustic imaging, spectroscopic optical coherence tomography, and snapshot MSI.
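Two-wavelength oximetry as described reduces to solving a 2x2 linear system of the Beer-Lambert attenuation model: the optical density at each wavelength is a weighted sum of the oxy- and deoxyhemoglobin contributions. A minimal sketch (the numerical coefficients in the usage below are illustrative placeholders, not tabulated extinction values):

```python
import numpy as np

def so2_two_wavelength(od, eps_hbo2, eps_hb):
    """Estimate oxygen saturation from optical densities at two wavelengths.

    od       : measured optical densities at the two wavelengths (length 2)
    eps_hbo2 : extinction coefficients of HbO2 at the same wavelengths
    eps_hb   : extinction coefficients of Hb at the same wavelengths
    Returns SO2 = [HbO2] / ([HbO2] + [Hb]); pathlength is assumed common
    to both wavelengths and cancels in the ratio.
    """
    E = np.column_stack([eps_hbo2, eps_hb])   # 2x2 extinction matrix
    c_hbo2, c_hb = np.linalg.solve(E, od)     # relative concentrations
    return float(c_hbo2 / (c_hbo2 + c_hb))
```

In practice, the tutorial's in vivo challenges (pathlength differences, scattering, autofluorescence) mean the coefficients must be calibrated rather than taken directly from cuvette measurements.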

  5. Multimodal nanoparticle imaging agents: design and applications

    Science.gov (United States)

    Burke, Benjamin P.; Cawthorne, Christopher; Archibald, Stephen J.

    2017-10-01

    Molecular imaging, where the location of molecules or nanoscale constructs can be tracked in the body to report on disease or biochemical processes, is rapidly expanding to include combined modality or multimodal imaging. No single imaging technique can offer the optimum combination of properties (e.g. resolution, sensitivity, cost, availability). The rapid technological advances in hardware to scan patients, and software to process and fuse images, are pushing the boundaries of novel medical imaging approaches, and hand-in-hand with this is the requirement for advanced and specific multimodal imaging agents. These agents can be detected using a selection from radioisotope, magnetic resonance and optical imaging, among others. Nanoparticles offer great scope in this area as they lend themselves, via facile modification procedures, to act as multifunctional constructs. They have relevance as therapeutics and drug delivery agents that can be tracked by molecular imaging techniques with the particular development of applications in optically guided surgery and as radiosensitizers. There has been a huge amount of research work to produce nanoconstructs for imaging, and the parameters for successful clinical translation and validation of therapeutic applications are now becoming much better understood. It is an exciting time of progress for these agents as their potential is closer to being realized with translation into the clinic. The coming 5-10 years will be critical, as we will see if the predicted improvement in clinical outcomes becomes a reality. Some of the latest advances in combination modality agents are selected and the progression pathway to clinical trials analysed. This article is part of the themed issue 'Challenges for chemistry in molecular imaging'.

  6. Fundus autofluorescence applications in retinal imaging

    Science.gov (United States)

    Gabai, Andrea; Veritti, Daniele; Lanzetta, Paolo

    2015-01-01

    Fundus autofluorescence (FAF) is a relatively new imaging technique that can be used to study retinal diseases. It provides information on retinal metabolism and health. Several different pathologies can be detected. Peculiar AF alterations can help the clinician to monitor disease progression and to better understand its pathogenesis. In the present article, we review FAF principles and clinical applications. PMID:26139802

  7. Fundus autofluorescence applications in retinal imaging

    Directory of Open Access Journals (Sweden)

    Andrea Gabai

    2015-01-01

    Full Text Available Fundus autofluorescence (FAF) is a relatively new imaging technique that can be used to study retinal diseases. It provides information on retinal metabolism and health. Several different pathologies can be detected. Peculiar AF alterations can help the clinician to monitor disease progression and to better understand its pathogenesis. In the present article, we review FAF principles and clinical applications.

  8. Curve Matching with Applications in Medical Imaging

    DEFF Research Database (Denmark)

    Bauer, Martin; Bruveris, Martins; Harms, Philipp

    2015-01-01

    In recent years, Riemannian shape analysis of curves and surfaces has found several applications in medical image analysis. In this paper we present a numerical discretization of second-order Sobolev metrics on the space of regular curves in Euclidean space. This class of metrics has several...
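A second-order Sobolev metric on regular curves, of the class discussed above, is commonly written as follows (the weights \(a_i \ge 0\) are constants; exact conventions vary between papers):

```latex
G_c(h, k) = \int_{S^1}
    a_0 \langle h, k \rangle
  + a_1 \langle D_s h,\, D_s k \rangle
  + a_2 \langle D_s^2 h,\, D_s^2 k \rangle \;\mathrm{d}s
```

where \(c\) is the curve, \(h, k\) are tangent vectors (deformation fields along \(c\)), \(D_s = \partial_\theta / |c'|\) is the arc-length derivative, and \(\mathrm{d}s = |c'|\,\mathrm{d}\theta\) is arc-length integration. Including the second-derivative term is what makes the metric second order.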

  9. Recent applications of hyperspectral imaging in microbiology.

    Science.gov (United States)

    Gowen, Aoife A; Feng, Yaoze; Gaston, Edurne; Valdramidis, Vasilis

    2015-05-01

    Hyperspectral chemical imaging (HSI) is a broad term encompassing spatially resolved spectral data obtained through a variety of modalities (e.g. Raman scattering, Fourier transform infrared microscopy, fluorescence and near-infrared chemical imaging). It goes beyond the capabilities of conventional imaging and spectroscopy by obtaining spatially resolved spectra from objects at spatial resolutions varying from the level of single cells up to macroscopic objects (e.g. foods). In tandem with recent developments in instrumentation and sampling protocols, applications of HSI in microbiology have increased rapidly. This article gives a brief overview of the fundamentals of HSI and a comprehensive review of applications of HSI in microbiology over the past 10 years. Technical challenges and future perspectives for these techniques are also discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Pharmaceutical applications of magnetic resonance imaging (MRI).

    Science.gov (United States)

    Richardson, J Craig; Bowtell, Richard W; Mäder, Karsten; Melia, Colin D

    2005-06-15

    Magnetic resonance imaging (MRI) is a powerful imaging modality that provides internal images of materials and living organisms on a microscopic and macroscopic scale. It is non-invasive and non-destructive, and one of very few techniques that can observe internal events inside undisturbed specimens in situ. It is versatile, as a wide range of NMR modalities can be accessed, and 2D and 3D imaging can be undertaken. Despite widespread use and major advances in clinical MRI, it has seen limited application in the pharmaceutical sciences. In vitro studies have focussed on drug release mechanisms in polymeric delivery systems, but isolated studies of bioadhesion, tablet properties, and extrusion and mixing processes illustrate the wider potential. Perhaps the greatest potential however, lies in investigations of pharmaceuticals in vivo, where pilot human and animal studies have demonstrated we can obtain unique insights into the behaviour of gastrointestinal, topical, colloidal, and targeted drug delivery systems.

  11. Multimodal interaction in image and video applications

    CERN Document Server

    Sappa, Angel D

    2013-01-01

    Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. Not all problems can be solved automatically; for many applications, human interaction is the only way to tackle them. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. In fact, the idea of interactive computer systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...

  12. Uncooled LWIR imaging: applications and market analysis

    Science.gov (United States)

    Takasawa, Satomi

    2015-05-01

    The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both pixel-pitch reduction and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing the opportunity to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market requiring uncooled LWIR imaging sensors with sensitivity as close to that of cooled sensors as possible, while the other is a low-end market driven by miniaturization and price reduction. In the latter case especially, approaches towards the consumer market have recently appeared, such as applications of uncooled LWIR imaging sensors to night vision for automobiles and smartphones. The appearance of such commodity products is changing existing business models. Further technological innovation is necessary for creating a consumer market, and there will be room for other companies dealing in components and materials, such as lens and getter materials, to enter the consumer market.

  13. An efficient annotation and gene-expression derivation tool for Illumina Solexa datasets.

    Science.gov (United States)

    Hosseini, Parsa; Tremblay, Arianne; Matthews, Benjamin F; Alkharouf, Nadim W

    2010-07-02

    An Illumina flow cell with all eight lanes occupied produces well over a terabyte worth of images, with gigabytes of reads following sequence alignment. The ability to translate such reads into meaningful annotation is therefore of great concern and importance. Very easily, one can get flooded with a great volume of textual, unannotated data irrespective of read quality or size. CASAVA, an optional analysis tool for Illumina sequencing experiments, provides INDEL detection, SNP information, and allele calling. Extracting from such analysis a measure of gene expression in the form of tag counts, and furthermore annotating the reads, is therefore of significant value. We developed TASE (Tag counting and Analysis of Solexa Experiments), a rapid tag-counting and annotation software tool specifically designed for Illumina CASAVA sequencing datasets. Developed in Java and deployed using the jTDS JDBC driver and a SQL Server backend, TASE provides an extremely fast means of calculating gene expression through tag counts while annotating sequenced reads with each gene's presumed function, from any given CASAVA build. Such a build is generated for both DNA and RNA sequencing. Analysis is broken into two distinct components: DNA sequence or read concatenation, followed by tag counting and annotation. The end result is output containing the homology-based functional annotation and the respective gene expression measure, signifying how many times sequenced reads were found within the genomic ranges of functional annotations. TASE is a powerful tool that facilitates the process of annotating a given Illumina Solexa sequencing dataset. Our results indicate that both homology-based annotation and tag-count analysis are achieved in very efficient times, allowing researchers to delve deep into a given CASAVA build and maximize information extraction from a sequencing dataset. 
TASE is specially designed to translate sequence data
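
    The tag-counting step TASE performs can be illustrated with a minimal sketch: count how many aligned reads fall within the genomic range of each functional annotation. The gene names, coordinates and table layout below are invented for illustration; TASE's actual SQL Server schema is not described in the abstract.

    ```python
    from collections import defaultdict

    # Hypothetical annotation table: gene -> (chromosome, start, end, function).
    annotations = {
        "geneA": ("chr1", 100, 500, "kinase activity"),
        "geneB": ("chr1", 800, 1200, "transcription factor"),
    }

    def count_tags(reads, annotations):
        """Count aligned reads falling within each annotated gene's genomic range."""
        counts = defaultdict(int)
        for chrom, pos in reads:  # each read reduced to (chromosome, alignment position)
            for gene, (g_chrom, start, end, _func) in annotations.items():
                if chrom == g_chrom and start <= pos <= end:
                    counts[gene] += 1
        return dict(counts)

    reads = [("chr1", 150), ("chr1", 450), ("chr1", 900), ("chr2", 300)]
    print(count_tags(reads, annotations))  # {'geneA': 2, 'geneB': 1}
    ```

    A production tool would of course use interval indexing rather than this quadratic scan, but the output (tag counts keyed to functional annotations) is the same.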

  14. Applications Of Binary Image Analysis Techniques

    Science.gov (United States)

    Tropf, H.; Enderle, E.; Kammerer, H. P.

    1983-10-01

    After discussing the conditions under which binary image analysis techniques can be used, three new applications of the fast binary image analysis system S.A.M. (Sensorsystem for Automation and Measurement) are reported: (1) The human view direction is measured at TV frame rate while the subject's head is freely movable. (2) Industrial parts hanging on a moving conveyor are classified prior to spray painting by a robot. (3) In automotive wheel assembly, the eccentricity of the wheel is minimized by turning the tyre relative to the rim in order to balance the eccentricity of the components.
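
    The wheel-assembly application (3) rests on a classic match-mounting idea: the tyre and rim runouts act as 2-D vectors, and rotating the tyre shifts the phase of its vector so that the vector sum is minimized. A minimal sketch with made-up runout values (the paper's actual measurement procedure is not given in this abstract):

    ```python
    import math

    def best_rotation(rim_ecc, tyre_ecc):
        """Find the tyre-to-rim rotation angle minimizing combined eccentricity.

        Each eccentricity is modelled as a runout vector (magnitude, phase in
        radians). Rotating the tyre by theta shifts its phase; the combined
        runout is the vector sum. A coarse 1-degree grid search suffices here.
        """
        (r_mag, r_ph), (t_mag, t_ph) = rim_ecc, tyre_ecc
        best = None
        for step in range(360):
            theta = math.radians(step)
            x = r_mag * math.cos(r_ph) + t_mag * math.cos(t_ph + theta)
            y = r_mag * math.sin(r_ph) + t_mag * math.sin(t_ph + theta)
            combined = math.hypot(x, y)
            if best is None or combined < best[1]:
                best = (step, combined)
        return best

    # Rim runout 0.6 mm and tyre runout 0.5 mm, initially in phase:
    angle, residual = best_rotation((0.6, 0.0), (0.5, 0.0))
    print(angle, round(residual, 3))  # 180 0.1 -- the vectors oppose each other
    ```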

  15. Application of UV Imaging in Formulation Development

    DEFF Research Database (Denmark)

    Sun, Yu; Østergaard, Jesper

    2017-01-01

    defining formulation behavior after exposure to the aqueous environments and pharmaceutical performance is critical in pharmaceutical development, manufacturing and quality control of drugs. UV imaging has been explored as a tool for qualitative and quantitative characterization of drug dissolution...... related to the structural properties of the drug substance or formulation can be monitored. UV imaging is a non-intrusive and simple-to-operate analytical technique which holds potential for providing a mechanistic foundation for formulation development. This review aims to cover applications of UV...

  16. Quick Pad Tagger : An Efficient Graphical User Interface for Building Annotated Corpora with Multiple Annotation Layers

    OpenAIRE

    Marc Schreiber; Kai Barkschat; Bodo Kraft; Albert Zundorf

    2015-01-01

    More and more domain-specific applications on the internet make use of Natural Language Processing (NLP) tools (e.g. Information Extraction systems). The output quality of these applications relies on the output quality of the NLP tools used. Often, the quality can be increased by annotating a domain-specific corpus. However, annotating a corpus is a time-consuming and exhausting task. To reduce the annotation time we present...

  17. Extended SWIR imaging sensors for hyperspectral imaging applications

    Science.gov (United States)

    Weber, A.; Benecke, M.; Wendler, J.; Sieck, A.; Hübner, D.; Figgemeier, H.; Breiter, R.

    2016-05-01

    AIM has developed SWIR modules, including FPAs based on liquid phase epitaxy (LPE) grown MCT, usable in a wide range of hyperspectral imaging applications. Silicon read-out integrated circuits (ROICs) provide various integration and readout modes, including specific functions for spectral imaging applications. An important advantage of MCT-based detectors is the tunable band gap: the spectral sensitivity of MCT detectors can be engineered to cover the extended SWIR spectral region up to 2.5 μm without compromising performance. AIM has also developed the technology to extend the spectral sensitivity of its SWIR modules into the visible (VIS) range. This has been successfully demonstrated for 384x288 and 1024x256 FPAs with 24 μm pitch. Results are presented in this paper. The FPAs are integrated into compact dewar cooler configurations using different types of coolers, such as rotary coolers, AIM's long-life split linear cooler MCC030 or the extreme long-life SF100 pulse tube cooler. The SWIR modules include command and control electronics (CCE) which allow easy interfacing through a standard digital interface. The development status and performance results of AIM's latest MCT SWIR modules suitable for hyperspectral systems and applications will be presented.

  18. Active gated imaging for automotive safety applications

    Science.gov (United States)

    Grauer, Yoav; Sonn, Ezri

    2015-03-01

    The paper presents the Active Gated Imaging System (AGIS) in relation to the automotive field. AGIS is based on a fast gated camera equipped with a unique Gated-CMOS sensor and a pulsed illuminator, synchronized in the time domain to record images of a certain range of interest, which are then processed by real-time computer vision algorithms. In recent years we have learned the system parameters which are most beneficial to night-time driving in terms of field of view, illumination profile, resolution and processing power. AGIS also provides day-time imaging with additional capabilities that enhance computer vision safety applications. AGIS is an excellent candidate for camera-based Advanced Driver Assistance Systems (ADAS) and, in the future, a path to autonomous driving, based on its outstanding low/high light-level performance, harsh weather capabilities and potential for 3D growth.
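
    The "range of interest" gating used in AGIS-style systems follows from simple time-of-flight geometry: the camera gate opens after the round-trip delay to the near edge of the slice and stays open for the round-trip time across the slice. A sketch with illustrative distances (the actual AGIS timing parameters are not published in this abstract):

    ```python
    C = 299_792_458  # speed of light, m/s

    def gate_timing(range_min_m, range_max_m):
        """Return (gate-open delay, gate width) in nanoseconds for imaging
        only the slice between range_min_m and range_max_m.

        Light must travel out and back, hence the factor of two."""
        open_ns = 2 * range_min_m / C * 1e9
        width_ns = 2 * (range_max_m - range_min_m) / C * 1e9
        return open_ns, width_ns

    # Gate on a slice from 75 m to 150 m ahead of the vehicle:
    open_ns, width_ns = gate_timing(75, 150)
    print(round(open_ns, 1), round(width_ns, 1))  # 500.3 500.3
    ```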

  19. Deep convolutional neural networks for annotating gene expression patterns in the mouse brain.

    Science.gov (United States)

    Zeng, Tao; Li, Rongjian; Mukkamala, Ravi; Ye, Jieping; Ji, Shuiwang

    2015-05-07

    Profiling gene expression in brain structures at various spatial and temporal scales is essential to understanding how genes regulate the development of brain structures. The Allen Developing Mouse Brain Atlas provides high-resolution 3-D in situ hybridization (ISH) gene expression patterns in multiple developing stages of the mouse brain. Currently, the ISH images are annotated with anatomical terms manually. In this paper, we propose a computational approach to annotate gene expression pattern images in the mouse brain at various structural levels over the course of development. We applied a deep convolutional neural network that was trained on a large set of natural images to extract features from the ISH images of the developing mouse brain. As a baseline representation, we applied invariant image feature descriptors to capture local statistics from ISH images and used the bag-of-words approach to build image-level representations. Both types of features from multiple ISH image sections of the entire brain were then combined to build 3-D, brain-wide gene expression representations. We employed regularized learning methods for discriminating gene expression patterns in different brain structures. Results show that our approach of using the convolutional model as a feature extractor achieved superior performance in annotating gene expression patterns at multiple levels of brain structures throughout four developing ages. Overall, we achieved an average AUC of 0.894 ± 0.014, compared with 0.820 ± 0.046 yielded by the bag-of-words approach. A deep convolutional neural network model trained on natural image sets and applied to gene expression pattern annotation tasks yielded superior performance, demonstrating that its transfer learning property is applicable to such biological image sets.
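
    The bag-of-words baseline described above can be sketched as follows: assign each local descriptor of an ISH section to its nearest "visual word" in a codebook and histogram the assignments. The codebook here is random toy data standing in for k-means centroids; the descriptor dimension and vocabulary size are illustrative, not those of the paper.

    ```python
    import numpy as np

    def bag_of_words(descriptors, codebook):
        """Build an image-level histogram over visual words.

        descriptors: (n, d) local feature vectors from one image section.
        codebook:    (k, d) visual-word centroids (in practice learned by
                     k-means on a training set of descriptors).
        """
        # Assign each descriptor to its nearest visual word.
        dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
        words = dists.argmin(axis=1)
        hist = np.bincount(words, minlength=len(codebook)).astype(float)
        return hist / hist.sum()  # normalize so sections of any size are comparable

    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(8, 16))       # toy codebook of 8 visual words
    descriptors = rng.normal(size=(100, 16))  # toy local descriptors
    h = bag_of_words(descriptors, codebook)
    print(h.shape, round(h.sum(), 6))  # (8,) 1.0
    ```

    Histograms from multiple sections can then be concatenated or pooled into the brain-wide representation the abstract mentions.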

  20. Development of terahertz systems for imaging applications

    OpenAIRE

    Maestrojuán Biurrun, Itziar

    2016-01-01

    The main goal of this thesis was the study and development of technology, specifically harmonic mixers, working at millimetre and submillimetre frequencies in order to implement systems for imaging applications. A couple of sub-harmonic...

  1. Application of particle imaging velocimetry in windtunnels

    International Nuclear Information System (INIS)

    Kompenhans, J.; Reichmuth, J.

    1987-01-01

    Recently the instantaneous and nonintrusive measurement of the flow velocity in a large area of the flow field (two-dimensional plane) became possible by means of particle imaging velocimetry (PIV). Up to now PIV has mainly been used for model experiments at low flow velocities in order to test and to improve the measuring technique. The present aim is the application of PIV in large wind tunnels at high flow velocities. 7 references
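
    The core PIV evaluation step is a cross-correlation between interrogation windows of two successive particle images; the location of the correlation peak gives the mean particle displacement. A minimal FFT-based sketch (real PIV codes add windowing, sub-pixel peak fitting and outlier validation):

    ```python
    import numpy as np

    def piv_displacement(win_a, win_b):
        """Estimate the particle displacement between two interrogation windows
        via FFT-based circular cross-correlation."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
        peak = np.unravel_index(corr.argmax(), corr.shape)
        # Wrap peak coordinates into signed displacements.
        shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
        return tuple(shifts)

    # Synthetic check: a random particle pattern shifted by (3, 5) pixels.
    rng = np.random.default_rng(1)
    frame1 = rng.random((32, 32))
    frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))
    print(piv_displacement(frame1, frame2))  # (3, 5)
    ```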

  2. Algorithms for reconstructing images for industrial applications

    International Nuclear Information System (INIS)

    Lopes, R.T.; Crispim, V.R.

    1986-01-01

    Several algorithms for reconstructing objects from their projections are being studied in our laboratory for industrial applications. Such algorithms are useful for locating the position and shape of different compositions of materials in the object. A comparative study of two algorithms is made. The two investigated algorithms are MART (Multiplicative Algebraic Reconstruction Technique) and the convolution method. The comparison is carried out from the point of view of the quality of the reconstructed image, the number of views and the cost. (Author)

  3. Thermoelectric infrared imaging sensors for automotive applications

    Science.gov (United States)

    Hirota, Masaki; Nakajima, Yasushi; Saito, Masanori; Satou, Fuminori; Uchiyama, Makoto

    2004-07-01

    This paper describes three low-cost thermoelectric infrared imaging sensors, having 1,536-, 2,304-, and 10,800-element thermoelectric focal plane arrays (FPAs) respectively, and two experimental automotive application systems. The FPAs are fabricated basically with a conventional IC process and micromachining technologies, and have low-cost potential. Among these sensors, the 2,304-element sensor provides a high responsivity of 5,500 V/W and a very small size by adopting a vacuum-sealed package integrated with a wide-angle ZnS lens. One experimental system, incorporated in the Nissan ASV-2, is a blind-spot pedestrian warning system that employs four infrared imaging sensors. This system helps alert the driver to the presence of a pedestrian in a blind spot by detecting the infrared radiation emitted from the person's body. The system can also prevent the vehicle from moving in the direction of the pedestrian. The other is a rearview camera system with an infrared detection function. This system consists of a visible camera and infrared sensors, and it helps alert the driver to the presence of a pedestrian in a rear blind spot. Various issues that will need to be addressed in order to expand the automotive applications of IR imaging sensors in the future are also summarized. This performance is suitable for consumer electronics as well as automotive applications.

  4. Applications of superconducting bolometers in security imaging

    International Nuclear Information System (INIS)

    Luukanen, A; Leivo, M M; Rautiainen, A; Grönholm, M; Toivanen, H; Grönberg, L; Helistö, P; Mäyrä, A; Aikio, M; Luukanen, A; Grossman, E N

    2012-01-01

    Millimeter-wave (MMW) imaging systems are currently undergoing deployment worldwide for airport security screening applications. Security screening through MMW imaging is facilitated by the relatively good transmission of these wavelengths through common clothing materials. Given the long wavelength of operation (frequencies between 20 GHz and ∼100 GHz, corresponding to wavelengths between 1.5 cm and 3 mm), existing systems are suited for close-range imaging only, due to the substantial diffraction effects associated with practical aperture diameters. Present and arising security challenges call for systems that are capable of imaging concealed threat items at stand-off ranges beyond 5 meters at near video frame rates, requiring a substantial increase in operating frequency in order to achieve useful spatial resolution. The construction of such imaging systems operating at several hundred GHz has been hindered by the lack of submm-wave low-noise amplifiers. In this paper we summarize our efforts in developing a submm-wave video camera which utilizes cryogenic antenna-coupled microbolometers as detectors. Whilst superconducting detectors impose the use of a cryogenic system, we argue that the resulting back-end complexity increase is a favorable trade-off compared to complex and expensive room temperature submm-wave LNAs, both in performance and in system cost.

  5. Musculoskeletal applications of magnetic resonance imaging: Council on Scientific Affairs

    International Nuclear Information System (INIS)

    Harms, S.E.; Fisher, C.F.; Fulmer, J.M.

    1989-01-01

    Magnetic resonance imaging provides superior contrast, resolution, and multiplanar imaging capability, allowing excellent definition of soft-tissue and bone marrow abnormalities. For these reasons, magnetic resonance imaging has become a major diagnostic imaging method for the evaluation of many musculoskeletal disorders. The applications of magnetic resonance imaging for musculoskeletal diagnosis are summarized and examples of common clinical situations are given. General guidelines are suggested for the musculoskeletal applications of magnetic resonance imaging

  6. Improved image alignment method in application to X-ray images and biological images.

    Science.gov (United States)

    Wang, Ching-Wei; Chen, Hsiang-Chou

    2013-08-01

    Alignment of medical images is a vital component of a large number of applications throughout the clinical track of events; not only within clinical diagnostic settings, but prominently so in the area of planning, consummation and evaluation of surgical and radiotherapeutical procedures. However, image registration of medical images is challenging because of variations in data appearance, imaging artifacts and complex data deformation problems. Hence, the aim of this study is to develop a robust image alignment method for medical images. An improved image registration method is proposed, and the method is evaluated with two types of medical data, including biological microscopic tissue images and dental X-ray images, and compared with five state-of-the-art image registration techniques. The experimental results show that the presented method consistently performs well on both types of medical images, achieving 88.44 and 88.93% averaged registration accuracies for biological tissue images and X-ray images, respectively, and outperforms the benchmark methods. Based on Tukey's honestly significant difference test and Fisher's least square difference test, the presented method performs significantly better than all existing methods (P ≤ 0.001) for tissue image alignment, and for X-ray image registration, the proposed method performs significantly better than the two benchmark b-spline approaches (P < 0.001). The software implementation of the presented method and the data used in this study are made publicly available for scientific communities to use (http://www-o.ntust.edu.tw/∼cweiwang/ImprovedImageRegistration/). cweiwang@mail.ntust.edu.tw.
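
    The "averaged registration accuracy" reported above suggests a landmark-style evaluation. As one common formulation (the paper's exact definition is not given in this abstract), accuracy can be taken as the fraction of corresponding landmarks that land within a tolerance of their targets after alignment:

    ```python
    import numpy as np

    def registration_accuracy(moved_landmarks, target_landmarks, tolerance=5.0):
        """Percentage of corresponding landmarks that end up within `tolerance`
        pixels of their target positions after registration."""
        errors = np.linalg.norm(moved_landmarks - target_landmarks, axis=1)
        return 100.0 * np.mean(errors <= tolerance)

    # Toy landmark pairs (x, y) after a hypothetical registration:
    moved = np.array([[10.0, 12.0], [50.0, 48.0], [90.0, 95.0], [30.0, 70.0]])
    target = np.array([[11.0, 12.0], [50.0, 50.0], [80.0, 95.0], [30.0, 71.0]])
    print(registration_accuracy(moved, target))  # 75.0 -- one landmark is off by 10 px
    ```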

  7. Application of UV Imaging in Formulation Development.

    Science.gov (United States)

    Sun, Yu; Østergaard, Jesper

    2017-05-01

    Efficient drug delivery is dependent on the drug substance dissolving in the body fluids, being released from dosage forms and transported to the site of action. A fundamental understanding of the interplay between the physicochemical properties of the active compound and pharmaceutical excipients defining formulation behavior after exposure to the aqueous environments and pharmaceutical performance is critical in pharmaceutical development, manufacturing and quality control of drugs. UV imaging has been explored as a tool for qualitative and quantitative characterization of drug dissolution and release with the characteristic feature of providing real-time visualization of the solution phase drug transport in the vicinity of the formulation. Events occurring during drug dissolution and release, such as polymer swelling, drug precipitation/recrystallization, or solvent-mediated phase transitions related to the structural properties of the drug substance or formulation can be monitored. UV imaging is a non-intrusive and simple-to-operate analytical technique which holds potential for providing a mechanistic foundation for formulation development. This review aims to cover applications of UV imaging in the early and late phase pharmaceutical development with a special focus on the relation between structural properties and performance. Potential areas of future advancement and application are also discussed.

  8. Applications of scientific imaging in environmental toxicology

    Science.gov (United States)

    El-Demerdash, Aref M.

    The national goals of clean air, clean water, and healthy ecosystems are a few of the primary forces that drive the need for better environmental monitoring. As we approach the end of the 1990s, the environmental questions at regional to global scales are being redefined and refined in the light of developments in environmental understanding and technological capability. Research in the use of scientific imaging data for the study of the environment is urgently needed in order to explore the possibilities of utilizing emerging new technologies. The objective of this research proposal is to demonstrate the usability of a wealth of new technology made available in the last decade in providing a better understanding of environmental problems. Research is focused on two imaging techniques: macro and micro imaging. Several examples of applications of scientific imaging in research in the field of environmental toxicology were presented. This was achieved on two scales, micro and macro imaging. On the micro level four specific examples were covered. First, the effect of utilizing scanning electron microscopy as an imaging tool in enhancing taxa identification when studying diatoms was presented. Second, scanning electron microscopy combined with an energy dispersive x-ray analyzer was demonstrated as a valuable and effective tool for identifying and analyzing household dust samples. Third, electronic autoradiography combined with FT-IR microscopy was used to study the distribution pattern of [14C]-Malathion in rats as a result of dermal exposure. The results of the autoradiography made on skin sections of the application site revealed the presence of [14C]-activity in the first region of the skin. These results were evidenced by FT-IR microscopy. The obtained results suggest that the penetration of Malathion into the skin and other tissues is vehicle and dose dependent. The results also suggest the use of FT-IR microscopy imaging for monitoring the disposition of

  9. Clinical applications of cardiovascular magnetic resonance imaging

    International Nuclear Information System (INIS)

    Marcu, C.B.; Beek, A.M.; Van Rossum, A.C.

    2006-01-01

    Cardiovascular magnetic resonance imaging (MRI) has evolved from an effective research tool into a clinically proven, safe and comprehensive imaging modality. It provides anatomic and functional information in acquired and congenital heart disease and is the most precise technique for quantification of ventricular volumes, function and mass. Owing to its excellent interstudy reproducibility, cardiovascular MRI is the optimal method for assessment of changes in ventricular parameters after therapeutic intervention. Delayed contrast enhancement is an accurate and robust method used in the diagnosis of ischemic and nonischemic cardiomyopathies and less common diseases, such as cardiac sarcoidosis and myocarditis. First-pass magnetic contrast myocardial perfusion is becoming an alternative to radionuclide techniques for the detection of coronary atherosclerotic disease. In this review we outline the techniques used in cardiovascular MRI and discuss the most common clinical applications. (author)

  10. Image processing in radiology. Current applications

    International Nuclear Information System (INIS)

    Neri, E.; Caramella, D.; Bartolozzi, C.

    2008-01-01

    Few fields have witnessed such impressive advances as image processing in radiology. The progress achieved has revolutionized diagnosis and greatly facilitated treatment selection and accurate planning of procedures. This book, written by leading experts from many countries, provides a comprehensive and up-to-date description of how to use 2D and 3D processing tools in clinical radiology. The first section covers a wide range of technical aspects in an informative way. This is followed by the main section, in which the principal clinical applications are described and discussed in depth. To complete the picture, a third section focuses on various special topics. The book will be invaluable to radiologists of any subspecialty who work with CT and MRI and would like to exploit the advantages of image processing techniques. It also addresses the needs of radiographers who cooperate with clinical radiologists and should improve their ability to generate the appropriate 2D and 3D processing. (orig.)

  11. AggNet: Deep Learning From Crowds for Mitosis Detection in Breast Cancer Histology Images.

    Science.gov (United States)

    Albarqouni, Shadi; Baur, Christoph; Achilles, Felix; Belagiannis, Vasileios; Demirci, Stefanie; Navab, Nassir

    2016-05-01

    The lack of publicly available ground-truth data has been identified as the major challenge for transferring recent developments in deep learning to the biomedical imaging domain. Though crowdsourcing has enabled annotation of large-scale databases for real-world images, its application for biomedical purposes requires a deeper understanding and hence a more precise definition of the actual annotation task. The fact that expert tasks are being outsourced to non-expert users may lead to noisy annotations introducing disagreement between users. Despite being a valuable resource for learning annotation models from crowdsourcing, conventional machine-learning methods may have difficulties dealing with noisy annotations during training. In this manuscript, we present a new concept for learning from crowds that handles data aggregation directly as part of the learning process of the convolutional neural network (CNN) via an additional crowdsourcing layer (AggNet). In addition, we present an experimental study on learning from crowds designed to answer the following questions. 1) Can a deep CNN be trained with data collected from crowdsourcing? 2) How can the CNN be adapted to train on multiple types of annotation datasets (ground truth and crowd-based)? 3) How does the choice of annotation and aggregation affect the accuracy? Our experimental setup involved Annot8, a self-implemented web platform based on the Crowdflower API realizing image annotation tasks for a publicly available biomedical image database. Our results give valuable insights into the functionality of deep CNN learning from crowd annotations and prove the necessity of integrating data aggregation.
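
    The reference point that learned aggregation schemes like AggNet are compared against is simple majority voting over crowd labels. A minimal sketch with invented patch names and votes:

    ```python
    from collections import Counter

    def majority_vote(crowd_labels):
        """Baseline aggregation: per-item majority over crowd annotations.

        AggNet learns the aggregation inside the CNN; majority voting is the
        simple pre-aggregation baseline such schemes are measured against."""
        aggregated = {}
        for item, labels in crowd_labels.items():
            (label, _count), = Counter(labels).most_common(1)
            aggregated[item] = label
        return aggregated

    # Hypothetical mitosis/non-mitosis votes from three crowd workers per patch.
    votes = {
        "patch_01": ["mitosis", "mitosis", "non-mitosis"],
        "patch_02": ["non-mitosis", "non-mitosis", "non-mitosis"],
        "patch_03": ["mitosis", "non-mitosis", "mitosis"],
    }
    print(majority_vote(votes))
    ```

    The weakness the paper targets is visible even here: a confident minority expert is outvoted by noisy non-experts, which is why folding aggregation into training can help.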

  12. Annotating individual human genomes.

    Science.gov (United States)

    Torkamani, Ali; Scott-Van Zeeland, Ashley A; Topol, Eric J; Schork, Nicholas J

    2011-10-01

    Advances in DNA sequencing technologies have made it possible to rapidly, accurately and affordably sequence entire individual human genomes. As impressive as this ability seems, however, it will not likely amount to much if one cannot extract meaningful information from individual sequence data. Annotating variations within individual genomes and providing information about their biological or phenotypic impact will thus be crucially important in moving individual sequencing projects forward, especially in the context of the clinical use of sequence information. In this paper we consider the various ways in which one might annotate individual sequence variations and point out limitations in the available methods for doing so. It is arguable that, in the foreseeable future, DNA sequencing of individual genomes will become routine for clinical, research, forensic, and personal purposes. We therefore also consider directions and areas for further research in annotating genomic variants. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. ANNOTATING INDIVIDUAL HUMAN GENOMES*

    Science.gov (United States)

    Torkamani, Ali; Scott-Van Zeeland, Ashley A.; Topol, Eric J.; Schork, Nicholas J.

    2014-01-01

    Advances in DNA sequencing technologies have made it possible to rapidly, accurately and affordably sequence entire individual human genomes. As impressive as this ability seems, however, it will not likely amount to much if one cannot extract meaningful information from individual sequence data. Annotating variations within individual genomes and providing information about their biological or phenotypic impact will thus be crucially important in moving individual sequencing projects forward, especially in the context of the clinical use of sequence information. In this paper we consider the various ways in which one might annotate individual sequence variations and point out limitations in the available methods for doing so. It is arguable that, in the foreseeable future, DNA sequencing of individual genomes will become routine for clinical, research, forensic, and personal purposes. We therefore also consider directions and areas for further research in annotating genomic variants. PMID:21839162

  14. A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval.

    Science.gov (United States)

    Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev

    2010-01-01

    Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming
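
    The distance computation the framework ends on, a weighted Hamming distance over learned binary codes, is easy to sketch: each mismatched bit contributes its learned weight to the distance. The codes and weights below are illustrative only, not values from the cited framework.

    ```python
    import numpy as np

    def weighted_hamming(code_a, code_b, weights):
        """Distance between two binary codes as a weighted count of mismatched bits.

        The per-bit weights are what the boosting rounds would learn, so that
        bits encoding semantically important attributes count for more."""
        return float(np.sum(weights * (code_a != code_b)))

    a = np.array([1, 0, 1, 1, 0])
    b = np.array([1, 1, 1, 0, 0])
    w = np.array([0.5, 2.0, 1.0, 0.25, 0.1])
    print(weighted_hamming(a, b, w))  # 2.25 -- mismatches at bits 1 and 3
    ```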

  15. Annotation and Anonymity: Playful Photo-Sharing by Visiting Groups of Teenagers

    OpenAIRE

    Rowland, Duncan; Appiah-Kubi, Kwamena; Shipp, Victoria; Mortier, Richard Michael; Benford, Steve

    2015-01-01

    This paper investigates the playful photo taking behaviour of teenagers during group visits to two touristic public events (an airshow and a guided tour of a museum). These studies provide the feedback for the iterative development of a smartphone based anonymous image annotation and sharing application. The resulting implications for the design of such photo systems are examined, specifically the appropriateness of opportunistic upload for social media. Playfulness in photography has many im...

  16. Application of Java technology in radiation image processing

    International Nuclear Information System (INIS)

    Cheng Weifeng; Li Zheng; Chen Zhiqiang; Zhang Li; Gao Wenhuan

    2002-01-01

    The acquisition and processing of radiation images play an important role in modern applications of civil nuclear technology. The author analyzes the rationale of Java image processing technology, which includes Java AWT, Java 2D and JAI. In order to demonstrate the applicability of Java technology in the field of image processing, examples of the application of JAI technology in the processing of radiation images of large containers are given

  17. The effectiveness of annotated (vs. non-annotated) digital pathology slides as a teaching tool during dermatology and pathology residencies.

    Science.gov (United States)

    Marsch, Amanda F; Espiritu, Baltazar; Groth, John; Hutchens, Kelli A

    2014-06-01

    With today's technology, paraffin-embedded, hematoxylin & eosin-stained pathology slides can be scanned to generate high quality virtual slides. Using proprietary software, digital images can also be annotated with arrows, circles and boxes to highlight certain diagnostic features. Previous studies assessing digital microscopy as a teaching tool did not involve the annotation of digital images. The objective of this study was to compare the effectiveness of annotated digital pathology slides versus non-annotated digital pathology slides as a teaching tool during dermatology and pathology residencies. A study group composed of 31 dermatology and pathology residents was asked to complete an online pre-quiz consisting of 20 multiple choice style questions, each associated with a static digital pathology image. After completion, participants were given access to an online tutorial composed of digitally annotated pathology slides and subsequently asked to complete a post-quiz. A control group of 12 residents completed a non-annotated version of the tutorial. Nearly all participants in the study group improved their quiz score, with an average improvement of 17%, versus only 3% (P = 0.005) in the control group. These results support the notion that annotated digital pathology slides are superior to non-annotated slides for the purpose of resident education. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Discovering gene annotations in biomedical text databases

    Directory of Open Access Journals (Sweden)

    Ozsoyoglu Gultekin

    2008-03-01

    pattern occurrences with similar semantics. Relatively low recall performance of our pattern-based approach may be enhanced either by employing a probabilistic annotation framework based on the annotation neighbourhoods in textual data, or, alternatively, the statistical enrichment threshold may be adjusted to lower values for applications that put more value on achieving higher recall values.

  19. Annotation: The Savant Syndrome

    Science.gov (United States)

    Heaton, Pamela; Wallace, Gregory L.

    2004-01-01

    Background: Whilst interest has focused on the origin and nature of the savant syndrome for over a century, it is only within the past two decades that empirical group studies have been carried out. Methods: The following annotation briefly reviews relevant research and also attempts to address outstanding issues in this research area.…

  20. Annotating Emotions in Meetings

    NARCIS (Netherlands)

    Reidsma, Dennis; Heylen, Dirk K.J.; Ordelman, Roeland J.F.

    We present the results of two trials testing procedures for the annotation of emotion and mental state of the AMI corpus. The first procedure is an adaptation of the FeelTrace method, focusing on a continuous labelling of emotion dimensions. The second method is centered around more discrete

  1. Computational Phase Imaging for Biomedical Applications

    Science.gov (United States)

    Nguyen, Tan Huu

    laser comes at the expense of speckles, which degrade image quality. Therefore, solutions purely based on physical modeling and computations to remove these artifacts, using white-light illumination, are highly desirable. Here, using physical optics, we develop a theoretical model that accurately explains the effects of partial coherence on image information and phase information. The model is further combined with numerical processing to suppress the artifacts, and recover the correct phase information. The third topic is devoted to applying QPI to clinical applications. Traditionally, stained tissues are used in prostate cancer diagnosis. The reason is that tissue samples used in diagnosis are nearly transparent under bright field inspection if unstained. Contrast-enhanced microscopy techniques, e.g., phase contrast microscopy (PC) and differential interference contrast microscopy (DIC), can render visibility of the untagged samples with high throughput. However, since these methods are intensity-based, the contrast of acquired images varies significantly from one imaging facility to another, preventing them from being used in diagnosis. Inheriting the merits of PC, SLIM produces phase maps, which measure the refractive index of label-free samples. Unlike those intensity-based methods, the maps measured by SLIM are not affected by variation in imaging conditions, e.g., illumination, magnification, etc., allowing consistent imaging results when using SLIM across different clinical institutions. Here, we combine SLIM images with machine learning for automatic prostate cancer diagnosis. We focus on two diagnosis problems: automatic Gleason grading and cancer vs. non-cancer classification. Finally, we introduce a new imaging modality, named Gradient Light Interference Microscopy (GLIM), which is able to image through optically thick samples using low spatial coherence illumination. 
The key benefit of GLIM comes from a large numerical aperture of the condenser, which is 0.55 NA

  2. A special designed library for medical imaging applications

    International Nuclear Information System (INIS)

    Lymberopoulos, D.; Kotsopoulos, S.; Zoupas, V.; Yoldassis, N.; Spyropoulos, C.

    1994-01-01

    The present paper deals with a sophisticated and flexible library of medical purpose image processing routines. It contains modules for simple as well as advanced gray or colour image processing. This library offers powerful features for medical image processing and analysis applications, thus providing the physician with a means of analyzing and estimating medical images in order to accomplish their diagnostic procedures

  3. Near-infrared spectroscopic tissue imaging for medical applications

    Science.gov (United States)

    Demos, Stavros [Livermore, CA; Staggs, Michael C [Tracy, CA

    2006-12-12

    Near infrared imaging using elastic light scattering and tissue autofluorescence is explored for medical applications. The approach involves imaging using cross-polarized elastic light scattering and tissue autofluorescence in the Near Infra-Red (NIR) coupled with image processing and inter-image operations to differentiate human tissue components.

  4. Annotated Bibliography of EDGE2D Use

    Energy Technology Data Exchange (ETDEWEB)

    J.D. Strachan and G. Corrigan

    2005-06-24

    This annotated bibliography is intended to help EDGE2D users, and particularly new users, find existing published literature that has used EDGE2D. Our idea is that a person can find existing studies which may relate to his intended use, as well as gain ideas about other possible applications by scanning the attached tables.

  5. Annotated Bibliography of EDGE2D Use

    International Nuclear Information System (INIS)

    Strachan, J.D.; Corrigan, G.

    2005-01-01

    This annotated bibliography is intended to help EDGE2D users, and particularly new users, find existing published literature that has used EDGE2D. Our idea is that a person can find existing studies which may relate to his intended use, as well as gain ideas about other possible applications by scanning the attached tables

  6. Solar Tutorial and Annotation Resource (STAR)

    Science.gov (United States)

    Showalter, C.; Rex, R.; Hurlburt, N. E.; Zita, E. J.

    2009-12-01

    We have written a software suite designed to facilitate solar data analysis by scientists, students, and the public, anticipating enormous datasets from future instruments. Our "STAR" suite includes an interactive learning section explaining 15 classes of solar events. Users learn software tools that exploit humans' superior ability (over computers) to identify many events. Annotation tools include time slice generation to quantify loop oscillations, the interpolation of event shapes using natural cubic splines (for loops, sigmoids, and filaments) and closed cubic splines (for coronal holes). Learning these tools in an environment where examples are provided prepares new users to comfortably utilize annotation software with new data. Upon completion of our tutorial, users are presented with media of various solar events and asked to identify and annotate the images, to test their mastery of the system. Goals of the project include public input into the data analysis of very large datasets from future solar satellites, and increased public interest and knowledge about the Sun. In 2010, the Solar Dynamics Observatory (SDO) will be launched into orbit. SDO's advancements in solar telescope technology will generate a terabyte per day of high-quality data, requiring innovation in data management. While major projects develop automated feature recognition software, so that computers can complete much of the initial event tagging and analysis, still, that software cannot annotate features such as sigmoids, coronal magnetic loops, coronal dimming, etc., due to large amounts of data concentrated in relatively small areas. Previously, solar physicists manually annotated these features, but with the imminent influx of data it is unrealistic to expect specialized researchers to examine every image that computers cannot fully process. A new approach is needed to efficiently process these data. Providing analysis tools and data access to students and the public have proven
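
The natural cubic spline interpolation mentioned above (used to trace loops, sigmoids, and filaments) can be sketched as follows. This is a generic textbook implementation in Python, not the STAR code itself, and the sample points are purely illustrative.

```python
def natural_cubic_spline(xs, ys):
    """Return a callable natural cubic spline through the points (xs, ys)."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the second derivatives M at the knots,
    # with natural boundary conditions M[0] = M[n] = 0.
    a = [0.0] * (n + 1)
    b = [1.0] * (n + 1)
    c = [0.0] * (n + 1)
    d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)
    M[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def spline(x):
        # locate the interval [xs[i], xs[i+1]] containing x
        i = n - 1
        for j in range(n):
            if x <= xs[j + 1]:
                i = j
                break
        t = x - xs[i]
        slope = (ys[i + 1] - ys[i]) / h[i] - h[i] * (2.0 * M[i] + M[i + 1]) / 6.0
        return ys[i] + t * slope + t * t * M[i] / 2.0 + t ** 3 * (M[i + 1] - M[i]) / (6.0 * h[i])

    return spline

# illustrative sample points along an annotated curve
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0]
s = natural_cubic_spline(xs, ys)
print(round(s(1.0), 6))  # the spline passes through the knot (1, 1): 1.0
```

A fixed set of clicked points is thus turned into a smooth curve that can be evaluated anywhere along the annotated feature.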

  7. Electrical Resistivity Imaging for environmental applications

    International Nuclear Information System (INIS)

    Leite, O.; Bernard, J.; Vermeersch, F.

    2007-01-01

    For a few years, the evolution of measuring equipment and of interpretation software has made it possible to develop a new electrical resistivity technique called resistivity imaging, where the equipment, which includes a large number of electrodes located along a line at the same time, carries out an automatic switching of these electrodes for acquiring profiling data. The apparent resistivity pseudo-sections measured with such a technique are processed by inversion software which gives interpreted resistivity and depth values for the anomalies detected along the profile. The multi-electrode resistivity technique consists of using a multi-core cable with as many conductors (24, 48, 72, 96) as electrodes plugged into the ground at a fixed spacing, every 5 m for instance. In the resistivity meter itself are located the relays which ensure the switching of those electrodes according to a sequence of readings predefined and stored in the internal memory of the equipment. The various combinations of transmitting (A,B) and receiving (M,N) pairs of electrodes construct the mixed sounding/profiling section, with a maximum investigation depth which mainly depends on the total length of the cable. The 2D resistivity images obtained with such a multi-electrode technique are used for studying the shallow structures of the underground, located from a few tens of metres down to about one hundred metres depth; these images supply information which complements that obtained with the more traditional Vertical Electrical Sounding (VES) technique, which mainly aims at determining the depths of horizontal 1D structures from the surface down to several hundred metres depth. Several examples are presented for various types of applications: groundwater (intrusion of salt water in fresh water), geotechnics (detection of a fault in a granitic area), environment (delineation of a waste disposal area) and archaeology (discovery of an ancient tomb)

  8. Novel Metal Clusters for Imaging Applications

    KAUST Repository

    Alsaiari, Shahad K.

    2014-05-01

    During the past few years, gold nanoparticles (AuNPs) have received considerable attention in many fields due to their optical properties, photothermal effect and biocompatibility. AuNPs, particularly AuNCs and AuNRs, exhibit great potential in diagnostics and imaging. In the present study, AuNCs were used to selectively image and quantify intracellular antioxidants. It was reported by Chen et al. that the strong fluorescence of AuNCs is quenched by highly reactive oxygen species (hROS). Most applications depend on fluorescence quenching; for our project, however, we designed turn-on fluorescent sensors using AuNCs that sense antioxidants. In the presence of antioxidants, the fluorescence of the AuNCs switches on, while in their absence it immediately turns off due to the hROS effect. AuNRs were also used for cellular imaging, in which AuNRs were conjugated to Cy3-labelled molecular beacon (MB) DNA. Next, this complex was loaded into two different strains of magnetotactic bacteria (MTB). MTB were used as a targeted delivery vehicle in which magnetosomes direct the movement of the bacteria. The DNA sequence was specific to a certain sequence in mitochondria. The exposure of MTB to an alternating magnetic field (AMF) leads to an increase of temperature inside the bacteria, which destroys the cell wall, and hence the bacterial payload is released. When the MB-DNA hybridizes with the target sequence, the AuNR and Cy3 separate from each other, and the fluorescence of the Cy3 is restored.

  9. Ten steps to get started in Genome Assembly and Annotation

    Science.gov (United States)

    Dominguez Del Angel, Victoria; Hjerde, Erik; Sterck, Lieven; Capella-Gutierrez, Salvadors; Notredame, Cederic; Vinnere Pettersson, Olga; Amselem, Joelle; Bouri, Laurent; Bocs, Stephanie; Klopp, Christophe; Gibrat, Jean-Francois; Vlasova, Anna; Leskosek, Brane L.; Soler, Lucile; Binzer-Panchal, Mahesh; Lantz, Henrik

    2018-01-01

    As a part of the ELIXIR-EXCELERATE efforts in capacity building, we present here 10 steps to facilitate researchers getting started in genome assembly and genome annotation. The guidelines given are broadly applicable, intended to be stable over time, and cover all aspects from start to finish of a general assembly and annotation project. Intrinsic properties of genomes are discussed, as is the importance of using high quality DNA. Different sequencing technologies and generally applicable workflows for genome assembly are also detailed. We cover structural and functional annotation and encourage readers to also annotate transposable elements, something that is often omitted from annotation workflows. The importance of data management is stressed, and we give advice on where to submit data and how to make your results Findable, Accessible, Interoperable, and Reusable (FAIR). PMID:29568489

  10. Inverse synthetic aperture radar imaging principles, algorithms and applications

    CERN Document Server

    Chen, Victor C

    2014-01-01

    Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms and Applications is based on the latest research on ISAR imaging of moving targets and non-cooperative target recognition (NCTR). With a focus on advances and applications, this book provides readers with a working knowledge of various algorithms of ISAR imaging of targets and their implementation with MATLAB. These MATLAB algorithms will prove useful for visualizing and manipulating simulated ISAR images.

  11. Sharing Map Annotations in Small Groups: X Marks the Spot

    Science.gov (United States)

    Congleton, Ben; Cerretani, Jacqueline; Newman, Mark W.; Ackerman, Mark S.

    Advances in location-sensing technology, coupled with an increasingly pervasive wireless Internet, have made it possible (and increasingly easy) to access and share information with context of one’s geospatial location. We conducted a four-phase study, with 27 students, to explore the practices surrounding the creation, interpretation and sharing of map annotations in specific social contexts. We found that annotation authors consider multiple factors when deciding how to annotate maps, including the perceived utility to the audience and how their contributions will reflect on the image they project to others. Consumers of annotations value the novelty of information, but must be convinced of the author’s credibility. In this paper we describe our study, present the results, and discuss implications for the design of software for sharing map annotations.

  12. Application of Quantum Dots in Biological Imaging

    Directory of Open Access Journals (Sweden)

    Shan Jin

    2011-01-01

    Full Text Available Quantum dots (QDs) are a group of semiconducting nanomaterials with unique optical and electronic properties. They have distinct advantages over traditional fluorescent organic dyes in chemical and biological studies in terms of tunable emission spectra, signal brightness, photostability, and so forth. Currently, the major types of QDs are the heavy metal-containing II-VI, IV-VI, or III-V QDs. Silicon QDs and conjugated polymer dots have also been developed in order to lower the potential toxicity of the fluorescent probes for biological applications. Aqueous solubility is a common problem for all types of QDs when they are employed in biological research, such as in vitro and in vivo imaging. To circumvent this problem, ligand exchange and polymer coating are proven to be effective, besides synthesizing QDs in aqueous solutions directly. However, toxicity is another big concern, especially for in vivo studies. Ligand protection and core/shell structure can partly solve this problem. With the rapid development of QD research, new elements and new morphologies have been introduced to this area to fabricate safer and more efficient QDs for biological applications.

  13. Viewpoints on Medical Image Processing: From Science to Application.

    Science.gov (United States)

    Deserno Né Lehmann, Thomas M; Handels, Heinz; Maier-Hein Né Fritzsche, Klaus H; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-05-01

    Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to applications, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as a field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment.

  14. Viewpoints on Medical Image Processing: From Science to Application

    Science.gov (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to applications, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as a field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  15. Numerical methods in image processing for applications in jewellery industry

    OpenAIRE

    Petrla, Martin

    2016-01-01

    The presented thesis deals with a problem from the field of image processing for application in the multiple scanning of jewellery stones. The aim is to develop a method for preprocessing and subsequent mathematical registration of images in order to increase the effectiveness and reliability of the output quality control. For these purposes the thesis summarizes the mathematical definition of a digital image as well as the theoretical basis of image registration. It proposes a method adjusting every single image ...

  16. Resonance Energy Transfer Molecular Imaging Application in Biomedicine

    Directory of Open Access Journals (Sweden)

    NIE Da-hong; TANG Gang-hua

    2016-11-01

    Full Text Available Resonance energy transfer molecular imaging (RETI) can markedly improve the signal intensity and tissue-penetrating capacity of optical imaging, and has huge potential application in deep-tissue optical imaging in vivo. Resonance energy transfer (RET) is an energy transition from a donor to an acceptor that is in close proximity, including non-radiative resonance energy transfer and radiative resonance energy transfer. RETI is an optical imaging technology that is based on RET. RETI mainly comprises fluorescence resonance energy transfer imaging (FRETI), bioluminescence resonance energy transfer imaging (BRETI), chemiluminescence resonance energy transfer imaging (CRETI), and radiative resonance energy transfer imaging (RRETI). RETI is a hot field of molecular imaging research and has been widely used in the fields of biology and medicine. This review mainly focuses on RETI principles and applications in biomedicine.
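
To make the distance dependence behind Förster-type RET concrete: transfer efficiency follows E = R0^6 / (R0^6 + r^6), where r is the donor-acceptor distance and R0 is the Förster radius. A minimal sketch with an illustrative R0 of 5 nm (the value is an assumption, not from the review):

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """Forster transfer efficiency E = R0^6 / (R0^6 + r^6)."""
    return r0_nm ** 6 / (r0_nm ** 6 + r_nm ** 6)

print(fret_efficiency(5.0))             # at r = R0, efficiency is exactly 0.5
print(round(fret_efficiency(10.0), 3))  # doubling the distance -> 0.015
```

The sixth-power falloff is why RET-based readouts only report on molecules within a few nanometres of each other.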

  17. Stable image acquisition for mobile image processing applications

    Science.gov (United States)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as versatility increases over time. This leads to the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.

  18. Application of Nanoparticles in Diagnostic Imaging via ...

    African Journals Online (AJOL)

    ... ability to image quantitatively both morphological and physiological functions of the tissue, ... Several types of contrast media are used in medical imaging and they can ... which is still widely used to examine internal organs of the body and ...

  19. Application of homomorphism to secure image sharing

    Science.gov (United States)

    Islam, Naveed; Puech, William; Hayat, Khizar; Brouzet, Robert

    2011-09-01

    In this paper, we present a new approach for sharing images between l players by exploiting the additive and multiplicative homomorphic properties of two well-known public key cryptosystems, i.e. RSA and Paillier. Contrary to traditional schemes, the proposed approach employs secret sharing in a way that limits the influence of the dealer over the protocol and allows each player to participate with the help of his key-image. With the proposed approach, during the encryption step, each player encrypts his own key-image using the dealer's public key. The dealer encrypts the secret-to-be-shared image with the same public key and then the l encrypted key-images plus the encrypted to-be-shared image are multiplied homomorphically to get another encrypted image. After this step, the dealer can safely get a scrambled image which corresponds to the addition or multiplication of the l + 1 original images (l key-images plus the secret image), thanks to the additive homomorphic property of the Paillier algorithm or the multiplicative homomorphic property of the RSA algorithm. When the l players want to extract the secret image, they do not need to use keys and the dealer has no role. Indeed, with our approach, to extract the secret image, the l players need only to subtract their own key-images, in no specific order, from the scrambled image. Thus, the proposed approach provides an opportunity to use operators like multiplication on encrypted images for the development of a secure privacy-preserving protocol in the image domain. We show that it is still possible to extract a visible version of the secret image with only l-1 key-images (when one key-image is missing) or when the l key-images used for the extraction are different from the l original key-images, due to lossy compression for example. Experimental results and security analysis verify and prove that the proposed approach is secure from a cryptographic viewpoint.
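
The homomorphic multiplication at the heart of this kind of scheme can be illustrated with textbook RSA. The sketch below uses tiny, insecure parameters and single integers standing in for pixels; it demonstrates only the property the protocol relies on (the product of ciphertexts decrypts to the product of plaintexts), not the authors' full sharing protocol.

```python
# Toy demonstration of RSA's multiplicative homomorphic property.
# Textbook parameters for illustration only -- not secure.

def rsa_keygen():
    p, q = 61, 53            # small demonstration primes
    n = p * q                # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                   # public exponent, coprime with phi
    d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)
    return (e, n), (d, n)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

pub, priv = rsa_keygen()
secret_pixel = 123           # stands in for one pixel of the secret image
key_pixel = 7                # stands in for one pixel of a player's key-image

# Multiply the two ciphertexts; decryption yields the product of plaintexts.
scrambled = (encrypt(secret_pixel, pub) * encrypt(key_pixel, pub)) % pub[1]
print(decrypt(scrambled, priv))  # 861 == 123 * 7 (mod n)
```

Paillier provides the analogous additive property: multiplying Paillier ciphertexts decrypts to the sum of the plaintexts, which is the other variant the paper exploits.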

  20. Ion implantation: an annotated bibliography

    International Nuclear Information System (INIS)

    Ting, R.N.; Subramanyam, K.

    1975-10-01

    Ion implantation is a technique for introducing controlled amounts of dopants into target substrates, and has been successfully used for the manufacture of silicon semiconductor devices. Ion implantation is superior to other methods of doping such as thermal diffusion and epitaxy, in view of its advantages such as high degree of control, flexibility, and amenability to automation. This annotated bibliography of 416 references consists of journal articles, books, and conference papers in English and foreign languages published during 1973-74, on all aspects of ion implantation including range distribution and concentration profile, channeling, radiation damage and annealing, compound semiconductors, structural and electrical characterization, applications, equipment and ion sources. Earlier bibliographies on ion implantation, and national and international conferences in which papers on ion implantation were presented have also been listed separately

  1. Image enhancement technology research for army applications

    NARCIS (Netherlands)

    Schwering, P.B.W.; Kemp, R.A.W.; Schutte, K.

    2013-01-01

    Recognition and identification ranges are limited by the quality of the images. Both the received contrast and the spatial resolution determine whether objects are recognizable. Several aspects affect the image quality, first of all the sensor itself. The image quality depends on the size of the infrared

  2. Application of automatic image analysis in wood science

    Science.gov (United States)

    Charles W. McMillin

    1982-01-01

    In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...

  3. The GATO gene annotation tool for research laboratories

    Directory of Open Access Journals (Sweden)

    A. Fujita

    2005-11-01

    Full Text Available Large-scale genome projects have generated a rapidly increasing number of DNA sequences. Therefore, development of computational methods to rapidly analyze these sequences is essential for progress in genomic research. Here we present an automatic annotation system for preliminary analysis of DNA sequences. The gene annotation tool (GATO) is a Bioinformatics pipeline designed to facilitate routine functional annotation and easy access to annotated genes. It was designed in view of the frequent need of genomic researchers to access data pertaining to a common set of genes. In the GATO system, annotation is generated by querying some of the Web-accessible resources and the information is stored in a local database, which keeps a record of all previous annotation results. GATO may be accessed from anywhere through the internet or may be run locally if a large number of sequences are going to be annotated. It is implemented in PHP and Perl and may be run on any suitable Web server. Usually, installation and application of annotation systems require experience and are time consuming, but GATO is simple and practical, allowing anyone with basic skills in informatics to access it without any special training. GATO can be downloaded at [http://mariwork.iq.usp.br/gato/]. Minimum computer free space required is 2 MB.

  4. Application of cone beam computed tomography in facial imaging science

    Institute of Scientific and Technical Information of China (English)

    Zacharias Fourie; Janalt Damstra; Yijin Ren

    2012-01-01

    The use of three-dimensional (3D) methods for facial imaging has increased significantly over the past years. Traditional 2D imaging is gradually being replaced by 3D images in different disciplines, particularly in the fields of orthodontics, maxillofacial surgery, plastic and reconstructive surgery, neurosurgery and forensic sciences. In most cases, 3D facial imaging overcomes the limitations of traditional 2D methods and provides the clinician with more accurate information regarding the soft tissues and the underlying skeleton. The aim of this study was to review the types of imaging methods used for facial imaging. It is important to realize the difference between the types of 3D imaging methods, as their applications and indications may differ. Since 3D cone beam computed tomography (CBCT) imaging will play an increasingly important role in orthodontics and orthognathic surgery, special emphasis should be placed on discussing CBCT applications in facial evaluations.

  5. Annotation of Regular Polysemy

    DEFF Research Database (Denmark)

    Martinez Alonso, Hector

    Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even...

  6. MimoSA: a system for minimotif annotation

    Directory of Open Access Journals (Sweden)

    Kundeti Vamsi

    2010-06-01

    Full Text Available Abstract Background Minimotifs are short peptide sequences within one protein, which are recognized by other proteins or molecules. While there are now several minimotif databases, they are incomplete. There are reports of many minimotifs in the primary literature, which have yet to be annotated, while entirely novel minimotifs continue to be published on a weekly basis. Our recently proposed function and sequence syntax for minimotifs enables us to build a general tool that will facilitate structured annotation and management of minimotif data from the biomedical literature. Results We have built the MimoSA application for minimotif annotation. The application supports management of the Minimotif Miner database, literature tracking, and annotation of new minimotifs. MimoSA enables the visualization, organization, selection and editing functions of minimotifs and their attributes in the MnM database. For the literature components, MimoSA provides paper status tracking and scoring of papers for annotation through a freely available machine learning approach, which is based on word correlation. The paper scoring algorithm is also available as a separate program, TextMine. Form-driven annotation of minimotif attributes enables entry of new minimotifs into the MnM database. Several supporting features increase the efficiency of annotation. The layered architecture of MimoSA allows for extensibility by separating the functions of paper scoring, minimotif visualization, and database management. MimoSA is readily adaptable to other annotation efforts that manually curate literature into a MySQL database. Conclusions MimoSA is an extensible application that facilitates minimotif annotation and integrates with the Minimotif Miner database. We have built MimoSA as an application that integrates dynamic abstract scoring with a high performance relational model of minimotif syntax. MimoSA's TextMine, an efficient paper-scoring algorithm, can be used to

  7. Colour application on mammography image segmentation

    Science.gov (United States)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and it requires precise segmentation results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information as compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images as colour space is considered as a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that all segmentation with the colour map can be done successfully even for blurred and noisy images. Also, the size of the area of the abnormality region is reduced when compared to the segmentation area without the colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%) while the yellow colour map segmentation gave the largest percentage of average relative error (11.367%).
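
The Fuzzy C-means algorithm used in this study alternates between two updates: cluster centres as membership-weighted means, and memberships from relative distances to the centres. The toy sketch below clusters 1-D pixel intensities (hypothetical values, not the study's data); a real application would run on full 2-D image arrays, but the update rules are the same.

```python
import random

def fcm(data, c=2, m=2.0, iters=100):
    """Fuzzy C-means on a 1-D list of intensities. Returns (centres, memberships)."""
    random.seed(0)
    # random initial memberships u[i][k] (pixel i, cluster k), each row summing to 1
    u = [[random.random() for _ in range(c)] for _ in data]
    u = [[v / sum(row) for v in row] for row in u]
    for _ in range(iters):
        # centres are membership-weighted means (weights raised to the fuzzifier m)
        centres = []
        for k in range(c):
            num = sum((u[i][k] ** m) * x for i, x in enumerate(data))
            den = sum(u[i][k] ** m for i in range(len(data)))
            centres.append(num / den)
        # memberships from relative distances to the centres
        for i, x in enumerate(data):
            d = [abs(x - ck) + 1e-12 for ck in centres]  # epsilon avoids division by zero
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d)
    return centres, u

# two obvious intensity clusters: dark background vs. bright region
pixels = [10, 12, 11, 200, 205, 198]
centres, u = fcm(pixels)
print(sorted(round(ck, 1) for ck in centres))  # one centre near 11, one near 201
```

Unlike hard k-means, every pixel keeps a graded membership in each cluster, which is what makes the method tolerant of the blurred and noisy images mentioned above.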

  8. A special designed library for medical imaging applications

    Energy Technology Data Exchange (ETDEWEB)

    Lymberopoulos, D; Kotsopoulos, S; Zoupas, V; Yoldassis, N [Department of Electrical Engineering, University of Patras, Patras 26 110 Greece (Greece)]; Spyropoulos, C [School of Medicine, Regional University Hospital, University of Patras, Patras 26 110 Greece (Greece)]

    1994-12-31

    The present paper deals with a sophisticated and flexible library of medical-purpose image processing routines. It contains modules for simple as well as advanced gray or colour image processing. This library offers powerful features for medical image processing and analysis applications, thus providing the physician with a means of analyzing and evaluating medical images in order to accomplish diagnostic procedures. 6 refs, 1 fig.

  9. Review of diffusion tensor imaging and its application in children

    Energy Technology Data Exchange (ETDEWEB)

    Vorona, Gregory A. [Children's Hospital of Richmond at Virginia Commonwealth University, Department of Radiology, Richmond, VA (United States)]; Berman, Jeffrey I. [Children's Hospital of Philadelphia, Department of Radiology, Philadelphia, PA (United States)]

    2015-09-15

    Diffusion MRI is an imaging technique that uses the random motion of water to probe tissue microstructure. Diffusion tensor imaging (DTI) can quantitatively depict the organization and connectivity of white matter. Given the non-invasiveness of the technique, DTI has become a widely used tool for researchers and clinicians to examine the white matter of children. This review covers the basics of diffusion-weighted imaging and diffusion tensor imaging and discusses examples of their clinical application in children. (orig.)

  10. Fundamentals and applications of neutron imaging. Application part 9. Application of neutron imaging to biological research

    International Nuclear Information System (INIS)

    Kawabata, Yuji

    2007-01-01

    For radiography, the use of neutrons as a complement to X-rays is especially suitable for biological research, such as plant, wood, and medical applications, due to the enhanced sensitivity to light elements such as hydrogen, carbon, and nitrogen. The present paper introduces applications of neutron CT to the humidity (water) distribution and its variation in flowering plants such as cut carnations, observation of water movement in refrigerated chrysanthemum leaves using very cold neutrons and in cut leaves using deuterium oxide and ordinary water, measurement of water movement in sprouting cones and soy beans and in growing ginseng in the soil, and other applications such as archaeological wood immersed in a restoration solution and medical purposes. (S. Ohno)

  11. Predicting word sense annotation agreement

    DEFF Research Database (Denmark)

    Martinez Alonso, Hector; Johannsen, Anders Trærup; Lopez de Lacalle, Oier

    2015-01-01

    High agreement is a common objective when annotating data for word senses. However, a number of factors make perfect agreement impossible, e.g. the limitations of the sense inventories, the difficulty of the examples or the interpretation preferences of the annotators. Estimating potential ... agreement is thus a relevant task to supplement the evaluation of sense annotations. In this article we propose two methods to predict agreement on word-annotation instances. We experiment with a continuous representation and a three-way discretization of observed agreement. In spite of the difficulty...
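
    The observed-agreement quantity and its three-way discretization can be illustrated with a small sketch; the 0.4/0.8 cut-offs below are hypothetical, not the thresholds used in the paper:

```python
from itertools import combinations

def observed_agreement(labels):
    """Fraction of annotator pairs that chose the same sense for one instance."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def discretize(agreement, low=0.4, high=0.8):
    """Three-way discretization of agreement; cut-offs are illustrative."""
    if agreement < low:
        return "low"
    return "mid" if agreement < high else "high"

# Four annotators labelling one word instance: three agree, one dissents.
labels = ["sense1", "sense1", "sense2", "sense1"]
a = observed_agreement(labels)
print(a, discretize(a))  # 0.5 mid
```

    The prediction task is then to estimate either the continuous value or the discrete class for unseen instances from their features.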

  12. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  13. Perceptual digital imaging methods and applications

    CERN Document Server

    Lukac, Rastislav

    2012-01-01

    Visual perception is a complex process requiring interaction between the receptors in the eye that sense the stimulus and the neural system and the brain that are responsible for communicating and interpreting the sensed visual information. This process involves several physical, neural, and cognitive phenomena whose understanding is essential to design effective and computationally efficient imaging solutions. Building on advances in computer vision, image and video processing, neuroscience, and information engineering, perceptual digital imaging greatly enhances the capabilities of tradition

  14. Oncologic applications of diagnostic imaging techniques

    International Nuclear Information System (INIS)

    Forrest, L.J.; Thrall, D.E.

    1995-01-01

    Before appropriate therapy can be instituted for a cancer patient, the presence and extent of tumor must be evaluated. Deciding which imaging technique to use depends on tumor location, type, and biologic behavior. Conventional radiography provides important information at a relatively low cost compared with other imaging modalities. Ultrasound is a valuable adjunct to radiography, but does not replace it because both imaging modalities provide unique information. Nuclear medicine procedures contribute additional, unique data by providing physiological information, but specificity is lacking. Both CT and MRI provide images with exquisite anatomic detail, but availability and cost prohibit their general use

  15. Ontological Annotation with WordNet

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob; Hohimer, Ryan E.; White, Amanda M.

    2006-06-06

    Semantic Web applications require robust and accurate annotation tools that are capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with word sense disambiguation algorithms that are needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem as it offers a highly structured lexical conceptual representation that has been extensively used to develop word sense disambiguation algorithms. However, WordNet has not been designed as an ontology, and while it can be easily turned into one, the result of doing this would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable and yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.

  16. Automating Ontological Annotation with WordNet

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob L.; Hohimer, Ryan E.; White, Amanda M.

    2006-01-22

    Semantic Web applications require robust and accurate annotation tools that are capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with word sense disambiguation algorithms that are needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem as it offers a highly structured lexical conceptual representation that has been extensively used to develop word sense disambiguation algorithms. However, WordNet has not been designed as an ontology, and while it can be easily turned into one, the result of doing this would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable and yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.

  17. Phylogenetic molecular function annotation

    International Nuclear Information System (INIS)

    Engelhardt, Barbara E; Jordan, Michael I; Repo, Susanna T; Brenner, Steven E

    2009-01-01

    It is now easier to discover thousands of protein sequences in a new microbial genome than it is to biochemically characterize the specific activity of a single protein of unknown function. The molecular functions of protein sequences have typically been predicted using homology-based computational methods, which rely on the principle that homologous proteins share a similar function. However, some protein families include groups of proteins with different molecular functions. A phylogenetic approach for predicting molecular function (sometimes called 'phylogenomics') is an effective means to predict protein molecular function. These methods incorporate functional evidence from all members of a family that have functional characterizations using the evolutionary history of the protein family to make robust predictions for the uncharacterized proteins. However, they are often difficult to apply on a genome-wide scale because of the time-consuming step of reconstructing the phylogenies of each protein to be annotated. Our automated approach for function annotation using phylogeny, the SIFTER (Statistical Inference of Function Through Evolutionary Relationships) methodology, uses a statistical graphical model to compute the probabilities of molecular functions for unannotated proteins. Our benchmark tests showed that SIFTER provides accurate functional predictions on various protein families, outperforming other available methods.

  18. Applicability of compton imaging in nuclear decommissioning activities

    International Nuclear Information System (INIS)

    Ljubenov, V.Lj.; Marinkovic, P.M.

    2002-01-01

    During the decommissioning of nuclear facilities significant part of the activities is related to the radiological characterization, waste classification and management. For these purposes a relatively new imaging technique, based on information from the gamma radiation that undergoes Compton scattering, is applicable. Compton imaging systems have a number of advantages for nuclear waste characterization, such as identifying hot spots in mixed waste in order to reduce the volume of high-level waste requiring extensive treatment or long-term storage, imaging large contaminated areas and objects etc. Compton imaging also has potential applications for monitoring of production, transport and storage of nuclear materials and components. This paper discusses some system design requirements and performance specifications for these applications. The advantages of Compton imaging are compared to competing imaging techniques. (author)

  19. BOOK REVIEW: Infrared Thermal Imaging: Fundamentals, Research and Applications Infrared Thermal Imaging: Fundamentals, Research and Applications

    Science.gov (United States)

    Planinsic, Gorazd

    2011-09-01

    Ten years ago, a book with a title like this would have been interesting only to a narrow circle of specialists. Thanks to rapid advances in technology, the price of thermal imaging devices has dropped sharply, so they have, almost overnight, become accessible to a wide range of users. As the authors point out in the preface, the growth of this area has led to a paradoxical situation: there are now probably more infrared (IR) cameras sold worldwide than there are people who understand the basic physics behind them and know how to correctly interpret the colourful images obtained with these devices. My experience confirms this. When I started using an IR camera during lectures on the didactics of physics, I soon realized that I needed more knowledge, which I later found in this book. A wide range of potential readers and topical areas provides a good motive for writing a book such as this one, but it also represents a major challenge for authors, as compromises in the style of writing and choice of topics are required. The authors of this book have successfully achieved this, and indeed done an excellent job. The book addresses a wide range of readers, from engineers, technicians, and physics and science teachers in schools and universities, to researchers and specialists who are professionally active in the field. As technology in this area has made great progress in recent times, the book is also a valuable guide for those who opt to purchase an infrared camera. Its chapters can be divided into three areas: the fundamentals of IR thermal imaging and related physics (two chapters); IR imaging systems and methods (two chapters); and applications (six chapters), covering pedagogical applications, IR imaging of buildings and infrastructure, industrial applications, microsystems, selected topics in research and industry, and selected applications from other fields.
All chapters contain numerous colour pictures and diagrams, and a rich list of relevant

  20. Application of stereo-imaging technology to medical field.

    Science.gov (United States)

    Nam, Kyoung Won; Park, Jeongyun; Kim, In Young; Kim, Kwang Gi

    2012-09-01

    There has been continuous development in the area of stereoscopic medical imaging devices, and many stereoscopic imaging devices have been realized and applied in the medical field. In this article, we review past and current trends pertaining to the application of stereo-imaging technologies in the medical field. We describe the basic principles of stereo vision and visual issues related to it, including visual discomfort, binocular disparities, vergence-accommodation mismatch, and visual fatigue. We also present a brief history of medical applications of stereo-imaging techniques, examples of recently developed stereoscopic medical devices, and patent application trends as they pertain to stereo-imaging medical devices. Three-dimensional (3D) stereo-imaging technology can provide more realistic depth perception to the viewer than conventional two-dimensional imaging technology. Therefore, it allows for a more accurate understanding and analysis of the morphology of an object. Based on these advantages, the significance of stereoscopic imaging in the medical field grows with the increasing number of laparoscopic surgeries, and stereo-imaging technology plays a key role in the diagnosis of the detailed morphologies of small biological specimens. The application of 3D stereo-imaging technology to the medical field will help improve surgical accuracy, reduce operation times, and enhance patient safety. Therefore, it is important to develop more enhanced stereoscopic medical devices.
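
    The depth perception mentioned above rests on binocular disparity: under the standard pinhole stereo model, depth equals focal length times baseline divided by disparity. A minimal sketch (the rig parameters are illustrative, not taken from any cited device):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Pinhole stereo model: depth = focal length * baseline / disparity.
    Focal length in pixels and baseline in mm give depth in mm."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Illustrative rig: focal length 800 px, 5 mm baseline, 20 px disparity.
print(depth_from_disparity(800, 5.0, 20.0))  # 200.0 (mm)
```

    The inverse relationship is why small disparities (distant objects) yield coarse depth estimates, one source of the depth-resolution limits discussed in stereo-endoscopy work.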

  1. Applications of stochastic geometry in image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Kendall, W.S.; Molchanov, I.S.

    2009-01-01

    A discussion is given of various stochastic geometry models (random fields, sequential object processes, polygonal field models) which can be used in intermediate and high-level image analysis. Two examples are presented of actual image analysis problems (motion tracking in video,

  2. Field application of feature-enhanced imaging

    International Nuclear Information System (INIS)

    Mucciardi, A.N.

    1988-01-01

    One of the more challenging ultrasonic inspection problems is bimetallic weld inspection or, in general, dissimilar metal welds. These types of welds involve complicated geometries and various mixtures of materials. Attempts to address this problem with imaging alone have fallen short of desired goals. The probable reason for this is the lack of information supplied by imaging systems, which are limited to amplitude and time displays. Having RF information available for analysis greatly enhances the information obtainable from dissimilar metal welds and, coupled with the spatial map generated by an imaging system, can significantly improve the reliability of dissimilar metal weld inspections. Ultra Image and TestPro are, respectively, an imaging system and a feature-based signal analysis system. The purpose of this project is to integrate these two systems to produce a feature-enhanced imaging system. This means that a software link is established between Ultra Image and the PC-based TestPro system so that the user of the combined system can perform all the usual imaging functions and also have available a wide variety of RF signal analysis functions. The analysis functions include waveform feature-based pattern recognition as well as artificial intelligence/expert system techniques

  3. Imaging-Genetics Applications in Child Psychiatry

    Science.gov (United States)

    Pine, Daniel S.; Ernst, Monique; Leibenluft, Ellen

    2010-01-01

    Objective: To place imaging-genetics research in the context of child psychiatry. Method: A conceptual overview is provided, followed by discussion of specific research examples. Results: Imaging-genetics research is described linking brain function to two specific genes, for the serotonin-reuptake-transporter protein and a monoamine oxidase…

  4. Image acquisition system for traffic monitoring applications

    Science.gov (United States)

    Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben

    1995-03-01

    An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling up to 160 km/hr. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote-site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote-site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g. cars from trucks). The vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high-resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle 'head-on' in both day and night conditions. The image clarity obtained is ideally suited for automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods which permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The image capture of high-resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed. The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of producing automatic

  5. Interferometric Imaging and its Application to 4D Imaging

    KAUST Repository

    Sinha, Mrinal

    2018-03-01

    This thesis describes new interferometric imaging methods for migration and waveform inversion. The key idea is to use reflection events from a known reference reflector to "naturally redatum" the receivers and sources to the reference reflector. Here, "natural redatuming" is a data-driven process where the redatuming Green's functions are obtained from the data. Interferometric imaging eliminates the statics associated with the noisy overburden above the reference reflector. To mitigate the defocussing caused by overburden errors I first propose the use of interferometric least-squares migration (ILSM) to estimate the migration image. Here, a known reflector is used as the reference interface for ILSM, and the data are naturally redatumed to this reference interface before imaging. Numerical results on synthetic and field data show that ILSM can significantly reduce the defocussing artifacts in the migration image. Next, I develop a waveform tomography approach for inverting the velocity model by mitigating the velocity errors in the overburden. Unresolved velocity errors in the overburden velocity model can cause conventional full-waveform inversion to get stuck in a local minimum. To resolve this problem, I present interferometric full-waveform inversion (IFWI), where conventional waveform tomography is reformulated so that a velocity model is found that minimizes an objective function with an interferometric crosscorrelogram misfit. Numerical examples show that IFWI, compared to FWI, computes a significantly more accurate velocity model in the presence of a near-surface with unknown velocity anomalies. I use IFWI and ILSM for 4D imaging, where seismic data are recorded at different times over the same reservoir. To eliminate the time-varying effects of the near surface, both data sets are virtually redatumed to a common reference interface before migration. This largely eliminates the overburden-induced statics errors in both data sets. Results with
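
    The crosscorrelation at the heart of this redatuming can be sketched on synthetic traces: a static time shift shared by the reference-reflector event and a deeper event cancels in the correlation lag, leaving only the relative traveltime. This is a toy illustration of the principle, not the thesis's implementation; all parameters are assumed:

```python
import numpy as np

def crosscorrelogram(trace, reference):
    """Crosscorrelate a trace with the reference-reflector event.
    A static shift common to both cancels in the correlation lag."""
    return np.correlate(trace, reference, mode="full")

n, shift = 200, 7                                     # samples; overburden static
t = np.arange(n)
ref = np.exp(-0.5 * ((t - 50 - shift) / 3.0) ** 2)    # reference event
deep = np.exp(-0.5 * ((t - 120 - shift) / 3.0) ** 2)  # deeper event, same static
xc = crosscorrelogram(deep, ref)
lag = xc.argmax() - (n - 1)                           # peak lag of the correlogram
print(lag)  # 70: the static cancels, leaving the true 120 - 50 sample delay
```

    The interferometric misfit in IFWI compares such crosscorrelograms between observed and predicted data, so unknown overburden statics no longer bias the inversion.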

  6. Pixel Detectors for Particle Physics and Imaging Applications

    CERN Document Server

    Wermes, N

    2003-01-01

    Semiconductor pixel detectors offer features for the detection of radiation which are interesting for particle physics detectors as well as for imaging, e.g. in biomedical applications (radiography, autoradiography, protein crystallography) or in X-ray astronomy. At the present time hybrid pixel detectors are technologically mastered to a large extent and large-scale particle detectors are being built. Although the physical requirements are often quite different, imaging applications are emerging and interesting prototype results are available. Monolithic detectors, however, offer interesting features for both fields in future applications. The state of development of hybrid and monolithic pixel detectors, excluding CCDs, and their different suitability for particle detection and imaging, is reviewed.

  7. Mesotext. Framing and exploring annotations

    NARCIS (Netherlands)

    Boot, P.; Boot, P.; Stronks, E.

    2007-01-01

    From the introduction: Annotation is an important item on the wish list for digital scholarly tools. It is one of John Unsworth’s primitives of scholarship (Unsworth 2000). Especially in linguistics,a number of tools have been developed that facilitate the creation of annotations to source material

  8. THE DIMENSIONS OF COMPOSITION ANNOTATION.

    Science.gov (United States)

    MCCOLLY, WILLIAM

    English teacher annotations were studied to determine the dimensions and properties of the entire system for writing corrections and criticisms on compositions. Four sets of compositions were written by students in grades 9 through 13. Typescripts of the compositions were annotated by classroom English teachers. Then, 32 English teachers judged…

  9. Application of radionuclide imaging in hyperparathyroidism

    International Nuclear Information System (INIS)

    Zheng Yumin; Yan Jue

    2011-01-01

    Hyperparathyroidism (HPT) is overactivity of the parathyroid glands resulting in excess production of parathyroid hormone. Excessive parathyroid hormone secretion may be due to problems in the glands themselves, or may represent secondary HPT. The diagnosis is mainly based on the patient's medical history and biochemical tests. The best treatment nowadays is surgical removal of the overactive parathyroid glands or adenoma. The imaging methods for preoperative localization include radionuclide imaging, ultrasonography, CT, MRI, etc. This article is a summary of HPT radionuclide imaging. (authors)

  10. Functional mesoporous silica nanoparticles for bio-imaging applications.

    Science.gov (United States)

    Cha, Bong Geun; Kim, Jaeyun

    2018-03-22

    Biomedical investigations using mesoporous silica nanoparticles (MSNs) have received significant attention because of their unique properties including controllable mesoporous structure, high specific surface area, large pore volume, and tunable particle size. These unique features make MSNs suitable for simultaneous diagnosis and therapy with unique advantages to encapsulate and load a variety of therapeutic agents, deliver these agents to the desired location, and release the drugs in a controlled manner. Among various clinical areas, nanomaterials-based bio-imaging techniques have advanced rapidly with the development of diverse functional nanoparticles. Due to the unique features of MSNs, an imaging agent supported by MSNs can be a promising system for developing targeted bio-imaging contrast agents with high structural stability and enhanced functionality that enable imaging of various modalities. Here, we review the recent achievements on the development of functional MSNs for bio-imaging applications, including optical imaging, magnetic resonance imaging (MRI), positron emission tomography (PET), computed tomography (CT), ultrasound imaging, and multimodal imaging for early diagnosis. With further improvement in noninvasive bio-imaging techniques, the MSN-supported imaging agent systems are expected to contribute to clinical applications in the future. This article is categorized under: Diagnostic Tools > In vivo Nanodiagnostics and Imaging Nanotechnology Approaches to Biology > Nanoscale Systems in Biology. © 2018 Wiley Periodicals, Inc.

  11. Development and application of high energy imaging technology

    International Nuclear Information System (INIS)

    Chen Shengzu

    1999-01-01

    High Energy Positron Imaging (HEPI) is a new technology. The idea of positron imaging can be traced back to the early 1950s. HEPI images are formed using positron-emitting radionuclides produced by cyclotrons, such as ¹⁵O, ¹³N, ¹¹C and ¹⁸F, which are among the most abundant elements in the human body. Clinical applications of HEPI have grown rapidly in recent years. HEPI images can be obtained by both PET and SPECT, namely high energy collimation imaging, Molecular Coincidence Detection (MCD) and positron emission tomography

  12. Multi-sensor image fusion and its applications

    CERN Document Server

    Blum, Rick S

    2005-01-01

    Taking another lesson from nature, the latest advances in image processing technology seek to combine image data from several diverse types of sensors in order to obtain a more accurate view of the scene: very much the same as we rely on our five senses. Multi-Sensor Image Fusion and Its Applications is the first text dedicated to the theory and practice of the registration and fusion of image data, covering such approaches as statistical methods, color-related techniques, model-based methods, and visual information display strategies.After a review of state-of-the-art image fusion techniques,

  13. Gynecologic imaging: Current and emerging applications

    Directory of Open Access Journals (Sweden)

    Iyer V

    2010-01-01

    Full Text Available Common diagnostic challenges in gynecology and the role of imaging in their evaluation are reviewed. Etiologies of abnormal uterine bleeding identified on pelvic sonography and sonohysterography are presented. An algorithmic approach for characterizing an incidentally detected adnexal mass and use of magnetic resonance imaging for definitive diagnosis are discussed. Finally, the role of F18-fluorodeoxyglucose positron emission tomography in the management of gynecological malignancies, and pitfalls associated with their use are examined.

  14. Magnetic particle imaging: from proof of principle to preclinical applications

    Science.gov (United States)

    Knopp, T.; Gdaniec, N.; Möddel, M.

    2017-07-01

    Tomographic imaging has become a mandatory tool for the diagnosis of a majority of diseases in clinical routine. Since each method has its pros and cons, a variety of them is regularly used in clinics to satisfy all application needs. Magnetic particle imaging (MPI) is a relatively new tomographic imaging technique that images magnetic nanoparticles with a high spatiotemporal resolution in a quantitative way, and in turn is highly suited for vascular and targeted imaging. MPI was introduced in 2005 and now enters the preclinical research phase, where medical researchers get access to this new technology and exploit its potential under physiological conditions. Within this paper, we review the development of MPI since its introduction in 2005. Besides an in-depth description of the basic principles, we provide detailed discussions on imaging sequences, reconstruction algorithms, scanner instrumentation and potential medical applications.

  15. Infrared hyperspectral imaging miniaturized for UAV applications

    Science.gov (United States)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-02-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to the infrared hyperspectral imaging spectrometer uses micro-optics, which will be explained in this paper. The micro-optics are made up of an area array of diffractive optical elements where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our opto-mechanical design approach, which results in an infrared hyperspectral imaging system that is small enough to be a payload on a mini-UAV or commercial quadcopter, and gives an example of how this technology can easily be used to quantify a hydrocarbon gas leak's volume and mass flow rates. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the spatial resolution. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4
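
    The resolution trade-off stated above (a 2 x 2 lenslet array dividing a 512 x 512 focal plane into four 256 x 256 spectral images) is simple arithmetic; a small helper makes it explicit (the function is illustrative, not PAT's software):

```python
def lenslet_resolution(fpa_pixels, lenslets_per_side):
    """Per-band spatial resolution when an n x n lenslet array divides a
    square focal plane array among n*n spectral bands.
    Returns (width_px, height_px, number_of_bands)."""
    side = fpa_pixels // lenslets_per_side
    bands = lenslets_per_side ** 2
    return side, side, bands

print(lenslet_resolution(512, 2))  # (256, 256, 4)
print(lenslet_resolution(512, 4))  # (128, 128, 16)
```

    The trade is direct: quadrupling the number of simultaneous spectral bands halves the linear spatial resolution of each band.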

  16. Field Imaging Spectroscopy. Applications in Earthquake Geology

    Science.gov (United States)

    Ragona, D.; Minster, B.; Rockwell, T. K.; Fialko, Y.; Jussila, J.; Blom, R.

    2005-12-01

    Field Imaging Spectroscopy in the visible and infrared sections of the spectrum can be used as a technique to assist paleoseismological studies. Submeter-range hyperspectral images of paleoseismic excavations can assist the analysis and interpretation of the earthquake history of a site. They also provide an excellent platform for storage of the stratigraphic and structural information collected from such a site. At present, most field data are collected descriptively, with the descriptions documented on hand-drawn field logs and/or photomosaics constructed from individual photographs. Recently developed portable hyperspectral sensors acquire high-quality spectroscopic information at high spatial resolution (pixel size ~ 0.5 mm at 50 cm) over frequencies ranging from the visible band to short-wave infrared, greatly enhancing the range of information that can be recorded in the field. The new data collection and interpretation methodology that we are developing (Field Imaging Spectroscopy) makes available, for the first time, a tool to quantitatively analyze paleoseismic and stratigraphic information. The reflectance spectra of each sub-millimeter portion of the material are stored in a 3-D matrix (hyperspectral cube) that can be analyzed by visual inspection or by using a large variety of algorithms. The reflectance spectrum is related to the chemical composition and physical properties of the surface; therefore, hyperspectral images are capable of revealing subtle changes in texture, composition, and weathering. For paleoseismic studies, we are primarily interested in distinguishing changes between layers at a given site (spectral stratigraphy) rather than the precise composition of the layers, although this is an added benefit. We have experimented with push-broom (panoramic) portable scanners, and acquired data from portions of fault exposures and cores. These images were processed using well-known image processing algorithms, and the results have been

  17. Processing Infrared Images For Fire Management Applications

    Science.gov (United States)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high-resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8-bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid-state memory to refresh the display at a conventional 30 frames per second rate. Line-length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display, and fire perimeters can be plotted on maps. The performance requirements, basic system, and image processing will be described.
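The downlink figures in this record imply a per-frame transmission time that explains why individual frames are buffered and a separate memory refreshes the display at 30 fps; a quick back-of-envelope sketch (function name is mine):

```python
def frame_transmit_seconds(width, height, bits_per_pixel, link_bps):
    """Time to send one uncompressed frame over a serial downlink."""
    return width * height * bits_per_pixel / link_bps

# 1024 x 1024 x 8-bit frame over the 1.544 Mbit/s link described above:
# roughly 5.4 seconds per frame, far slower than the 30 fps display rate,
# hence the store-then-refresh architecture.
t = frame_transmit_seconds(1024, 1024, 8, 1.544e6)
print(round(t, 2))
```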

  18. Comparison of concept recognizers for building the Open Biomedical Annotator

    Directory of Open Access Journals (Sweden)

    Rubin Daniel

    2009-09-01

    Full Text Available Abstract The National Center for Biomedical Ontology (NCBO) is developing a system for automated, ontology-based access to online biomedical resources (Shah NH, et al.: Ontology-driven indexing of public datasets for translational bioinformatics. BMC Bioinformatics 2009, 10(Suppl 2):S1). The system's indexing workflow processes the text metadata of diverse resources such as datasets from GEO and ArrayExpress to annotate and index them with concepts from appropriate ontologies. This indexing requires the use of a concept-recognition tool to identify ontology concepts in the resource's textual metadata. In this paper, we present a comparison of two concept recognizers – NLM's MetaMap and the University of Michigan's Mgrep. We utilize a number of data sources and dictionaries to evaluate the concept recognizers in terms of precision, recall, speed of execution, scalability, and customizability. Our evaluations demonstrate that Mgrep has a clear edge over MetaMap for large-scale service-oriented applications. Based on our analysis, we also suggest areas of potential improvement for Mgrep. We have subsequently used Mgrep to build the Open Biomedical Annotator service. The Annotator service has access to a large dictionary of biomedical terms derived from the Unified Medical Language System (UMLS) and NCBO ontologies. The Annotator also leverages the hierarchical structure of the ontologies and their mappings to expand annotations. The Annotator service is available to the community as a REST Web service for creating ontology-based annotations of their data.
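The precision/recall comparison this record describes reduces to set operations over (document, concept) annotation pairs; a minimal sketch with made-up concepts (the data and function name are illustrative, not from the paper):

```python
def precision_recall(predicted, gold):
    """Precision and recall of a concept recognizer's output against a
    gold standard, both given as sets of (text_id, concept) pairs."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

gold = {(1, "melanoma"), (1, "skin"), (2, "carcinoma")}
pred = {(1, "melanoma"), (2, "carcinoma"), (2, "lung")}
print(precision_recall(pred, gold))  # (2/3, 2/3): two of three hits each way
```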

  19. Multimodal imaging of bone metastases: From preclinical to clinical applications

    Directory of Open Access Journals (Sweden)

    Stephan Ellmann

    2015-10-01

    Full Text Available Metastases to the skeletal system are commonly observed in cancer patients and severely affect the patients' quality of life. Imaging plays a major role in the detection, follow-up, and molecular characterisation of metastatic disease. Thus, imaging techniques have been optimised and combined in a multimodal and multiparametric manner for the assessment of complementary aspects of osseous metastases. This review summarises the application of the most relevant imaging techniques for bone metastasis in both preclinical models and the clinical setting.

  20. Remote Sensing Image in the Application of Agricultural Tourism Planning

    Directory of Open Access Journals (Sweden)

    Guojing Fan

    2013-06-01

    Full Text Available This paper introduces the processing technology for high-resolution remote sensing imagery, the process of producing tourism maps, and the key applications of different remote sensing data in tourism planning. Remote sensing extracts information for agricultural tourism planning, improving the scientific soundness and operability of such planning. The application of remote sensing imagery in agricultural tourism planning will therefore be an inevitable trend in tourism development.

  1. Graphical User Interfaces for Volume Rendering Applications in Medical Imaging

    OpenAIRE

    Lindfors, Lisa; Lindmark, Hanna

    2002-01-01

    Volume rendering applications are used in medical imaging in order to facilitate the analysis of three-dimensional image data. This study focuses on how to improve the usability of graphical user interfaces of these systems, by gathering user requirements. This is achieved by evaluations of existing systems, together with interviews and observations at clinics in Sweden that use volume rendering to some extent. The usability of the applications of today is not sufficient, according to the use...

  2. VIS/NIR imaging application for honey floral origin determination

    NARCIS (Netherlands)

    Minaei, Saeid; Shafiee, Sahameh; Polder, Gerrit; Moghadam-Charkari, Nasrolah; Ruth, van Saskia; Barzegar, Mohsen; Zahiri, Javad; Alewijn, Martin; Kuś, Piotr M.

    2017-01-01

    Nondestructive methods are of utmost importance for honey characterization. This study investigates the potential application of VIS-NIR hyperspectral imaging for detection of honey flower origin using machine learning techniques. Hyperspectral images of 52 honey samples were taken in

  3. Efficient Image Blur in Web-Based Applications

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Scripting languages require the use of high-level library functions to implement efficient image processing; thus, real-time image blur in web-based applications is a challenging task unless specific library functions are available for this purpose. We present a pyramid blur algorithm, which can ...
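A pyramid blur of the kind this record names works by repeatedly averaging down to a coarse level and expanding back up, which approximates a wide blur with only cheap per-level operations. A 1D sketch under my own naming (the paper targets web scripting; plain Python is used here for clarity):

```python
def downsample(signal):
    """Halve resolution by averaging adjacent pairs (one pyramid level)."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def upsample(signal):
    """Double resolution by duplicating samples (nearest-neighbour)."""
    return [v for v in signal for _ in range(2)]

def pyramid_blur(signal, levels):
    """Approximate a wide blur by descending then ascending the pyramid;
    each extra level roughly doubles the effective blur radius."""
    for _ in range(levels):
        signal = downsample(signal)
    for _ in range(levels):
        signal = upsample(signal)
    return signal

# Two levels flatten this alternating signal to its mean
print(pyramid_blur([0, 0, 8, 8, 0, 0, 8, 8], 2))  # [4.0] * 8
```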

  4. Neural networks: Application to medical imaging

    Science.gov (United States)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  5. Multivendor Spectral-Domain Optical Coherence Tomography Dataset, Observer Annotation Performance Evaluation, and Standardized Evaluation Framework for Intraretinal Cystoid Fluid Segmentation

    Directory of Open Access Journals (Sweden)

    Jing Wu

    2016-01-01

    Full Text Available Development of image analysis and machine learning methods for segmentation of clinically significant pathology in retinal spectral-domain optical coherence tomography (SD-OCT), used in disease detection and prediction, is limited by the scarcity of expertly annotated reference data. Retinal segmentation methods use datasets that either are not publicly available, come from only one device, or use different evaluation methodologies, making them difficult to compare. We therefore present and evaluate a multiple-expert annotated reference dataset for the problem of intraretinal cystoid fluid (IRF) segmentation, a key indicator in exudative macular disease. In addition, a standardized framework for segmentation accuracy evaluation, applicable to other pathological structures, is presented. Integral to this work is the dataset used, which must be fit for purpose for IRF segmentation algorithm training and testing. We describe here a multivendor dataset comprising 30 scans. Each OCT scan for system training has been annotated by multiple graders using a proprietary system. Evaluation of the intergrader annotations shows good correlation, thus making the reproducibly annotated scans suitable for the training and validation of image processing and machine learning based segmentation methods. The dataset will be made publicly available in the form of a segmentation Grand Challenge.
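Intergrader agreement and segmentation accuracy of the kind this record evaluates are commonly measured with overlap scores such as the Dice coefficient; a minimal sketch (the paper does not specify this exact measure, so this is an illustrative standard choice):

```python
def dice(seg_a, seg_b):
    """Dice overlap between two binary segmentations given as sets of
    pixel coordinates: 1.0 is perfect agreement, 0.0 no overlap."""
    if not seg_a and not seg_b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(seg_a & seg_b) / (len(seg_a) + len(seg_b))

# Two graders' IRF masks differing by one pixel each
grader_1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
grader_2 = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(grader_1, grader_2))  # 0.75
```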

  6. Nuclear cardiac imaging: Principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Iskandrian, A.S.

    1987-01-01

    This book is divided into 11 chapters. The first three provide a short description of the instrumentation, radiopharmaceuticals, and imaging techniques used in nuclear cardiology. Chapter 4 discusses exercise testing. Chapter 5 gives the theory, technical aspects, and interpretations of thallium-201 myocardial imaging and radionuclide ventriculography. The remaining chapters discuss the use of these techniques in patients with coronary artery disease, acute myocardial infarction, valvular heart disease, and other forms of cardiac disease. The author intended to emphasize the implications of nuclear cardiology procedures on patient care management and to provide a comprehensive bibliography.

  7. Nuclear cardiac imaging: Principles and applications

    International Nuclear Information System (INIS)

    Iskandrian, A.S.

    1987-01-01

    This book is divided into 11 chapters. The first three provide a short description of the instrumentation, radiopharmaceuticals, and imaging techniques used in nuclear cardiology. Chapter 4 discusses exercise testing. Chapter 5 gives the theory, technical aspects, and interpretations of thallium-201 myocardial imaging and radionuclide ventriculography. The remaining chapters discuss the use of these techniques in patients with coronary artery disease, acute myocardial infarction, valvular heart disease, and other forms of cardiac disease. The author intended to emphasize the implications of nuclear cardiology procedures on patient care management and to provide a comprehensive bibliography

  8. Theory and Application of Image Enhancement

    Science.gov (United States)

    1994-02-01


  9. Model and Interoperability using Meta Data Annotations

    Science.gov (United States)

    David, O.

    2011-12-01

    Software frameworks and architectures are in need of meta data to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing meta data, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings; however, the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach to meta data representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programming language features known as Annotations or Attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interface (API) calls, is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies and physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to its originating code. Since models and

  10. Light field imaging and application analysis in THz

    Science.gov (United States)

    Zhang, Hongfei; Su, Bo; He, Jingsuo; Zhang, Cong; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin

    2018-01-01

    The light field includes both direction and location information. Light field imaging can capture the whole light field in a single exposure. The four-dimensional light field function model represented by two-plane parameterization, proposed by Levoy, is adopted for the light field. Light field acquisition is based on microlens arrays, camera arrays, or masks. We process the light field data to synthesize light field images. The processing techniques for light field data include refocused rendering, synthetic aperture, and microscopic imaging. When light field imaging is introduced into THz, the efficiency of 3D imaging is higher than that of conventional THz 3D imaging technology. Its advantages compared with visible light field imaging include large depth of field, wide dynamic range, and true three-dimensional imaging. It has broad application prospects.
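The refocused rendering this record mentions is classically a shift-and-add over the angular dimension of the light field; a toy 1D sketch under the two-plane parameterization (the tiny light field and function name here are mine, for illustration only):

```python
def refocus(lightfield, shift):
    """Shift-and-add refocusing of a toy light field: lightfield[u][x]
    holds a 1D sub-aperture view for angular index u. Shifting each
    view by u * shift pixels before averaging moves the focal plane."""
    views = len(lightfield)
    width = len(lightfield[0])
    out = []
    for x in range(width):
        vals = [lightfield[u][(x + u * shift) % width] for u in range(views)]
        out.append(sum(vals) / views)
    return out

# Two views of a point displaced by 1 pixel of parallax:
field = [[0, 9, 0, 0],
         [0, 0, 9, 0]]
print(refocus(field, 1))  # views align: the point comes into focus
print(refocus(field, 0))  # wrong focal plane: energy is smeared
```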

  11. Terahertz Imaging for Biomedical Applications Pattern Recognition and Tomographic Reconstruction

    CERN Document Server

    Yin, Xiaoxia; Abbott, Derek

    2012-01-01

    Terahertz Imaging for Biomedical Applications: Pattern Recognition and Tomographic Reconstruction presents the algorithms needed to assist screening, diagnosis, and treatment; these algorithms will play a critical role in the accurate detection of abnormalities present in biomedical imaging. Terahertz biomedical imaging has become an area of interest due to its ability to simultaneously acquire both image and spectral information. Terahertz imaging systems are being commercialized, with an increasing number of trials performed in a biomedical setting. Terahertz tomographic imaging and detection technology contributes to the ability to identify opaque objects with clear boundaries, and would be useful in both in vivo and ex vivo environments. This book also: Introduces terahertz radiation techniques and provides a number of topical examples of signal and image processing, as well as machine learning Presents the most recent developments in an emerging field, terahertz radiation Utilizes new methods...

  12. 3D integration technologies for imaging applications

    International Nuclear Information System (INIS)

    Moor, Piet de

    2008-01-01

    The aim of this paper is to give an overview of micro-electronic technologies under development today, and how they are impacting on the radiation detection and imaging of tomorrow. After a short introduction, the different enabling technologies will be discussed. Finally, a few examples of ongoing developments at IMEC on advanced detector systems will be given

  13. Thermoelectric infrared imager and automotive applications

    Science.gov (United States)

    Hirota, Masaki; Satou, Fuminori; Saito, Masanori; Kishi, Youichi; Nakajima, Yasushi; Uchiyama, Makato

    2001-10-01

    This paper describes a newly developed thermoelectric infrared imager having a 48 X 32 element thermoelectric focal plane array (FPA) and an experimental vehicle featuring a blind spot pedestrian warning system, which employs four infrared imagers. The imager measures 100 mm in width, 60 mm in height and 80 mm in depth, weighs 400 g, and has an overall field of view (FOV) of 40 deg X 20 deg. The power consumption of the imager is 3 W. The pedestrian detection program is stored in a CPU chip on a printed circuit board (PCB). The FPA provides high responsivity of 2,100 V/W, a time constant of 25 msec, and a low cost potential. Each element has external dimensions of 190 μm x 190 μm, and consists of six pairs of thermocouples and an Au-black absorber that is precisely patterned by low-pressure evaporation and lift-off technologies. The experimental vehicle is called the Nissan ASV-2 (Advanced Safety Vehicle-2), which incorporates a wide range of integrated technologies aimed at reducing traffic accidents. The blind spot pedestrian warning system alerts the driver to the presence of a pedestrian in a blind spot by detecting the infrared radiation emitted from the person's body. This system also prevents the vehicle from moving in the direction of the pedestrian.

  14. Jannovar: a java library for exome annotation.

    Science.gov (United States)

    Jäger, Marten; Wang, Kai; Bauer, Sebastian; Smedley, Damian; Krawitz, Peter; Robinson, Peter N

    2014-05-01

    Transcript-based annotation and pedigree analysis are two basic steps in the computational analysis of whole-exome sequencing experiments in genetic diagnostics and disease-gene discovery projects. Here, we present Jannovar, a stand-alone Java application as well as a Java library designed to be used in larger software frameworks for exome and genome analysis. Jannovar uses an interval tree to identify all transcripts affected by a given variant, and provides Human Genome Variation Society-compliant annotations both for variants affecting coding sequences and splice junctions as well as untranslated regions and noncoding RNA transcripts. Jannovar can also perform family-based pedigree analysis with Variant Call Format (VCF) files with data from members of a family segregating a Mendelian disorder. Using a desktop computer, Jannovar requires a few seconds to annotate a typical VCF file with exome data. Jannovar is freely available under the BSD2 license. Source code as well as the Java application and library file can be downloaded from http://compbio.charite.de (with tutorial) and https://github.com/charite/jannovar. © 2014 WILEY PERIODICALS, INC.
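The core query this record describes, finding all transcripts whose genomic interval contains a variant position, is what Jannovar answers with an interval tree; a linear scan over hypothetical transcripts (names and coordinates made up) illustrates the semantics without the tree's O(log n) lookup:

```python
class Transcript:
    """Minimal stand-in for a transcript record: name plus genomic span."""
    def __init__(self, name, start, end):
        self.name, self.start, self.end = name, start, end

def transcripts_overlapping(transcripts, pos):
    """All transcripts whose interval contains the variant position.
    (Jannovar uses an interval tree for this; the linear scan here is
    only to show what the query computes.)"""
    return [t.name for t in transcripts if t.start <= pos <= t.end]

# Hypothetical transcripts; a variant at position 450 hits the two
# overlapping ones and is annotated against each.
txs = [Transcript("NM_000001", 100, 500),
       Transcript("NM_000002", 300, 900),
       Transcript("NM_000003", 950, 1200)]
print(transcripts_overlapping(txs, 450))  # ['NM_000001', 'NM_000002']
```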

  15. Chado controller: advanced annotation management with a community annotation system.

    Science.gov (United States)

    Guignon, Valentin; Droc, Gaëtan; Alaux, Michael; Baurens, Franc-Christophe; Garsmeur, Olivier; Poiron, Claire; Carver, Tim; Rouard, Mathieu; Bocs, Stéphanie

    2012-04-01

    We developed a controller that is compliant with the Chado database schema, GBrowse, and genome annotation-editing tools such as Artemis and Apollo. It enables the management of public and private data, monitors manual annotation (with controlled vocabularies, structural and functional annotation controls), and stores versions of annotation for all modified features. The Chado Controller uses PostgreSQL and Perl. Availability: The Chado Controller package is available for download at http://www.gnpannot.org/content/chado-controller and runs on any Unix-like operating system; documentation is available at http://www.gnpannot.org/content/chado-controller-doc. The system can be tested using the GNPAnnot Sandbox at http://www.gnpannot.org/content/gnpannot-sandbox-form. Contact: valentin.guignon@cirad.fr; stephanie.sidibe-bocs@cirad.fr. Supplementary data are available at Bioinformatics online.

  16. Machine Learning in Radiology: Applications Beyond Image Interpretation.

    Science.gov (United States)

    Lakhani, Paras; Prater, Adam B; Hutson, R Kent; Andriole, Kathy P; Dreyer, Keith J; Morey, Jose; Prevedello, Luciano M; Clark, Toshi J; Geis, J Raymond; Itri, Jason N; Hawkins, C Matthew

    2018-02-01

    Much attention has been given to machine learning and its perceived impact in radiology, particularly in light of recent success with image classification in international competitions. However, machine learning is likely to impact radiology outside of image interpretation long before a fully functional "machine radiologist" is implemented in practice. Here, we provide an overview of machine learning, its application to radiology and other domains, and many use cases that do not involve image interpretation. We hope that a better understanding of these potential applications will help radiology practices prepare for the future and realize performance improvements and efficiency gains. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  17. Displaying Annotations for Digitised Globes

    Science.gov (United States)

    Gede, Mátyás; Farbinger, Anna

    2018-05-01

    Thanks to the efforts of various globe digitising projects, there are nowadays plenty of old globes that can be examined as 3D models on the computer screen. These globes usually contain many interesting details that an average observer would not fully discover at first sight. The authors developed a website that can display annotations for such digitised globes. These annotations help observers of the globe discover all the important, interesting details. Annotations consist of a plain-text title, an HTML-formatted descriptive text, and a corresponding polygon, and are stored in KML format. The website is powered by the Cesium virtual globe engine.

  18. NoGOA: predicting noisy GO annotations using evidences and sparse representation.

    Science.gov (United States)

    Yu, Guoxian; Lu, Chang; Wang, Jun

    2017-07-21

    Gene Ontology (GO) is a community effort to represent the functional features of gene products. GO annotations (GOA) provide functional associations between GO terms and gene products. Due to resource limitations, only a small portion of annotations are manually checked by curators; the others are electronically inferred. Although quality control techniques have been applied to ensure the quality of annotations, the community consistently reports that there are still considerable noisy (or incorrect) annotations. Given the wide application of annotations, however, how to identify noisy annotations is an important but seldom-studied open problem. We introduce a novel approach called NoGOA to predict noisy annotations. NoGOA first applies sparse representation to the gene-term association matrix to reduce the impact of noisy annotations, and takes advantage of the sparse representation coefficients to measure the semantic similarity between genes. It then preliminarily predicts noisy annotations of a gene based on aggregated votes from the semantic neighborhood genes of that gene. Next, NoGOA estimates the ratio of noisy annotations for each evidence code based on direct annotations in GOA files archived at different periods, then weights entries of the association matrix via the estimated ratios and propagates weights to ancestors of direct annotations using the GO hierarchy. Finally, it integrates the evidence-weighted association matrix and the aggregated votes to predict noisy annotations. Experiments on archived GOA files of six model species (H. sapiens, A. thaliana, S. cerevisiae, G. gallus, B. taurus and M. musculus) demonstrate that NoGOA achieves significantly better results than other related methods and that removing noisy annotations improves the performance of gene function prediction. The comparative study justifies the effectiveness of integrating evidence codes with sparse representation for predicting noisy GO annotations. Codes and datasets are available at http://mlda.swu.edu.cn/codes.php?name=NoGOA.
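The neighborhood-voting step this record describes can be sketched as a similarity-weighted vote: an annotation with little support among a gene's semantic neighbors is a noise candidate. The gene names, similarities, and function name below are made up for illustration; the real method also incorporates sparse representation and evidence-code weighting:

```python
def neighbor_vote(annotations, similarity, gene, term):
    """Similarity-weighted support for annotating `gene` with `term`,
    based on whether the gene's semantic neighbours carry that term.
    Low support flags the annotation as a noise candidate."""
    support = sum(sim for other, sim in similarity[gene].items()
                  if term in annotations.get(other, set()))
    total = sum(similarity[gene].values())
    return support / total if total else 0.0

annotations = {"g1": {"GO:1", "GO:2"}, "g2": {"GO:1"}, "g3": {"GO:1"}}
similarity = {"g0": {"g1": 0.9, "g2": 0.8, "g3": 0.3}}
# GO:1 is supported by all neighbours; GO:2 only by the closest one
print(neighbor_vote(annotations, similarity, "g0", "GO:1"))  # 1.0
print(neighbor_vote(annotations, similarity, "g0", "GO:2"))  # 0.45
```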

  19. Longwave Imaging for Astronomical Applications, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a compact portable longwave camera for astronomical applications. In Phase 1, we successfully developed the eye of the camera, i.e. the focal...

  20. Integrating Web Services into Map Image Applications

    National Research Council Canada - National Science Library

    Tu, Shengru

    2003-01-01

    Web services have been opening a wide avenue for software integration. In this paper, we have reported our experiments with three applications that are built by utilizing and providing web services for Geographic Information Systems (GIS...

  1. Applications of magnetic resonance image segmentation in neurology

    Science.gov (United States)

    Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu

    1999-05-01

    After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, making segmentation essential in modern image analysis. In this research project, several PC-based software tools were developed to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals were integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.

  2. Image feature detectors and descriptors foundations and applications

    CERN Document Server

    Hassaballah, Mahmoud

    2016-01-01

    This book provides readers with a selection of high-quality chapters that cover both theoretical concepts and practical applications of image feature detectors and descriptors. It serves as a reference for researchers and practitioners by featuring survey chapters and research contributions on image feature detectors and descriptors. Additionally, it emphasizes several topics in both theoretical and practical aspects of image feature extraction, including acceleration of feature detection and extraction, hardware implementations, image segmentation, evolutionary algorithms, ordinal measures, and visual speech recognition.

  3. Proton imaging apparatus for proton therapy application

    International Nuclear Information System (INIS)

    Sipala, V.; Lo Presti, D.; Brianzi, M.; Civinini, C.; Bruzzi, M.; Scaringella, M.; Talamonti, C.; Bucciolini, M.; Cirrone, G.A.P.; Cuttone, G.; Randazzo, N.; Stancampiano, C.; Tesi, M.

    2011-01-01

    Radiotherapy with protons, due to the physical properties of these particles, offers several advantages for cancer therapy compared to traditional radiotherapy with photons. In the clinical use of proton beams, a pCT (Proton Computed Tomography) apparatus can contribute to improving the accuracy of patient positioning and dose distribution calculation. In this paper a pCT apparatus built by the PRIMA (Proton Imaging) Italian Collaboration will be presented and the preliminary results will be discussed.

  4. Liposomes - experiment of magnetic resonance imaging application

    International Nuclear Information System (INIS)

    Mathieu, S.

    1987-01-01

    Most pharmaceutical research effort with liposomes has involved the investigation of their use as drug carriers to particular target organs. Recently there has been growing interest in liposomes not only as carriers of drugs but as a tool for the introduction of various substances into the human body. In this study, liposome delivery of nitroxyl radicals as an NMR contrast agent for improved tissue imaging is investigated in rats.

  5. Application of XR imaging in dentistry

    International Nuclear Information System (INIS)

    Trendafilova, N.; Gagova, P.

    2015-01-01

    Full text: For an accurate and sure diagnosis in dentistry, imaging investigations must accompany the anamnestic information (history taking) and the clinical examination. A number of imaging methods are used for the diagnosis of diseases of the teeth. The most widespread of them are segmental roentgenography, orthopantomography, bitewing radiography, and, increasingly, dental cone-beam computed tomography (3D CBCT). The aim is to introduce the types of radiographs and their benefits for prompt and proper treatment. Documentary method: a review and analysis of literature and Internet sources was made. Results and comments: Segmental radiography gives information about the state of the tooth as a whole, providing an opportunity to visualize the crown, the neck, and the root of the tooth. Orthopantomography gives a general view of the state of the maxilla and mandible, the teeth, part of the maxillary sinuses, and the temporomandibular joints. Missing or extra teeth are discovered, as well as dental disease conditions, bone abnormalities, cysts, and others. Bitewing radiography is used when caries is strongly suspected though not visually detectable, in cases of bone loss, and others. The advantage of early and accurate diagnosis is to reduce future complications. Conclusion: Good prevention and early detection of dental anomalies and pathologies is performed with the help of X-rays. The selection of the correct imaging method, the proper use of X-ray machines, and the placement of the required protective means reduce the radiation exposure of the patient.

  6. X-ray imaging for security applications

    Science.gov (United States)

    Evans, J. Paul

    2004-01-01

    The X-ray screening of luggage by aviation security personnel may be badly hindered by the lack of visual cues to depth in an image that has been produced by transmitted radiation. Two-dimensional "shadowgraphs" with "organic" and "metallic" objects encoded using two different colors (usually orange and blue) are still in common use. In the context of luggage screening there are no reliable cues to depth present in individual shadowgraph X-ray images. Therefore, the screener is required to convert the 'zero depth resolution' shadowgraph into a three-dimensional mental picture to be able to interpret the relative spatial relationship of the objects under inspection. Consequently, additional cognitive processing is required e.g. integration, inference and memory. However, these processes can lead to serious misinterpretations of the actual physical structure being examined. This paper describes the development of a stereoscopic imaging technique enabling the screener to utilise binocular stereopsis and kinetic depth to enhance their interpretation of the actual nature of the objects under examination. Further work has led to the development of a technique to combine parallax data (to calculate the thickness of a target material) with the results of a basis material subtraction technique to approximate the target's effective atomic number and density. This has been achieved in preliminary experiments with a novel spatially interleaved dual-energy sensor which reduces the number of scintillation elements required by 50% in comparison to conventional sensor configurations.
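    The record's final step, estimating an effective atomic number from basis-material data, is commonly done with a power-law mixture rule. The sketch below is a generic textbook version of that rule, not the authors' method; the exponent 2.94 and the water composition are illustrative assumptions.

```python
# Hedged sketch: the power-law mixture rule for effective atomic number,
# Z_eff = (sum_i f_i * Z_i^m)^(1/m), with m ~ 2.94 in the
# photoelectric-dominated X-ray regime. Exponent and composition are
# illustrative assumptions, not values from the paper.

def effective_z(fractions_z, m=2.94):
    """fractions_z: list of (electron fraction, atomic number) pairs."""
    return sum(f * z ** m for f, z in fractions_z) ** (1.0 / m)

# Water: electron fractions of H (2 of 10 electrons) and O (8 of 10).
z_water = effective_z([(0.2, 1), (0.8, 8)])
print(round(z_water, 2))  # → 7.42, the commonly quoted value for water
```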

  7. Comparison of mouse mammary gland imaging techniques and applications: Reflectance confocal microscopy, GFP Imaging, and ultrasound

    International Nuclear Information System (INIS)

    Tilli, Maddalena T; Parrish, Angela R; Cotarla, Ion; Jones, Laundette P; Johnson, Michael D; Furth, Priscilla A

    2008-01-01

    Genetically engineered mouse models of mammary gland cancer enable the in vivo study of molecular mechanisms and signaling during development and cancer pathophysiology. However, traditional whole mount and histological imaging modalities are only applicable to non-viable tissue. We evaluated three techniques that can be quickly applied to living tissue for imaging normal and cancerous mammary gland: reflectance confocal microscopy, green fluorescent protein imaging, and ultrasound imaging. In the current study, reflectance confocal imaging offered the highest resolution and was used to optically section mammary ductal structures in the whole mammary gland. Glands remained viable in mammary gland whole organ culture when 1% acetic acid was used as a contrast agent. Using green fluorescent protein-expressing transgenic mice allowed imaging of whole mammary gland ductal structures and enabled straightforward serial imaging of mammary gland ducts in whole organ culture to visualize the growth and differentiation process. Ultrasound imaging showed the lowest resolution. However, ultrasound was able to detect mammary preneoplastic lesions 0.2 mm in size and was used to follow cancer growth with serial imaging in living mice. In conclusion, each technique enabled serial imaging of living mammary tissue and visualization of growth and development, quickly and with minimal tissue preparation. The use of the higher resolution reflectance confocal and green fluorescent protein imaging techniques and the lower resolution ultrasound were complementary

  8. Aliphatic polyesters for medical imaging and theranostic applications.

    Science.gov (United States)

    Nottelet, Benjamin; Darcos, Vincent; Coudane, Jean

    2015-11-01

    Medical imaging is a cornerstone of modern medicine. In that context, the development of innovative imaging systems combining biomaterials and contrast agents (CAs)/imaging probes (IPs) for improved diagnostic and theranostic applications is the focus of intense research efforts. In particular, the classical aliphatic (co)polyesters poly(lactide) (PLA), poly(lactide-co-glycolide) (PLGA) and poly(ɛ-caprolactone) (PCL) attract much attention due to their long track record in the medical field. This review therefore aims at providing a state of the art of polyester-based imaging systems. In a first section a rapid description of the various imaging modalities, including magnetic resonance imaging (MRI), optical imaging, computed tomography (CT), ultrasound (US) and radionuclide imaging (SPECT, PET), will be given. Then, the two main strategies used to combine the CAs/IPs and the polyesters will be discussed. In more detail, we will first present the strategies relying on CA/IP encapsulation in nanoparticles, micelles, dendrimers or capsules. We will then present chemical modifications of polyester backbones and/or polyester surfaces to yield macromolecular imaging agents. Finally, opportunities offered by these innovative systems will be illustrated with some recent examples in the fields of cell labeling, diagnostic or theranostic applications and medical devices. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Detecting content adaptive scaling of images for forensic applications

    Science.gov (United States)

    Fillion, Claude; Sharma, Gaurav

    2010-01-01

    Content-aware resizing methods have recently been developed, among which, seam-carving has achieved the most widespread use. Seam-carving's versatility enables deliberate object removal and benign image resizing, in which perceptually important content is preserved. Both types of modifications compromise the utility and validity of the modified images as evidence in legal and journalistic applications. It is therefore desirable that image forensic techniques detect the presence of seam-carving. In this paper we address detection of seam-carving for forensic purposes. As in other forensic applications, we pose the problem of seam-carving detection as the problem of classifying a test image in either of two classes: a) seam-carved or b) non-seam-carved. We adopt a pattern recognition approach in which a set of features is extracted from the test image and then a Support Vector Machine based classifier, trained over a set of images, is utilized to estimate which of the two classes the test image lies in. Based on our study of the seam-carving algorithm, we propose a set of intuitively motivated features for the detection of seam-carving. Our methodology for detection of seam-carving is then evaluated over a test database of images. We demonstrate that the proposed method provides the capability for detecting seam-carving with high accuracy. For images which have been reduced 30% by benign seam-carving, our method provides a classification accuracy of 91%.
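    The dynamic-programming core of seam carving, whose statistical traces forensic features such as those above are designed to detect, can be sketched briefly. This is an illustrative pure-Python version of the standard algorithm, not the detector or feature set proposed in the paper; the toy image is a made-up example.

```python
# Sketch of seam carving's core: a gradient energy map plus dynamic
# programming to find the cheapest 8-connected vertical seam.
# Grayscale image given as a list of rows (assumed example, not the
# paper's test data).

def energy_map(img):
    """Gradient-magnitude energy, the usual seam-carving cost function."""
    h, w = len(img), len(img[0])
    e = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            dy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
            e[y][x] = abs(dx) + abs(dy)
    return e

def min_vertical_seam(img):
    """x-coordinates (top to bottom) of the cheapest vertical seam."""
    e = energy_map(img)
    h, w = len(e), len(e[0])
    cost = [row[:] for row in e]
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 1, w - 1)
            cost[y][x] += min(cost[y - 1][lo:hi + 1])
    # Backtrack from the cheapest bottom-row cell.
    x = min(range(w), key=lambda i: cost[h - 1][i])
    seam = [x]
    for y in range(h - 1, 0, -1):
        lo, hi = max(x - 1, 0), min(x + 1, w - 1)
        x = min(range(lo, hi + 1), key=lambda i: cost[y - 1][i])
        seam.append(x)
    return seam[::-1]

# A flat image with one bright column: the seam avoids the high-gradient
# columns flanking it and runs down a zero-energy column.
img = [[0, 0, 9, 0, 0] for _ in range(4)]
print(min_vertical_seam(img))  # → [0, 0, 0, 0]
```

Removing such minimum-energy seams is what leaves the subtle statistical footprints the classifier is trained to recognize.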

  10. Study of x-ray CCD image sensor and application

    Science.gov (United States)

    Wang, Shuyun; Li, Tianze

    2008-12-01

    In this paper, we describe the composition, characteristics, parameters and working process of charge-coupled devices (CCDs), together with key techniques and methods for binarizing CCD output. The quantization of the CCD video signal is explained, and the structure and function of the X-ray image intensifier and the technique for coupling the X-ray image intensifier to the CCD are analyzed. We analyzed two effective methods of reducing the harm to human beings when X-rays are used in medical imaging. One is to reduce the X-ray dose and intensify the image produced by the transmitted X-rays to obtain the same effect. The other is to use the image sensor to transfer the images to a safe area for observation. On this basis, a new method is presented in which the CCD image sensor and the X-ray image intensifier are combined organically. A practical medical X-ray photoelectric system was designed which can be used to record and time a patient's transmission images. The system is mainly made up of the medical X-ray source, X-ray image intensifier, high-resolution CCD camera, image processor, display and so on. Its characteristics are that it converts the invisible X-ray image into a visible light image, outputs vivid images, and has a short image recording time. We also analyzed the main aspects which affect the system's resolution. A medical photoelectric system using an X-ray image sensor can sharply reduce the X-ray harm to humans when it is used in medical diagnosis. Finally, we analyzed and looked ahead to the system's applications in medical engineering and related fields.

  11. Image processing applications: From particle physics to society

    International Nuclear Information System (INIS)

    Sotiropoulou, C.-L.; Citraro, S.; Dell'Orso, M.; Luciano, P.; Gkaitatzis, S.; Giannetti, P.

    2017-01-01

    We present an embedded system for extremely efficient real-time pattern recognition execution, enabling technological advancements with both scientific and social impact. It is a compact, fast, low consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and the full custom associative memory chip. The PU has been developed for real time tracking in particle physics experiments, but delivers flexible features for potential application in a wide range of fields. It has been proposed to be used in accelerated pattern matching execution for Magnetic Resonance Fingerprinting (biomedical applications), in real time detection of space debris trails in astronomical images (space applications) and in brain emulation for image processing (cognitive image processing). We illustrate the potentiality of the PU for the new applications.

  12. Multifunctional Nanoparticles for Drug Delivery Applications Imaging, Targeting, and Delivery

    CERN Document Server

    Prud'homme, Robert

    2012-01-01

    This book clearly demonstrates the progression of nanoparticle therapeutics from basic research to applications. Unlike other books covering nanoparticles used in medical applications, Multifunctional Nanoparticles for Drug Delivery Applications presents the medical challenges that can be reduced or even overcome by recent advances in nanoscale drug delivery. Each chapter highlights recent progress in the design and engineering of select multifunctional nanoparticles with topics covering targeting, imaging, delivery, diagnostics, and therapy.

  13. Annotations to quantum statistical mechanics

    CERN Document Server

    Kim, In-Gee

    2018-01-01

    This book is a rewritten and annotated version of Leo P. Kadanoff and Gordon Baym’s lectures that were presented in the book Quantum Statistical Mechanics: Green’s Function Methods in Equilibrium and Nonequilibrium Problems. The lectures were devoted to a discussion on the use of thermodynamic Green’s functions in describing the properties of many-particle systems. The functions provided a method for discussing finite-temperature problems with no more conceptual difficulty than ground-state problems, and the method was equally applicable to boson and fermion systems and equilibrium and nonequilibrium problems. The lectures also explained nonequilibrium statistical physics in a systematic way and contained essential concepts on statistical physics in terms of Green’s functions with sufficient and rigorous details. In-Gee Kim thoroughly studied the lectures during one of his research projects but found that the unspecialized method used to present them in the form of a book reduced their readability. He st...

  14. COGNATE: comparative gene annotation characterizer.

    Science.gov (United States)

    Wilbrandt, Jeanne; Misof, Bernhard; Niehuis, Oliver

    2017-07-17

    The comparison of gene and genome structures across species has the potential to reveal major trends of genome evolution. However, such a comparative approach is currently hampered by a lack of standardization (e.g., Elliott TA, Gregory TR, Philos Trans Royal Soc B: Biol Sci 370:20140331, 2015). For example, testing the hypothesis that the total amount of coding sequences is a reliable measure of potential proteome diversity (Wang M, Kurland CG, Caetano-Anollés G, PNAS 108:11954, 2011) requires the application of standardized definitions of coding sequence and genes to create both comparable and comprehensive data sets and corresponding summary statistics. However, such standard definitions either do not exist or are not consistently applied. These circumstances call for a standard at the descriptive level using a minimum of parameters as well as an undeviating use of standardized terms, and for software that infers the required data under these strict definitions. The acquisition of a comprehensive, descriptive, and standardized set of parameters and summary statistics for genome publications and further analyses can thus greatly benefit from the availability of an easy to use standard tool. We developed a new open-source command-line tool, COGNATE (Comparative Gene Annotation Characterizer), which uses a given genome assembly and its annotation of protein-coding genes for a detailed description of the respective gene and genome structure parameters. Additionally, we revised the standard definitions of gene and genome structures and provide the definitions used by COGNATE as a working draft suggestion for further reference. Complete parameter lists and summary statistics are inferred using this set of definitions to allow down-stream analyses and to provide an overview of the genome and gene repertoire characteristics. COGNATE is written in Perl and freely available at the ZFMK homepage ( https://www.zfmk.de/en/COGNATE ) and on github ( https

  15. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately......The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used...

  16. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately......The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used...

  17. MATLAB-based Applications for Image Processing and Image Quality Assessment – Part II: Experimental Results

    Directory of Open Access Journals (Sweden)

    L. Krasula

    2012-04-01

    Full Text Available The paper provides an overview of some possible uses of the software described in Part I. It contains real examples of image quality improvement, distortion simulation, objective and subjective quality assessment, and other kinds of image processing that can be carried out with the individual applications.

  18. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego

    2017-01-01

    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for research from the evolutionary computation, artificial intelligence and image processing co.

  19. Some computer applications and digital image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Lowinger, T.

    1981-01-01

    Methods of digital image processing are applied to problems in nuclear medicine imaging. The symmetry properties of central nervous system lesions are exploited in an attempt to determine the three-dimensional radioisotope density distribution within the lesions. An algorithm developed by astronomers at the end of the 19th century to determine the distribution of matter in globular clusters is applied to tumors. This algorithm permits the emission-computed-tomographic reconstruction of spherical lesions from a single view. The three-dimensional radioisotope distribution derived by the application of the algorithm can be used to characterize the lesions. The applicability to nuclear medicine images of ten edge detection methods in general usage in digital image processing was evaluated. A general model of image formation by scintillation cameras is developed. The model assumes that objects to be imaged are composed of a finite set of points. The validity of the model has been verified by its ability to duplicate experimental results. Practical applications of this work involve quantitative assessment of the distribution of radiopharmaceuticals under clinical situations and the study of image processing algorithms
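    One of the classical edge detectors of the kind such studies evaluate is the Sobel operator. The following is a minimal pure-Python sketch of it, not one of the paper's ten specific methods, applied to a made-up grayscale image given as a list of rows.

```python
# Sobel edge detection sketch: convolve 3x3 horizontal and vertical
# gradient kernels over interior pixels and take the gradient magnitude.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude |G| = sqrt(Gx^2 + Gy^2) at interior pixels."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge between columns 1 and 2 is picked up strongly.
img = [[0, 0, 10, 10]] * 4
mag = sobel_magnitude(img)
```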

  20. High speed global shutter image sensors for professional applications

    Science.gov (United States)

    Wu, Xu; Meynants, Guy

    2015-04-01

    Global shutter imagers eliminate the motion artifacts of rolling shutter imagers and thereby extend use to miscellaneous applications such as machine vision, 3D imaging, medical imaging, space, etc. A low-noise global shutter pixel requires more than one non-light-sensitive memory element to reduce the read noise, but a larger memory area reduces the fill factor of the pixels. Modern micro-lens technology can compensate for this fill-factor loss. Backside illumination (BSI) is another popular technique to improve the pixel fill factor, but some pixel architectures may not reach sufficient shutter efficiency with backside illumination; non-light-sensitive memory elements make fabrication with BSI possible. Machine vision applications such as fast inspection systems, and medical imaging applications such as 3D medical or scientific imaging, demand high-frame-rate global shutter image sensors. Thanks to CMOS technology, fast analog-to-digital converters (ADCs) can be integrated on chip. On-chip ADCs with dual correlated double sampling (CDS) and a high-rate digital interface reduce the read noise and allow more on-chip operation control. As a result, a global shutter imager with a digital interface is a very popular solution for applications with high performance and high frame rate requirements. In this paper we review the global shutter architectures developed at CMOSIS, discuss their optimization process and compare their performance after fabrication.
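    The correlated double sampling (CDS) step mentioned above can be illustrated with a toy sketch: each pixel is read twice, once at its reset level and once at its signal level, and subtracting the two cancels the pixel's fixed offset and reset noise. The numbers below are made-up ADC counts, not values from the paper.

```python
# Correlated double sampling sketch: per-pixel difference of the signal
# read and the reset read removes the (unknown) per-pixel offset.

def cds(reset_samples, signal_samples):
    """Per-pixel difference of signal and reset reads."""
    return [s - r for r, s in zip(reset_samples, signal_samples)]

offsets = [3, -2, 5]          # unknown per-pixel reset levels
light = [100, 150, 120]       # true photo-signal we want to recover
reset = offsets
signal = [o + x for o, x in zip(offsets, light)]
print(cds(reset, signal))  # → [100, 150, 120]
```

The same offset appears in both reads, so the difference returns the photo-signal exactly; in a real sensor this also suppresses kTC noise correlated between the two samples.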

  1. Elastic models application for thorax image registration

    International Nuclear Information System (INIS)

    Correa Prado, Lorena S; Diaz, E Andres Valdez; Romo, Raul

    2007-01-01

    This work consists of the implementation and evaluation of elastic alignment algorithms for biomedical images, which were taken at thorax level and simulated with the 4D NCAT digital phantom. Radial basis function (RBF) spatial transformations, a kind of spline that allows carrying out not only global rigid deformations but also local elastic ones, were applied using a point-matching method. The functions applied were thin plate spline (TPS), multiquadric (MQ), Gaussian and B-spline, which were evaluated and compared by calculating the target registration error and similarity measures between the registered images (the sum of squared intensity differences (SSD) and the correlation coefficient (CC)). In order to assess the user-incurred error in the point-matching and segmentation tasks, two algorithms were also designed that calculate the fiducial localization error. TPS and MQ were demonstrated to have better performance than the others. It was shown that RBFs represent an adequate model for approximating the deformable behaviour of the thorax. The validation algorithms showed that the user error was not significant
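    The point-matching idea behind such RBF registration can be sketched as follows: given matched landmark pairs, solve a small linear system for kernel weights so the mapping sends each source landmark to its target. This is a hedged, generic sketch using the Gaussian kernel (one of the four compared in the record), not the paper's implementation; the kernel width and the landmark coordinates are illustrative assumptions.

```python
# Gaussian-RBF point-matching warp: interpolate the displacement field
# defined by landmark correspondences. Pure Python; a naive Gaussian
# elimination stands in for a proper linear solver.
import math

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_warp(src, dst, sigma=2.0):
    """Return f(point) interpolating src[i] -> dst[i] via Gaussian RBFs."""
    n = len(src)
    phi = lambda p, q: math.exp(-((p[0] - q[0]) ** 2
                                  + (p[1] - q[1]) ** 2) / sigma ** 2)
    K = [[phi(src[i], src[j]) for j in range(n)] for i in range(n)]
    # One weight vector per coordinate of the displacement field.
    wx = solve(K, [dst[i][0] - src[i][0] for i in range(n)])
    wy = solve(K, [dst[i][1] - src[i][1] for i in range(n)])
    def f(p):
        dx = sum(wx[j] * phi(p, src[j]) for j in range(n))
        dy = sum(wy[j] * phi(p, src[j]) for j in range(n))
        return (p[0] + dx, p[1] + dy)
    return f

# Made-up landmarks: four corners, slightly displaced.
src = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
dst = [(0.5, 0.2), (4.0, 0.0), (0.0, 4.3), (3.8, 4.1)]
f = rbf_warp(src, dst)
```

A TPS or multiquadric variant differs only in the kernel `phi` (TPS additionally carries an affine term); the point-matching machinery is the same.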

  2. Application of lectins to tumor imaging radiopharmaceuticals

    International Nuclear Information System (INIS)

    Kojima, Shuji; Jay, M.

    1986-01-01

    We investigated the in vitro binding of 125I-lectins to Ehrlich ascites tumor (EAT) cells and the in vivo uptake of 125I-lectins in Ehrlich solid tumor (EST) bearing mice. In in vitro binding assays, phaseolus vulgaris agglutinin (PHA), pisum sativum agglutinin (PSA), and concanavalin A (Con A) showed a high affinity for EAT cells. The in vivo biodistribution of 125I-lectins showed 125I-PSA to be significantly taken up into EST tissues 24 h postinjection. After IV injection of 125I-PSA, uptake of the radioactivity into the tumor tissues reached a maximum at 6 h and thereafter decreased. Rapid disappearance of the radioactivity from blood and its excretion via the kidney soon after injection of 125I-PSA were observed. When compared with the biodistribution of 67Ga-citrate in EST bearing mice 24 h postinjection, the tumor to liver (T/L), tumor to muscle (T/M), and tumor to blood (T/B) ratios were superior for 125I-PSA. At 6 h postinjection, the T/B ratio of 125I-PSA was 2.5, and this value may be sufficient to enable discernible diagnostic images. Our results suggest that PSA might be a useful tumor imaging radiopharmaceutical. (orig.)

  3. Translational Applications of Molecular Imaging and Radionuclide Therapy

    International Nuclear Information System (INIS)

    Welch, Michael J.; Eckelman, William C.; Vera, David

    2005-01-01

    Molecular imaging is becoming a larger part of imaging research and practice. The Office of Biological and Environmental Research of the Department of Energy funds a significant number of researchers in this area. The proposal is to partially fund a workshop to inform scientists working in nuclear medicine and nuclear medicine practitioners of the recent advances of molecular imaging in nuclear medicine as well as other imaging modalities. A limited number of topics related to radionuclide therapy will also be discussed. The proposal is to request partial funds for the workshop entitled ''Translational Applications of Molecular Imaging and Radionuclide Therapy'' to be held prior to the Society of Nuclear Medicine Annual Meeting in Toronto, Canada in June 2005. The meeting will be held on June 17-18. This will allow scientists interested in all aspects of nuclear medicine imaging to attend. The chair of the organizing group is Dr. Michael J. Welch. The organizing committee consists of Dr. Welch, Dr. William C. Eckelman and Dr. David Vera. The goal is to invite speakers to discuss the most recent advances of modern molecular imaging and therapy. Speakers will present advances made in in vivo tagging imaging assays, technical aspects of small animal imaging, in vivo imaging and bench to bedside translational study; and the role of a diagnostic scan on therapy selection. This latter topic will include discussions on therapy and new approaches to dosimetry. Several of these topics are those funded by the Department of Energy Office of Biological and Environmental Research

  4. Fundamentals and applications of neutron imaging. Application part 3. Application of neutron imaging in aircraft, space rocket, car and gunpowder industries

    International Nuclear Information System (INIS)

    Ikeda, Yasushi

    2007-01-01

    Neutron imaging is applied to nondestructive testing. Four neutron imaging facilities are in use in Japan. Application examples from industry are listed in the table: space rockets, aircraft, cars, liquid metal, and works of art. Neutron imaging of transportation equipment is illustrated as an industrial application. X-ray radiography testing (XRT) and neutron radiography testing (NRT) images of an aircraft engine turbine blade, aircraft honeycomb structure, helicopter rotor blade, trigger tube, space rocket separation nut, car carburetor, BMW engine, fireworks and ammunition are illustrated. (S.Y.)

  5. Implementation and applications of dual-modality imaging

    Science.gov (United States)

    Hasegawa, Bruce H.; Barber, William C.; Funk, Tobias; Hwang, Andrew B.; Taylor, Carmen; Sun, Mingshan; Seo, Youngho

    2004-06-01

    In medical diagnosis, functional or physiological data can be acquired using radionuclide imaging with positron emission tomography or with single-photon emission computed tomography. However, anatomical or structural data can be acquired using X-ray computed tomography. In dual-modality imaging, both radionuclide and X-ray detectors are incorporated in an imaging system to allow both functional and structural data to be acquired in a single procedure without removing the patient from the imaging system. In a clinical setting, dual-modality imaging systems commonly are used to localize radiopharmaceutical uptake with respect to the patient's anatomy. This helps the clinician to differentiate disease from regions of normal radiopharmaceutical accumulation, to improve diagnosis or cancer staging, or to facilitate planning for radiation therapy or surgery. While initial applications of dual-modality imaging were developed for clinical imaging on humans, it now is recognized that these systems have potentially important applications for imaging small animals involved in experimental studies including basic investigations of mammalian biology and development of new pharmaceuticals for diagnosis or treatment of disease.

  6. Image reconstruction. Application to transverse axial tomography

    International Nuclear Information System (INIS)

    Aubry, Florent.

    1977-09-01

    A method of computerized three-dimensional image reconstruction from projections, especially for computerized transverse axial tomography, is suggested. First, the different techniques currently developed and presented in the literature are analyzed. Then, the equipment used is briefly described. The reconstruction algorithm developed is presented. This algorithm is based on the convolution method, well adapted to real conditions of use, and is an extension of Shepp and Logan's algorithm. A correction for self-absorption and for the detector's response is proposed. Finally, the first results obtained, which are satisfactory, are given. The simplicity of the method, which does not require too long a computation time, makes it possible to implement the algorithm on a mini-computer [fr]
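    The convolution (filtered back-projection) step that Shepp-Logan-style algorithms rest on can be sketched with the standard discrete Shepp-Logan filter kernel, h(n) = 2 / (pi^2 tau^2 (1 - 4 n^2)), applied to each measured projection before back-projection. This is a generic textbook sketch, not the thesis' extended algorithm; the sampling step tau = 1 and the delta-spike projection are illustrative assumptions.

```python
# Shepp-Logan convolution filter sketch: build the symmetric kernel taps
# and convolve them with a projection profile.
import math

def shepp_logan_kernel(half_width, tau=1.0):
    """Filter taps h(-N..N) of the discrete Shepp-Logan kernel."""
    return [2.0 / (math.pi ** 2 * tau ** 2 * (1 - 4 * n * n))
            for n in range(-half_width, half_width + 1)]

def filter_projection(proj, kernel):
    """Discrete convolution of one projection with the filter taps."""
    N = len(kernel) // 2
    out = []
    for i in range(len(proj)):
        s = 0.0
        for n in range(-N, N + 1):
            j = i - n
            if 0 <= j < len(proj):
                s += kernel[n + N] * proj[j]
        out.append(s)
    return out

h = shepp_logan_kernel(1000)
# The taps sum to ~0: the filter removes the DC component, which is what
# undoes the 1/r blur of plain back-projection.
```

Filtering a unit spike reproduces the kernel itself, centered on the spike, which is one quick sanity check on the convolution.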

  7. Clinical application of functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Alwatban, Adnan Z.W.

    2002-01-01

    The work described in this thesis was carried out at the Magnetic Resonance Centre of the University of Nottingham during the time from May 1998 to April 2001, and is the work of the author except where indicated by reference. The main source of signal changes in functional magnetic resonance imaging (fMRI) is the fluctuation of paramagnetic deoxyhaemoglobin in the venous blood during different states of functional performance. For the work of this thesis, fMRI studies were carried out using a 3 T MR system with an echo planar imaging (EPI) pulse sequence. Hearing research utilising fMRI has been previously reported in normal subjects. Hearing fMRI is normally performed by stimulating the auditory cortex via an acoustic task presentation such as music, tones, etc. However, performing the same research on deaf subjects requires special equipment to be designed to allow direct stimulation of the auditory nerve. In this thesis, a new method of direct electrical stimulation of the auditory nerve is described that uses a transtympanic electrode implanted onto the surface of the cochlea. This approach would, however, result in electromotive forces (EMFs) being induced by the time-varying magnetic field, which would lead to current flow and heating, as well as deflection of the metallic electrode within the static magnetic field, and image distortion due to the magnetic susceptibility difference. A gold-plated tungsten electrode with zero magnetic susceptibility was developed to avoid image distortion. Used with carbon leads and a carbon reference pad, it enabled safe, distortion-free fMRI studies of deaf subjects. The study revealed activation of the primary auditory cortex. This fMRI procedure can be used to demonstrate whether the auditory pathway is fully intact, and may provide a useful method for pre-operative assessment of candidates for cochlear implantation. Glucose is the energy source on which the function of the human brain is entirely dependent. Failure to

  8. Clinical application of functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Alwatban, Adnan Z W

    2002-07-01

    The work described in this thesis was carried out at the Magnetic Resonance Centre of the University of Nottingham during the time from May 1998 to April 2001, and is the work of the author except where indicated by reference. The main source of signal changes in functional magnetic resonance imaging (fMRI) is the fluctuation of paramagnetic deoxyhaemoglobin in the venous blood during different states of functional performance. For the work of this thesis, fMRI studies were carried out using a 3 T MR system with an echo planar imaging (EPI) pulse sequence. Hearing research utilising fMRI has been previously reported in normal subjects. Hearing fMRI is normally performed by stimulating the auditory cortex via an acoustic task presentation such as music, tones, etc. However, performing the same research on deaf subjects requires special equipment to be designed to allow direct stimulation of the auditory nerve. In this thesis, a new method of direct electrical stimulation of the auditory nerve is described that uses a transtympanic electrode implanted onto the surface of the cochlea. This approach would, however, result in electromotive forces (EMFs) being induced by the time-varying magnetic field, which would lead to current flow and heating, as well as deflection of the metallic electrode within the static magnetic field, and image distortion due to the magnetic susceptibility difference. A gold-plated tungsten electrode with zero magnetic susceptibility was developed to avoid image distortion. Used with carbon leads and a carbon reference pad, it enabled safe, distortion-free fMRI studies of deaf subjects. The study revealed activation of the primary auditory cortex. This fMRI procedure can be used to demonstrate whether the auditory pathway is fully intact, and may provide a useful method for pre-operative assessment of candidates for cochlear implantation. Glucose is the energy source on which the function of the human brain is entirely dependent. Failure to

  9. High resolution imaging detectors and applications

    CERN Document Server

    Saha, Swapan K

    2015-01-01

    Interferometric observations need snapshots of very high time resolution of the order of (i) frame integration of about 100 Hz or (ii) photon-recording rates of several megahertz (MHz). Detectors play a key role in astronomical observations, and since the explanation of the photoelectric effect by Albert Einstein, the technology has evolved rather fast. The present-day technology has made it possible to develop large-format complementary metal oxide–semiconductor (CMOS) and charge-coupled device (CCD) array mosaics, orthogonal transfer CCDs, electron-multiplication CCDs, electron-avalanche photodiode arrays, and quantum-well infrared (IR) photon detectors. The requirements to develop artifact-free photon shot noise-limited images are higher sensitivity and quantum efficiency, reduced noise that includes dark current, read-out and amplifier noise, smaller point-spread functions, and higher spectral bandwidth. This book aims to address such systems, technologies and design, evaluation and calibration, control...

  10. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader in reaching a global understanding of the field and, in conducting studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools since they have been adapted to solve significant problems that commonly arise on such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  11. The application of similar image retrieval in electronic commerce.

    Science.gov (United States)

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platforms (OSPs), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccurate search results. To solve these problems, we investigate the application of similar image retrieval in electronic commerce. Aiming to improve customers' experience and provide merchants with accurate advertising, we design a reasonable and extensible electronic commerce application system comprising three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, on which consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can be used to provide accurate internet marketing for enterprises. Experiments demonstrate the effectiveness of the constructed system.
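
    The abstract does not specify which retrieval method the image search subsystem uses, but content-based search of this kind is often built on global color descriptors. As an illustration only, the following numpy sketch (the function names are ours, not from the paper) ranks catalog images by histogram-intersection similarity to a query image:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize an RGB image (H x W x 3, uint8) into a joint color
    histogram with bins**3 cells, normalized to unit sum."""
    # Map each channel to one of `bins` levels, then to one joint bin index.
    levels = (image.astype(np.uint32) * bins) // 256
    idx = (levels[..., 0] * bins + levels[..., 1]) * bins + levels[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def retrieve_similar(query, catalog, top_k=3):
    """Rank catalog images by histogram intersection with the query;
    an identical image scores 1.0, disjoint color content scores 0.0."""
    q = color_histogram(query)
    scores = [np.minimum(q, color_histogram(img)).sum() for img in catalog]
    return np.argsort(scores)[::-1][:top_k]
```

    In a deployed OSP such per-image descriptors would be precomputed and indexed rather than scanned linearly; the loop above is kept only for clarity.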

  12. Cervical cancer. Application of MR imaging in brachytherapy

    International Nuclear Information System (INIS)

    Ebe, Kazuyu; Matsunaga, Naofumi

    1996-01-01

    For the application of MRI to brachytherapy planning for cervical cancer, a method was proposed to visualize the radiation doses in surrounding tissues by superimposing the dose distribution pattern of the radiation source on the MR image. The applicator for the source was filled with water to obtain its T2-weighted image and was inserted in the patients. The MRI apparatus was a Siemens Magnetom Vision (1.5 T) with a phased-array coil. T2-weighted sagittal and coronal images were acquired by turbo spin echo and HASTE methods. The section thickness was 5 mm. The dose distribution pattern was superimposed on the frontal and lateral images using Siemens Mevaplan to assess the doses in surrounding tissues. In 4 patients, it was possible to estimate the radiation dose in the posterior wall of the bladder, the anterior wall of the rectum, and the urinary duct. The method is promising for planning brachytherapy of cervical cancer. (K.H.)

  13. The Application of Similar Image Retrieval in Electronic Commerce

    Directory of Open Access Journals (Sweden)

    YuPing Hu

    2014-01-01

    Full Text Available Traditional online shopping platforms (OSPs), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccurate search results. To solve these problems, we investigate the application of similar image retrieval in electronic commerce. Aiming to improve customers' experience and provide merchants with accurate advertising, we design a reasonable and extensible electronic commerce application system comprising three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, on which consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can be used to provide accurate internet marketing for enterprises. Experiments demonstrate the effectiveness of the constructed system.

  14. The Application of Similar Image Retrieval in Electronic Commerce

    Science.gov (United States)

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platforms (OSPs), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccurate search results. To solve these problems, we investigate the application of similar image retrieval in electronic commerce. Aiming to improve customers' experience and provide merchants with accurate advertising, we design a reasonable and extensible electronic commerce application system comprising three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, on which consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can be used to provide accurate internet marketing for enterprises. Experiments demonstrate the effectiveness of the constructed system. PMID:24883411

  15. A hyperspectral image analysis workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-10-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  16. A hyperspectral image analysis workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-01-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  17. Comparison of a semi-automatic annotation tool and a natural language processing application for the generation of clinical statement entries.

    Science.gov (United States)

    Lin, Ching-Heng; Wu, Nai-Yuan; Lai, Wei-Shao; Liou, Der-Ming

    2015-01-01

    Electronic medical records with encoded entries should enhance the semantic interoperability of document exchange. However, it remains a challenge to encode narrative concepts and to transform the coded concepts into a standard entry-level document. This study aimed to use a novel approach for the generation of entry-level interoperable clinical documents. Using the HL7 clinical document architecture (CDA) as the example, we developed three pipelines to generate entry-level CDA documents. The first approach was a semi-automatic annotation pipeline (SAAP), the second was a natural language processing (NLP) pipeline, and the third merged the above two pipelines. We randomly selected 50 test documents from the i2b2 corpora to evaluate the performance of the three pipelines. The 50 randomly selected test documents contained 9365 words, including 588 Observation terms and 123 Procedure terms. For the Observation terms, the merged pipeline had a significantly higher F-measure than the NLP pipeline (0.89 vs 0.80, p < 0.05) for generating entry-level interoperable clinical documents.

  18. CAGE_peaks_annotation - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available switchLanguage; BLAST Search Image Search Home About Archive Update History Data List Contact us FANTOM...file File name: CAGE_peaks_annotation File URL: ftp://ftp.biosciencedbc.jp/archive/fantom...on Download License Update History of This Database Site Policy | Contact Us CAGE_peaks_annotation - FANTOM5 | LSDB Archive ...

  19. Application of proton radiography to medical imaging

    International Nuclear Information System (INIS)

    Kramer, S.L.; Martin, R.L.; Moffett, D.R.; Colton, E.

    1977-12-01

    The use of charged particles for radiographic applications has been considered for some time, but progress has been impeded by the cost and availability of suitable accelerators. However, recent developments in technology could overcome these problems. A review is presented of the physical principles leading to an improvement in mass resolution per unit of absorbed dose for charged particle radiography relative to x-ray radiography. The quantitative comparisons between x-ray and proton radiographs presented here confirm this advantage. The implications of proton radiography on cancer detection, as well as future plans for developing a proton tomographic system, are discussed

  20. Management of Scientific Images: an approach to the extraction, annotation and retrieval of figures in the field of High Energy Physics

    CERN Document Server

    Praczyk, Piotr Adam; Mele, Salvatore

    The information environment of the first decade of the 21st century is unprecedented. The physical barriers limiting access to knowledge are disappearing as traditional methods of accessing information are replaced or enhanced by computer systems. Digital systems are able to manage much larger sets of documents, confronting information users with a deluge of documents related to their topic of interest. This new situation has created an incentive for the rapid development of Data Mining techniques and for the creation of more efficient search engines capable of limiting the search results to a small subset of the most relevant ones. However, most current search engines operate using the text descriptions of documents. Those descriptions can either be extracted from the content of the document or obtained from external sources. Retrieval based on the non-textual content of documents is a subject of ongoing research. In particular, the retrieval of images and unlocking the infor...

  1. Fertilizer application and root development analyzed by neutron imaging

    International Nuclear Information System (INIS)

    Nihei, Naoto; Tanoi, Keitaro; Nakanishi, Tomoko M.

    2013-01-01

    We studied the development of the soybean root system under different fertilizer applications using a neutron imaging technique. When a neutron beam was irradiated, the image of roots as well as fertilizer embedded in a thin aluminum container was clearly projected, since the water content of roots is higher than that of soil. Through image analysis, the development of the root system was studied under different applications of the fertilizer. The development of a main root with lateral roots was observed without applied fertilizer. When the fertilizer was supplied homogeneously to the soil, the morphological development of the root showed a pattern similar to that grown without fertilizer, regardless of the amount of fertilizer. In the case of local application of the fertilizer, lateral to or below the main root, inhibition of root growth was observed, suggesting that localization of the fertilizer is responsible for reduction of the soybean yield. (author)

  2. Applications of the Preclinical Molecular Imaging in Biomedicine: Gene Therapy

    International Nuclear Information System (INIS)

    Collantes, M.; Peñuelas, I.

    2014-01-01

    Gene therapy constitutes a promising option for the efficient and targeted treatment of several inherited disorders. Imaging techniques using ionizing radiation, such as PET or SPECT, are used for non-invasive monitoring of the distribution and kinetics of vector-mediated gene expression. In this review the main reporter gene/reporter probe strategies are summarized, as well as the contribution of preclinical models to the development of this new imaging modality prior to its application in the clinical arena. (es)

  3. Advances in the Application of Image Processing Fruit Grading

    OpenAIRE

    Fang , Chengjun; Hua , Chunjian

    2013-01-01

    International audience; From the perspective of actual production, the paper presents advances in the application of image processing to fruit grading, considering aspects such as the processing precision and processing speed of image processing technology. Furthermore, effectively combining the different algorithms for detecting size, shape, color and defects, so as to reduce the complexity of each algorithm and achieve a balance between processing precision and processing speed, is key to au...

  4. Oncological applications of 18F-FDG PET imaging

    International Nuclear Information System (INIS)

    Li Lin

    2000-01-01

    Considering the normal distribution of 18F-FDG in the human body, 18F-FDG imaging using PET can be applied to brain tumors, colorectal cancer, lymphoma, melanoma, lung cancer and head and neck cancer. The author briefly focuses on the application of 18F-FDG PET imaging to breast cancer, pancreatic cancer, hepatocellular carcinoma, musculoskeletal neoplasms, endocrine neoplasms, genitourinary neoplasms, and esophageal and gastric carcinomas.

  5. Cross-relaxation imaging: methods, challenges and applications

    International Nuclear Information System (INIS)

    Stikov, Nikola

    2010-01-01

    An overview of quantitative magnetization transfer (qMT) is given, with focus on cross-relaxation imaging (CRI) as a fast method for quantifying the proportion of protons bound to complex macromolecules in tissue. The procedure for generating CRI maps is outlined, showing examples in the human brain and knee, and discussing the caveats and challenges in generating precise and accurate CRI maps. Finally, several applications of CRI for imaging tissue microstructure are presented. (Author)

  6. A semi-automatic annotation tool for cooking video

    Science.gov (United States)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error-free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  7. On the applicability of numerical image mapping for PIV image analysis near curved interfaces

    International Nuclear Information System (INIS)

    Masullo, Alessandro; Theunissen, Raf

    2017-01-01

    This paper scrutinises the general suitability of image mapping for particle image velocimetry (PIV) applications. Image mapping can improve PIV measurement accuracy by eliminating overlap between the PIV interrogation windows and an interface, as illustrated by some examples in the literature. Image mapping transforms the PIV images using a curvilinear interface-fitted mesh prior to performing the PIV cross correlation. However, degrading effects due to particle image deformation and the Jacobian transformation inherent in the mapping along curvilinear grid lines have never been deeply investigated. Here, the implementation of image mapping from mesh generation to image resampling is presented in detail, and related error sources are analysed. Systematic comparison with standard PIV approaches shows that image mapping is effective only in a very limited set of flow conditions and geometries, and depends strongly on a priori knowledge of the boundary shape and streamlines. In particular, with strongly curved geometries or streamlines that are not parallel to the interface, the image-mapping approach is easily outperformed by more traditional image analysis methodologies invoking suitable spatial relocation of the obtained displacement vector. (paper)
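
    To make the mapping step concrete, the following numpy sketch resamples an image onto an interface-fitted grid, the operation performed before PIV cross-correlation in this approach. The helper names and the simple "stack rows above the interface" grid are our own simplifications, not the authors' implementation, and the sketch deliberately omits the Jacobian bookkeeping that the paper identifies as an error source:

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Sample a 2-D image at fractional (y, x) coordinates with
    bilinear interpolation between the four surrounding pixels."""
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    dy, dx = ys - y0, xs - x0
    return (img[y0, x0] * (1 - dy) * (1 - dx)
            + img[y0, x0 + 1] * (1 - dy) * dx
            + img[y0 + 1, x0] * dy * (1 - dx)
            + img[y0 + 1, x0 + 1] * dy * dx)

def map_to_interface_grid(img, interface):
    """Resample an image onto a curvilinear grid whose bottom row follows
    the boundary y = interface(x); successive rows stack upward from it,
    so interrogation windows in the mapped image never straddle the wall."""
    h, w = img.shape
    xs = np.arange(w, dtype=float)
    base = interface(xs)  # y-coordinate of the interface at each column
    rows = [bilinear_sample(img, np.clip(base - k, 0.0, h - 1.0), xs)
            for k in range(h)]
    return np.stack(rows)
```

    For a flat interface this reduces to a vertical flip; for a curved one, each image column is shifted so the boundary becomes the first grid row, which is exactly what lets interrogation windows avoid overlapping the interface.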

  8. FPGA implementation of image dehazing algorithm for real time applications

    Science.gov (United States)

    Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.

    2017-09-01

    Weather degradation such as haze, fog, and mist severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem non-trivial. Dehazing is an essential preprocessing stage in applications such as long-range imaging, border security, and intelligent transportation systems; however, these applications require low latency from the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two-stage image dehazing architecture is introduced, wherein the dark channel and airlight are estimated in the first stage, while the transmission map and intensity restoration are computed in subsequent stages. The algorithm is implemented using Xilinx Vivado software and validated on a Xilinx zc702 development board, which contains an Artix7-equivalent Field Programmable Gate Array (FPGA) and an ARM Cortex A9 dual-core processor. Additionally, a high definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second at an image resolution of 1920x1080, which is suitable for real-time applications. The design utilizes 9 18K_BRAM, 97 DSP_48, 6508 FFs and 8159 LUTs.
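
    For readers unfamiliar with the baseline, a minimal software sketch of the unmodified dark channel prior pipeline that such hardware designs build on might look as follows. The patch-wise minimum filter is the main computational bottleneck that FPGA implementations restructure; parameter values and function names here are illustrative, not taken from the paper, and the input is assumed to be a float RGB image in [0, 1]:

```python
import numpy as np

def dark_channel(image, patch=15):
    """Per-pixel minimum over color channels followed by a patch x patch
    minimum filter (the 'dark channel' of the restored scene prior)."""
    mins = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):          # naive O(h*w*patch^2) filter, for clarity only
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(image, omega=0.95, t0=0.1, patch=15):
    """Invert the haze model I = J*t + A*(1 - t): estimate airlight A from
    the brightest dark-channel pixels, derive the transmission map t,
    clamp it at t0, and recover the scene radiance J."""
    dark = dark_channel(image, patch)
    n = max(1, dark.size // 1000)                      # brightest 0.1 %
    idx = np.argsort(dark.ravel())[-n:]
    airlight = image.reshape(-1, 3)[idx].mean(axis=0)  # estimated A
    t = 1.0 - omega * dark_channel(image / airlight, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    return (image - airlight) / t + airlight
```

    The two stages of the paper's architecture map directly onto this sketch: `dark_channel` plus the airlight estimate form the first stage, and the transmission map plus the final inversion form the second.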

  9. Dissimilarity Application in Digitized Mammographic Images Classification

    Directory of Open Access Journals (Sweden)

    Ubaldo Bottigli

    2006-06-01

    Full Text Available The purpose of this work is the development of an automatic classification system which could be useful for radiologists in the investigation of breast cancer. The software has been designed in the framework of the MAGIC-5 collaboration. In the traditional way of learning from examples of objects, classifiers are built in a feature space. However, an alternative can be found by constructing decision rules on dissimilarity (distance) representations. In such a recognition process, a new object is described by its distances to (a subset of) the training samples. The use of dissimilarities is especially of interest when features are difficult to obtain or when they have little discriminative power. In the automatic classification system, the suspicious regions with high probability to include a lesion are extracted from the image as regions of interest (ROIs). Each ROI is characterized by features extracted from a co-occurrence matrix containing spatial statistics of the ROI pixel grey tones. A dissimilarity representation of these features is made before classification. A feed-forward neural network is employed to distinguish pathological records from non-pathological ones using the new features. The results obtained in terms of sensitivity and specificity will be presented.
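
    The dissimilarity-representation idea can be sketched in a few lines: each object is re-described by its distances to the training samples, and any classifier can then operate in that new space. In this illustrative numpy sketch a toy nearest-mean rule stands in for the paper's feed-forward neural network, and all names are ours:

```python
import numpy as np

def dissimilarity_representation(X, prototypes):
    """Re-describe each feature vector by its Euclidean distances to a
    set of prototype (training) samples: X (n, d) -> D (n, p)."""
    diff = X[:, None, :] - prototypes[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

def nearest_mean_classify(D_train, y_train, D_test):
    """Minimal classifier in dissimilarity space: assign each test sample
    to the class whose mean dissimilarity vector is closest."""
    classes = np.unique(y_train)
    means = np.stack([D_train[y_train == c].mean(axis=0) for c in classes])
    d = ((D_test[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]
```

    The appeal, as the abstract notes, is that this works even when the original features individually carry little discriminative power: the distance pattern to the whole training set can still separate the classes.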

  10. Novel plasmonic polarimeter for biomedical imaging applications

    Science.gov (United States)

    Cheney, Alec; Chen, Borui; Cartwright, Alexander; Thomay, Tim

    2018-02-01

    Using polarized light in medical imaging is a valuable tool for diagnostic purposes since light traveling through scattering tissues such as skin, blood, or cartilage may be subject to changes in polarization. We present a new detection scheme and sensor that allows for directly measuring the polarization of light electronically using a plasmonic sensor. The sensor we fabricated consists of a plasmonic nano-grating that is embedded in a Wheatstone circuit. Using resistive losses induced by optically excited plasmons has shown promise as a CMOS-compatible plasmonic light detector. Since the plasmonic response is sensitive to polarization with respect to the grating orientation, measuring the resistance change under incident light supplies a direct electronic measure of the polarization of light without polarization optics. Increased electron scattering introduced by plasmons in an applied current results in a measurable decrease in electrical conductance of a grating, allowing a purely electronic readout of a plasmonic excitation. Accordingly, because of its plasmonic nature, such a detector is dependent on both the wavelength and polarization of incident light with a response time limited by the surface plasmon lifetime.

  11. Challenges in Whole-Genome Annotation of Pyrosequenced Eukaryotic Genomes

    Energy Technology Data Exchange (ETDEWEB)

    Kuo, Alan; Grigoriev, Igor

    2009-04-17

    Pyrosequencing technologies such as 454/Roche and Solexa/Illumina vastly lower the cost of nucleotide sequencing compared to the traditional Sanger method, and thus promise to greatly expand the number of sequenced eukaryotic genomes. However, the new technologies also bring new challenges such as shorter reads and new kinds and higher rates of sequencing errors, which complicate genome assembly and gene prediction. At JGI we are deploying 454 technology for the sequencing and assembly of ever-larger eukaryotic genomes. Here we describe our first whole-genome annotation of a purely 454-sequenced fungal genome that is larger than a yeast (>30 Mbp). The pezizomycotine (filamentous ascomycete) Aspergillus carbonarius belongs to the Aspergillus section Nigri species complex, members of which are significant as platforms for bioenergy and bioindustrial technology, as members of soil microbial communities and players in the global carbon cycle, and as agricultural toxigens. Application of a modified version of the standard JGI Annotation Pipeline has so far predicted ~10k genes. ~12% of these preliminary annotations suffer a potential frameshift error, which is somewhat higher than the ~9% rate in the Sanger-sequenced and conventionally assembled and annotated genome of fellow Aspergillus section Nigri member A. niger. Also, >90% of A. niger genes have potential homologs in the A. carbonarius preliminary annotation. We conclude, and with further annotation and comparative analysis expect to confirm, that 454 sequencing strategies provide a promising substrate for annotation of modestly sized eukaryotic genomes. We will also present results of annotation of a number of other pyrosequenced fungal genomes of bioenergy interest.

  12. Synchrotrons and their applications in medical imaging and therapy

    International Nuclear Information System (INIS)

    Lewis, R.

    2004-01-01

    Full text: Australasia's first synchrotron is being built on the campus of Monash University near Melbourne. Is it of any relevance to the medical imaging and radiation therapy communities? The answer is an unequivocal yes. Synchrotrons overcome many of the problems with conventional X-ray sources and as a result make it possible to demonstrate extraordinary advances in both X-ray imaging and radiotherapy. Synchrotron imaging offers us a window into what is possible, and the results are spectacular. Specific examples include lung images that reveal alveolar structure and computed tomography of single cells. For therapy, treatments are being pioneered that appear to be effective on high-grade gliomas. An overview of the status of medical applications using synchrotrons will be given, the proposed Australian medical imaging and therapy facilities will be described, and some of the proposed research highlighted. Copyright (2004) Australasian College of Physical Scientists and Engineers in Medicine

  13. DIANE stationary neutron radiography system image quality and industrial applications

    International Nuclear Information System (INIS)

    Cluzeau, S.; Huet, J.; Tourneur, P. le

    1994-01-01

    The SODERN neutron radiography laboratory has operated since February 1993 using a sealed-tube generator (GENIE 46). An experimental programme of characterization (dosimetry, spectroscopy) has confirmed the expected performance concerning neutron flux intensity, neutron energy range, and residual gamma flux. Results are given in a specific report [2]. This paper is devoted to reporting on image performance. ASTM and specific indicators have been used to test the image quality with various converters and films. The corresponding modulation transfer functions are to be determined from image processing. Several industrial applications have demonstrated the capabilities of the system: corrosion detection in aircraft parts, ammunition filling testing, detection of missing polymer in sandwich steel sheets, detection of moisture in a probe for geophysics, and imaging of residual ceramic cores in turbine blades. Various computerized electronic imaging systems will be tested to improve the industrial capabilities. (orig.)

  14. Imaging with electromagnetic spectrum applications in food and agriculture

    CERN Document Server

    Jayasuriya, Hemantha

    2014-01-01

    This book demonstrates how imaging techniques, applying different frequency bands from the electromagnetic spectrum, are used in scientific research. Illustrated with numerous examples, this book is structured according to the different radiation bands: from gamma rays through UV and IR to radio frequencies. In order to ensure a clear understanding of the processing methodologies, the text is enriched with descriptions of how digital images are formed, acquired and processed, and how to extract information from them. Special emphasis is given to the application of imaging techniques in food and agriculture research.

  15. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG

    2016-02-01

    Full Text Available Digital image processing technology is one of the newer methods for yarn detection, enabling digital characterization and objective evaluation of yarn appearance. This paper reviews the current status of the development and application of digital image processing technology for yarn hairiness evaluation, and analyzes and compares the traditional detection methods with this newly developed method. Compared with the traditional methods, the image processing-based method is more objective, fast and accurate, and represents the main development trend in yarn appearance evaluation.

  16. All-optoelectronic continuous wave THz imaging for biomedical applications

    International Nuclear Information System (INIS)

    Siebert, Karsten J; Loeffler, Torsten; Quast, Holger; Thomson, Mark; Bauer, Tobias; Leonhardt, Rainer; Czasch, Stephanie; Roskos, Hartmut G

    2002-01-01

    We present an all-optoelectronic THz imaging system for ex vivo biomedical applications based on photomixing of two continuous-wave laser beams using photoconductive antennas. The application of hyperboloidal lenses is discussed. They allow for f-numbers less than 1/2 permitting better focusing and higher spatial resolution compared to off-axis paraboloidal mirrors whose f-numbers for practical reasons must be larger than 1/2. For a specific histological sample, an analysis of image noise is discussed

  17. Development and application of PET-MRI image fusion technology

    International Nuclear Information System (INIS)

    Song Jianhua; Zhao Jinhua; Qiao Wenli

    2011-01-01

    The emergence and growing popularity of the PET-CT scanner have brought convenience and demonstrated advantages in the diagnosis, staging, treatment response evaluation and prognosis of malignant tumors. PET-MRI installations may bring a new wave of interest as the technology matures, because MRI examination involves no radiation exposure and offers higher soft-tissue resolution. This paper summarizes the development of image fusion technology and current research on the clinical application of PET-MRI, in order to help readers understand the functions and wide applications of this upcoming new instrument, with a focus on applications in the central nervous system and soft-tissue lesions. Before PET-MRI becomes widespread, researchers can still carry out studies of various image fusion methods and their clinical application on current equipment. (authors)

  18. Vind(x): Using the user through cooperative annotation

    NARCIS (Netherlands)

    Williams, A.D.; Vuurpijl, Louis; Schomaker, Lambert; van den Broek, Egon

    2002-01-01

    In this paper, the image retrieval system Vind(x) is described. The architecture of the system and first user experiences are reported. Using Vind(x), users on the Internet may cooperatively annotate objects in paintings by use of the pen or mouse. The collected data can be searched through

  19. Particle Image Velocimetry Applications of Fluorescent Dye-Doped Particles

    OpenAIRE

    Petrosky, Brian Joseph

    2015-01-01

    Laser flare can often be a major issue in particle image velocimetry (PIV) involving solid boundaries in a flow or a gas-liquid interface. The use of fluorescent light from dye-doped particles has been demonstrated in water applications, but reproducing the technique in an airflow is more difficult due to particle size constraints and safety concerns. The following thesis is formatted in a hybrid manuscript style, including a full paper presenting the applications of fluorescent Kiton R...

  20. Training nuclei detection algorithms with simple annotations

    Directory of Open Access Journals (Sweden)

    Henning Kost

    2017-01-01

    Full Text Available Background: Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations, to derive optimal training data from, is often infeasible. Methods: We compared different approaches for training nuclei detection methods solely based on nucleus center markers. Such markers contain less accurate information, especially with regard to nuclear boundaries, but can be produced much easier and in greater quantities. The approaches use different automated sample extraction methods to derive image positions and class labels from nucleus center markers. In addition, the approaches use different automated sample selection methods to improve the detection quality of the classification algorithm and reduce the run time of the training process. We evaluated the approaches based on a previously published generic nuclei detection algorithm and a set of Ki-67-stained breast cancer images. Results: A Voronoi tessellation-based sample extraction method produced the best performing training sets. However, subsampling of the extracted training samples was crucial. Even simple class balancing improved the detection quality considerably. The incorporation of active learning led to a further increase in detection quality. Conclusions: With appropriate sample extraction and selection methods, nuclei detection algorithms trained on the basis of simple center marker annotations can produce comparable quality to algorithms trained on conventionally created training sets.
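The Voronoi tessellation-based sample extraction that performed best in this study can be illustrated with a small sketch (not the authors' code; the function name and sizes are hypothetical): every pixel is assigned to its nearest center marker, turning sparse nucleus-center annotations into a dense per-pixel labelling from which training samples can be drawn.

```python
import numpy as np

def voronoi_labels(shape, centers):
    # Assign every pixel to its nearest nucleus-center marker,
    # i.e. a discrete Voronoi tessellation that turns sparse
    # center annotations into dense per-pixel labels.
    ys, xs = np.indices(shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1)
    c = np.asarray(centers, dtype=float)
    d2 = ((pts[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(shape)

# Two hypothetical center markers on a tiny 4x4 image
labels = voronoi_labels((4, 4), [(0, 0), (3, 3)])
assert labels[0, 0] == 0 and labels[3, 3] == 1
```

In practice the cells would additionally be clipped by a distance threshold so that pixels far from every marker remain background rather than being forced into a nucleus cell.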

  1. Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging

    Science.gov (United States)

    Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi

    2016-10-01

    In this paper, we compare single-pixel imaging using the Hadamard transform (HT) with ghost imaging (GI) from the viewpoint of visibility under weak-light conditions. We assess the image quality of the two methods on the basis of experimental results and numerical analysis. In the HT method, images are acquired by illuminating Hadamard-pattern masks and reconstructed by an orthogonal transform; the GI method instead illuminates random patterns and recovers the image through a correlation measurement. To compare the two methods at weak light intensity, we controlled the illumination intensity of a DMD projector to a signal-to-noise ratio of about 0.1. Although the HT method reconstructed images faster than GI, the GI method has an advantage in detection under weak-light conditions. The essential difference between the HT and GI methods in the reconstruction process is discussed. Finally, we show a typical application of single-pixel imaging: hyperspectral imaging using dual optical frequency combs. The optical setup consists of two fiber lasers, a spatial light modulator for generating pattern illumination, and a single-pixel detector. We successfully detected hyperspectral images over the range from 1545 to 1555 nm at 0.01 nm resolution.
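The HT reconstruction idea in this record can be sketched numerically (a minimal illustration, not the authors' setup: it assumes idealized ±1 patterns and a noiseless detector, whereas real DMD systems display 0/1 masks and use differential measurements):

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Hypothetical 8x8 "scene", flattened to a 64-vector.
rng = np.random.default_rng(0)
x = rng.random(64)

H = hadamard(64)
# Each row of H is one +/-1 illumination pattern; the single-pixel
# detector records one inner product (bucket value) per pattern.
y = H @ x

# Reconstruction by the (orthogonal) inverse Hadamard transform:
# H @ H.T = n * I, so dividing by n recovers the scene exactly.
x_rec = (H.T @ y) / 64
assert np.allclose(x, x_rec)
```

Because the patterns form an orthogonal basis, the HT reconstruction is a single deterministic transform, whereas GI averages correlations over many random patterns, which is slower but degrades more gracefully at low light.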

  2. Technical Note: Deformable image registration on partially matched images for radiotherapy applications

    International Nuclear Information System (INIS)

    Yang Deshan; Goddu, S. Murty; Lu Wei; Pechenaya, Olga L.; Wu Yu; Deasy, Joseph O.; El Naqa, Issam; Low, Daniel A.

    2010-01-01

    In radiation therapy applications, deformable image registrations (DIRs) are often carried out between two images that only partially match. Image mismatching could present as superior-inferior coverage differences, field-of-view (FOV) cutoffs, or motion crossing the image boundaries. In this study, the authors propose a method to improve the existing DIR algorithms so that DIR can be carried out in such situations. The basic idea is to extend the image volumes and define the extension voxels (outside the FOV or outside the original image volume) as NaN (not-a-number) values that are transparent to all floating-point computations in the DIR algorithms. Registrations are then carried out with one additional rule that NaN voxels can match any voxels. In this way, the matched sections of the images are registered properly, and the mismatched sections of the images are registered to NaN voxels. This method makes it possible to perform DIR on partially matched images that otherwise are difficult to register. It may also improve DIR accuracy, especially near or in the mismatched image regions.
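The NaN-transparency rule described in this record can be sketched in 2-D with NumPy (an illustrative toy, not the authors' implementation; the function names are hypothetical): extension voxels are set to NaN, and a similarity metric built from NaN-ignoring reductions automatically lets those voxels match anything.

```python
import numpy as np

def pad_to(shape, img):
    # Extend an image to a common extent, filling out-of-FOV
    # voxels with NaN ("transparent" values).
    out = np.full(shape, np.nan)
    out[tuple(slice(0, s) for s in img.shape)] = img
    return out

def ssd_ignoring_nan(a, b):
    # Sum of squared differences over voxels valid in both images.
    # Arithmetic with NaN yields NaN, and np.nansum skips those
    # terms, so NaN voxels "match" anything at zero cost.
    d = a - b
    return np.nansum(d * d)

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = pad_to((3, 2), a)  # same content plus one NaN row
assert ssd_ignoring_nan(pad_to((3, 2), a), b) == 0.0
```

The same pattern extends to other floating-point similarity metrics: as long as every reduction is NaN-aware, the mismatched regions contribute nothing to the registration cost.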

  3. Saint: a lightweight integration environment for model annotation.

    Science.gov (United States)

    Lister, Allyson L; Pocock, Matthew; Taschuk, Morgan; Wipat, Anil

    2009-11-15

    Saint is a web application which provides a lightweight annotation integration environment for quantitative biological models. The system enables modellers to rapidly mark up models with biological information derived from a range of data sources. Saint is freely available for use on the web at http://www.cisban.ac.uk/saint. The web application is implemented in Google Web Toolkit and Tomcat, with all major browsers supported. The Java source code is freely available for download at http://saint-annotate.sourceforge.net. The Saint web server requires an installation of libSBML and has been tested on Linux (32-bit Ubuntu 8.10 and 9.04).

  4. Opportunities and applications of medical imaging and image processing techniques for nondestructive testing

    International Nuclear Information System (INIS)

    Song, Samuel Moon Ho; Cho, Jung Ho; Son, Sang Rock; Sung, Je Jonng; Ahn, Hyung Keun; Lee, Jeong Soon

    2002-01-01

    Nondestructive testing (NDT) of structures strives to extract all relevant data regarding the state of the structure without altering its form or properties. The success enjoyed by imaging and image processing technologies in the field of modern medicine forecasts similar success of image processing related techniques in both the research and practice of NDT. In this paper, we focus on two particular instances of such applications: a modern vision technique for 3-D profile and shape measurement, and ultrasonic imaging with rendering for 3-D visualization. Ultrasonic imaging of 3-D structures for nondestructive evaluation purposes must provide readily recognizable 3-D images with enough detail to clearly show various faults that may or may not be present. As a step towards improving conspicuity and thus detection of faults, we propose a pulse-echo ultrasonic imaging technique to generate a 3-D image of the object under evaluation through strategic scanning and processing of the pulse-echo data. This three-dimensional processing and display improves the conspicuity of faults and, in addition, provides manipulation capabilities such as pan and rotation of the 3-D structure. As a second application, we consider an image-based three-dimensional shape determination system. The shape, and thus the three-dimensional coordinate information of the 3-D object, is determined solely from captured images of the object from a prescribed set of viewpoints. The approach is based on the shape-from-silhouette (SFS) technique, and its efficacy is tested using a sample data set. This system may be used to visualize the 3-D object efficiently, or to quickly generate initial CAD data for reverse engineering purposes. The proposed system may potentially be used in three-dimensional design applications such as 3-D animation and 3-D games.

  5. Fractal-Based Image Analysis In Radiological Applications

    Science.gov (United States)

    Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.

    1987-10-01

    We present some preliminary results of a study aimed to assess the actual effectiveness of fractal theory and to define its limitations in the area of medical image analysis for texture description, in particular, in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) has been performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.
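Neither of the two estimators the authors compare (Pentland's method and the "blanket" method) is reproduced here, but the flavor of block-wise fractal-dimension computation can be conveyed with a third common technique, box counting (an illustrative sketch; the function name and scales are hypothetical):

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    # Box-counting estimate of fractal dimension for a binary
    # image: count occupied boxes at several scales and fit the
    # slope of log(count) versus log(1/size).
    counts = []
    n = mask.shape[0]
    for s in sizes:
        c = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if mask[i:i + s, j:j + s].any():
                    c += 1
        counts.append(c)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(counts), 1)
    return slope

# Sanity check: a filled square is (trivially) two-dimensional.
mask = np.ones((16, 16), dtype=bool)
d = box_count_dimension(mask)
assert abs(d - 2.0) < 0.1
```

Applied per block and mapped to intensity, such estimates produce exactly the kind of "fractal image" described above, in which texture differences between tissues become grey-level differences.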

  6. Research-grade CMOS image sensors for demanding space applications

    Science.gov (United States)

    Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre

    2017-11-01

    Imaging detectors are key elements of optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, and thanks to their steadily improving performance, CIS began to be used successfully for more and more demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk will present the existing and foreseen ways to reach high-level electro-optical performance with CIS. The development of CIS prototypes built using an imaging CMOS process and of devices based on improved designs will be presented.

  7. Imaging requirements for medical applications of additive manufacturing.

    Science.gov (United States)

    Huotilainen, Eero; Paloheimo, Markku; Salmi, Mika; Paloheimo, Kaija-Stiina; Björkstrand, Roy; Tuomi, Jukka; Markkola, Antti; Mäkitie, Antti

    2014-02-01

    Additive manufacturing (AM), formerly known as rapid prototyping, is steadily shifting its focus from industrial prototyping to medical applications as AM processes, bioadaptive materials, and medical imaging technologies develop, and the benefits of the techniques gain wider knowledge among clinicians. This article gives an overview of the main requirements for medical imaging affected by needs of AM, as well as provides a brief literature review from existing clinical cases concentrating especially on the kind of radiology they required. As an example application, a pair of CT images of the facial skull base was turned into 3D models in order to illustrate the significance of suitable imaging parameters. Additionally, the model was printed into a preoperative medical model with a popular AM device. Successful clinical cases of AM are recognized to rely heavily on efficient collaboration between various disciplines - notably operating surgeons, radiologists, and engineers. The single main requirement separating tangible model creation from traditional imaging objectives such as diagnostics and preoperative planning is the increased need for anatomical accuracy in all three spatial dimensions, but depending on the application, other specific requirements may be present as well. This article essentially intends to narrow the potential communication gap between radiologists and engineers who work with projects involving AM by showcasing the overlap between the two disciplines.

  8. Quantitative imaging features: extension of the oncology medical image database

    Science.gov (United States)

    Patel, M. N.; Looney, P. T.; Young, K. C.; Halling-Brown, M. D.

    2015-03-01

    Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. With the advent of digital imaging modalities and the rapid growth in both diagnostic and therapeutic imaging, the ability to harness this large influx of data is of paramount importance. The Oncology Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, and annotations and, where applicable, expert-determined ground truths describing features of interest. Medical imaging provides the ability to detect and localize many changes that are important in determining whether a disease is present or a therapy is effective by depicting alterations in anatomic, physiologic, biochemical or molecular processes. Quantitative imaging features are sensitive, specific, accurate and reproducible imaging measures of these changes. Here, we describe an extension to the OMI-DB whereby a range of imaging features and descriptors are pre-calculated using a high-throughput approach. The ability to calculate multiple imaging features and data from the acquired images is valuable and facilitates further research applications investigating detection, prognosis, and classification. The resultant data store contains more than 10 million quantitative features as well as features derived from CAD predictions. These data can be used to build predictive models to aid image classification and treatment response assessment, as well as to identify prognostic imaging biomarkers.

  9. Gene therapy imaging in patients for oncological applications

    International Nuclear Information System (INIS)

    Penuelas, Ivan; Haberkorn, Uwe; Yaghoubi, Shahriar; Gambhir, Sanjiv S.

    2005-01-01

    Thus far, traditional methods for evaluating gene transfer and expression have been shown to be of limited value in the clinical arena. Consequently there is a real need to develop new methods that could be repeatedly and safely performed in patients for such purposes. Molecular imaging techniques for gene expression monitoring have been developed and successfully used in animal models, but their sensitivity and reproducibility need to be tested and validated in human studies. In this review, we present the current status of gene therapy-based anticancer strategies and show how molecular imaging, and more specifically radionuclide-based approaches, can be used in gene therapy procedures for oncological applications in humans. The basis of gene expression imaging is described and specific uses of these non-invasive procedures for gene therapy monitoring illustrated. Molecular imaging of transgene expression in humans and evaluation of response to gene-based therapeutic procedures are considered. The advantages of molecular imaging for whole-body monitoring of transgene expression as a way to permit measurement of important parameters in both target and non-target organs are also analyzed. The relevance of this technology for evaluation of the necessary vector dose and how it can be used to improve vector design are also examined. Finally, the advantages of designing a gene therapy-based clinical trial with imaging fully integrated from the very beginning are discussed and future perspectives for the development of these applications outlined. (orig.)

  10. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.
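The core idea evaluated in this record, transform then discard small coefficients, can be sketched with a single-level 2-D Haar transform (a toy illustration, not the coder studied in the paper; real codecs add multi-level transforms, quantization and entropy coding):

```python
import numpy as np

def haar2d(img):
    # One level of the 2-D Haar wavelet transform: average and
    # detail subbands along columns, then along rows.
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    row = np.hstack([a, d])
    a2 = (row[0::2, :] + row[1::2, :]) / 2.0
    d2 = (row[0::2, :] - row[1::2, :]) / 2.0
    return np.vstack([a2, d2])

def threshold(coeffs, keep=0.25):
    # Zero all but the largest-magnitude fraction of coefficients;
    # this lossy step is what makes the representation compressible.
    t = np.quantile(np.abs(coeffs), 1.0 - keep)
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)

img = np.arange(64, dtype=float).reshape(8, 8)
c = threshold(haar2d(img), keep=0.25)
assert np.count_nonzero(c) < c.size  # most coefficients dropped
```

For pathology images the concern raised in the paper is precisely which coefficients get dropped: fine diagnostic texture lives in the detail subbands, which is why expert visual evaluation accompanied the objective measures.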

  11. A possible application of magnetic resonance imaging for pharmaceutical research.

    Science.gov (United States)

    Kowalczuk, Joanna; Tritt-Goc, Jadwiga

    2011-03-18

    Magnetic resonance imaging (MRI) is a non-destructive and non-invasive method; experiments can be conducted in situ, allowing samples and different processes to be studied in vitro or in vivo, with 1D, 2D or 3D imaging. MRI is nowadays most widely used in medicine as a clinical diagnostic tool, but has still seen limited application in the food and pharmaceutical sciences. The different imaging pulse sequences of MRI allow processes to be imaged over a wide time scale, from ms (dissolution of compact tablets) to hours (hydration of drug delivery systems), for mobile as well as rigid spins, usually protons. The paper gives examples of the application of MRI to in vitro imaging of pharmaceutical dosage forms based on hydroxypropyl methylcellulose, focusing on water penetration, diffusion, polymer swelling, and drug release, characterized with respect to other physical parameters such as pH and the molecular weight of the polymer. Tetracycline hydrochloride was used as a model drug. NMR imaging of density distributions and the fast kinetics of the dissolution behavior of compact tablets is presented for paracetamol tablets. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Perspectives on Imaging: Advanced Applications. Introduction and Overview.

    Science.gov (United States)

    Lynch, Clifford A.; Lunin, Lois F.

    1991-01-01

    Provides an overview of six articles that address relationships between electronic imaging technology and information science. Articles discuss the areas of technology; applications in the fields of visual arts, medicine, and textile history; conceptual foundations; and future visions, including work in virtual reality and cyberspace. (LRW)

  13. Application of an image processing software for quantitative autoradiography

    International Nuclear Information System (INIS)

    Sobeslavsky, E.; Bergmann, R.; Kretzschmar, M.; Wenzel, U.

    1993-01-01

    The present communication deals with the utilization of an image processing device for quantitative whole-body autoradiography, cell counting and also for interpretation of chromatograms. It is shown that the system parameters allow an adequate and precise determination of optical density values. Also shown are the main error sources limiting the applicability of the system. (orig.)

  14. Hexabundles: imaging fibre arrays for low-light astronomical applications

    DEFF Research Database (Denmark)

    Bland-Hawthorn, Joss; Bryant, Julie; Robertson, Gordon

    2010-01-01

    We demonstrate for the first time an imaging fibre bundle (“hexabundle”) that is suitable for low-light applications in astronomy. The most successful survey instruments at optical-infrared wavelengths today have obtained data on up to a million celestial sources using hundreds of multimode fibre...

  15. Design and applications of Computed Industrial Tomographic Imaging System (CITIS)

    International Nuclear Information System (INIS)

    Ramakrishna, G.S.; Umesh Kumar; Datta, S.S.; Rao, S.M.

    1996-01-01

    Computed tomographic imaging is an advanced technique for nondestructive testing (NDT) and examination. For the first time in India, a computer-aided tomography system for testing industrial components has been indigenously developed at BARC and successfully demonstrated. In addition to Computed Tomography (CT), the system can also perform Digital Radiography (DR), making it a powerful tool for NDT applications in the nuclear, space and allied fields. The authors have developed a computed industrial tomographic imaging system with a Cesium-137 gamma radiation source for nondestructive examination of engineering and industrial specimens. This presentation highlights the design and development of a prototype system and its software for image reconstruction, simulation and display. The paper also describes results obtained with several test specimens, current development, and the possibility of using neutrons as well as high-energy x-rays in computed tomography. (author)

  16. BisQue: cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery

    Science.gov (United States)

    Fedorov, D.; Miller, R. J.; Kvilekval, K. G.; Doheny, B.; Sampson, S.; Manjunath, B. S.

    2016-02-01

    Logistical and financial limitations of underwater operations are inherent in marine science, including biodiversity observation. Imagery is a promising way to address these challenges, but the diversity of organisms thwarts simple automated analysis. Recent developments in computer vision methods, such as convolutional neural networks (CNN), are promising for automated classification and detection tasks but are typically very computationally expensive and require extensive training on large datasets. Therefore, managing and connecting distributed computation, large storage and human annotations of diverse marine datasets is crucial for effective application of these methods. BisQue is a cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery and associated data. Designed to hide the complexity of distributed storage, large computational clusters, diversity of data formats and inhomogeneous computational environments behind a user-friendly web-based interface, BisQue is built around the idea of flexible and hierarchical annotations defined by the user. Such textual and graphical annotations can describe captured attributes and the relationships between data elements. Annotations are powerful enough to describe cells in fluorescent 4D images, fish species in underwater videos and kelp beds in aerial imagery. Presently we are developing BisQue-based analysis modules for automated identification of benthic marine organisms. Recent experiments with drop-out- and CNN-based classification of several thousand annotated underwater images demonstrated an overall accuracy above 70% for the 15 best performing species and above 85% for the top 5 species. Based on these promising results, we have extended BisQue with a CNN-based classification system allowing continuous training on user-provided data.

  17. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment

    Directory of Open Access Journals (Sweden)

    Meng Kuan eLin

    2013-07-01

    Full Text Available Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and the user is a systematic process in service interpretation. The use of integrated medical services for the management and viewing of imaging data, in combination with a mobile visualization tool, can greatly facilitate data analysis and interpretation. This paper presents an integrated mobile application and digital imaging processing service, called M-DIP. The objectives of the system are to (1) automate the direct tiling, conversion and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up the querying of imaging measurements; and (3) display high-level three-dimensional images in real-world coordinates. In addition, M-DIP works on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server and a mobile application logic layer realizing user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a visualization tool in the neuroinformatics field to speed up interpretation services.

  18. Alignment-Annotator web server: rendering and annotating sequence alignments.

    Science.gov (United States)

    Gille, Christoph; Fähling, Michael; Weyand, Birgit; Wieland, Thomas; Gille, Andreas

    2014-07-01

    Alignment-Annotator is a novel web service designed to generate interactive views of annotated nucleotide and amino acid sequence alignments (i) de novo and (ii) embedded in other software. All computations are performed server-side. Interactivity is implemented in HTML5, a language native to web browsers. The alignment is initially displayed using default settings and can be modified through the graphical user interface. For example, individual sequences can be reordered or deleted using drag and drop, amino acid color code schemes can be applied, and annotations can be added. Annotations can be made manually or imported (from BioDAS servers, UniProt, the Catalytic Site Atlas and the PDB). Some edits take immediate effect while others require server interaction and may take a few seconds to execute. The final alignment document can be downloaded as a zip archive containing the HTML files. Because of the use of HTML, the resulting interactive alignment can be viewed on any platform, including Windows, Mac OS X, Linux, Android and iOS, in any standard web browser. Importantly, no plugins or Java are required, and therefore Alignment-Annotator represents the first interactive browser-based alignment visualization. http://www.bioinformatics.org/strap/aa/ and http://strap.charite.de/aa/. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. PET imaging in pediatric neuroradiology: current and future applications

    International Nuclear Information System (INIS)

    Kim, Sunhee; Salamon, Noriko; Jackson, Hollie A.; Blueml, Stefan; Panigrahy, Ashok

    2010-01-01

    Molecular imaging with positron emitting tomography (PET) is widely accepted as an essential part of the diagnosis and evaluation of neoplastic and non-neoplastic disease processes. PET has expanded its role from the research domain into clinical application for oncology, cardiology and neuropsychiatry. More recently, PET is being used as a clinical molecular imaging tool in pediatric neuroimaging. PET is considered an accurate and noninvasive method to study brain activity and to understand pediatric neurological disease processes. In this review, specific examples of the clinical use of PET are given with respect to pediatric neuroimaging. The current use of co-registration of PET with MR imaging is exemplified in regard to pediatric epilepsy. The current use of PET/CT in the evaluation of head and neck lymphoma and pediatric brain tumors is also reviewed. Emerging technologies including PET/MRI and neuroreceptor imaging are discussed. (orig.)

  20. PET imaging in pediatric neuroradiology: current and future applications

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sunhee [Children' s Hospital of Pittsburgh of UPMC, Department of Radiology, Pittsburgh, PA (United States); Salamon, Noriko [UCLA David Geffen School of Medicine at UCLA, Department of Radiology, Ronald Reagan UCLA Medical Center, Los Angeles, CA (United States); Jackson, Hollie A.; Blueml, Stefan [Keck School of Medicine of USC, Department of Radiology, Childrens Hospital Los Angeles, Los Angeles, CA (United States); Panigrahy, Ashok [Children' s Hospital of Pittsburgh of UPMC, Department of Radiology, Pittsburgh, PA (United States); Keck School of Medicine of USC, Department of Radiology, Childrens Hospital Los Angeles, Los Angeles, CA (United States)

    2010-01-15

    Molecular imaging with positron emitting tomography (PET) is widely accepted as an essential part of the diagnosis and evaluation of neoplastic and non-neoplastic disease processes. PET has expanded its role from the research domain into clinical application for oncology, cardiology and neuropsychiatry. More recently, PET is being used as a clinical molecular imaging tool in pediatric neuroimaging. PET is considered an accurate and noninvasive method to study brain activity and to understand pediatric neurological disease processes. In this review, specific examples of the clinical use of PET are given with respect to pediatric neuroimaging. The current use of co-registration of PET with MR imaging is exemplified in regard to pediatric epilepsy. The current use of PET/CT in the evaluation of head and neck lymphoma and pediatric brain tumors is also reviewed. Emerging technologies including PET/MRI and neuroreceptor imaging are discussed. (orig.)

  1. Real-time image mosaicing for medical applications.

    Science.gov (United States)

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.
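The fast initial alignment from robotic position sensing that the abstract describes can be illustrated with a toy shift-and-paste mosaic. This is a minimal sketch, not the authors' system: the function name, the pixel offsets standing in for sensor readings, and the overwrite blending are illustrative assumptions (the paper's 5-d.o.f. sensing and hand-eye calibration are not modeled).

```python
import numpy as np

def mosaic_from_poses(frames, offsets_px):
    """Place frames onto a shared canvas using sensor-reported
    (row, col) offsets -- the fast initial alignment step.
    Later frames overwrite earlier ones in overlap regions."""
    h, w = frames[0].shape
    rows = [r for r, _ in offsets_px]
    cols = [c for _, c in offsets_px]
    H = max(rows) - min(rows) + h
    W = max(cols) - min(cols) + w
    canvas = np.zeros((H, W), dtype=frames[0].dtype)
    for img, (r, c) in zip(frames, offsets_px):
        r0, c0 = r - min(rows), c - min(cols)
        canvas[r0:r0 + h, c0:c0 + w] = img
    return canvas

# two overlapping 4x4 tiles, second shifted right by 2 pixels
a = np.ones((4, 4), dtype=np.uint8)
b = np.full((4, 4), 2, dtype=np.uint8)
m = mosaic_from_poses([a, b], [(0, 0), (0, 2)])
print(m.shape)  # (4, 6)
```

In the real system this sensor-based placement would seed an image-based refinement, which is what makes the overall mosaicing fast and free of cumulative drift.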

  2. Fast imaging applications in the Nuclear Test Program

    International Nuclear Information System (INIS)

    Lear, R.

    1983-01-01

    Applications of fast imaging employ both streak cameras and fast framing techniques. Image intensifier tubes are gated to provide fast two-dimensional shutters of 2 to 3 ns duration with shutter ratios of greater than 10^6 and resolution greater than 10^4 pixels. Shutters of less than 1 ns have been achieved with experimental tubes. Characterization data demonstrate the importance of tube and pulser design. Streak cameras are used to simultaneously record temporal and intensity information from up to 200 spatial points. Streak cameras are combined with remote readout for downhole uses and are coupled to fiber-optic cables for uphole uses. Optical wavelength multiplexing is being studied as a means of compressing additional image data onto optical fibers. Performance data demonstrate trade-offs between image resolution and system sensitivity.

  3. Early experiences with crowdsourcing airway annotations in chest CT

    DEFF Research Database (Denmark)

    Cheplygina, Veronika; Perez-Rovira, Adria; Kuo, Wieying

    2016-01-01

    Measuring airways in chest computed tomography (CT) images is important for characterizing diseases such as cystic fibrosis, yet very time-consuming to perform manually. Machine learning algorithms offer an alternative, but need large sets of annotated data to perform well. We investigate whether...... a number of further research directions and provide insight into the challenges of crowdsourcing in medical images from the perspective of first-time users....

  4. Reconstruction of a ring applicator using CT imaging: impact of the reconstruction method and applicator orientation

    International Nuclear Information System (INIS)

    Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell; Knutsen, Bjoern Helge; Roeislien, Jo; Olsen, Dag Rune

    2007-01-01

    The purpose of this study is to investigate whether the method of applicator reconstruction and/or the applicator orientation influence the dose calculation to points around the applicator for brachytherapy of cervical cancer with CT-based treatment planning. A phantom, containing a fixed ring applicator set and six lead pellets representing dose points, was used. The phantom was CT scanned with the ring applicator at four different angles relative to the image plane. In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR), (2) reconstruction in multiplanar reconstructed images (MPR), and (3) library plans using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method was significantly lower (p < 0.05) than for the DR and MPR methods for all but two points. All applicator orientations had similar dose calculation reproducibility. Using library plans for applicator reconstruction gives the most reproducible dose calculation. However, with restrictive guidelines for applicator reconstruction the uncertainties for all methods are low compared to other factors influencing the accuracy of brachytherapy.

  5. Imaging brain microstructure with diffusion MRI: practicality and applications.

    Science.gov (United States)

    Alexander, Daniel C; Dyrby, Tim B; Nilsson, Markus; Zhang, Hui

    2017-11-29

    This article gives an overview of microstructure imaging of the brain with diffusion MRI and reviews the state of the art. The microstructure-imaging paradigm aims to estimate and map microscopic properties of tissue using a model that links these properties to the voxel scale MR signal. Imaging techniques of this type are just starting to make the transition from the technical research domain to wide application in biomedical studies. We focus here on the practicalities of both implementing such techniques and using them in applications. Specifically, the article summarizes the relevant aspects of brain microanatomy and the range of diffusion-weighted MR measurements that provide sensitivity to them. It then reviews the evolution of mathematical and computational models that relate the diffusion MR signal to brain tissue microstructure, as well as the expanding areas of application. Next we focus on practicalities of designing a working microstructure imaging technique: model selection, experiment design, parameter estimation, validation, and the pipeline of development of this class of technique. The article concludes with some future perspectives on opportunities in this topic and expectations on how the field will evolve in the short-to-medium term. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Laser Speckle Contrast Imaging: theory, instrumentation and applications.

    Science.gov (United States)

    Senarathna, Janaka; Rege, Abhishek; Li, Nan; Thakor, Nitish V

    2013-01-01

    Laser Speckle Contrast Imaging (LSCI) is a wide-field-of-view, non-scanning optical technique for observing blood flow. Speckles are produced when coherent light scattered back from biological tissue is diffracted through the limiting aperture of focusing optics. Mobile scatterers cause the speckle pattern to blur; a model can be constructed by inversely relating the degree of blur, termed speckle contrast, to the scatterer speed. In tissue, red blood cells are the main source of moving scatterers; therefore, blood flow acts as a virtual contrast agent, outlining blood vessels. The spatial resolution (~10 μm) and temporal resolution (10 ms to 10 s) of LSCI can be tailored to the application. Restricted by the penetration depth of light, LSCI can only visualize superficial blood flow. Additionally, due to its non-scanning nature, LSCI is unable to provide depth-resolved images. The simple setup and independence from exogenous contrast agents have made LSCI a popular tool for studying vascular structure and blood flow dynamics. We discuss the theory and practice of LSCI and critically analyze its merit in major areas of application such as retinal imaging, imaging of skin perfusion, and imaging of neurophysiology.
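The blur model the abstract mentions is usually quantified as the spatial speckle contrast K = sigma / mean computed in a small sliding window; lower K means more motion blur and hence faster flow. A minimal sketch, assuming SciPy is available (the 7x7 window and the synthetic exponential speckle pattern are illustrative choices, not from the paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, win=7):
    """Spatial speckle contrast K = sigma/mean over a win x win
    neighbourhood; lower K indicates more blur, i.e. faster flow."""
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, win)
    mean_sq = uniform_filter(raw * raw, win)
    var = np.clip(mean_sq - mean * mean, 0.0, None)  # guard fp round-off
    return np.sqrt(var) / np.maximum(mean, 1e-12)

rng = np.random.default_rng(0)
static = rng.exponential(1.0, (64, 64))  # fully developed static speckle, K near 1
K = speckle_contrast(static)
print(K.shape)  # (64, 64)
```

A perfectly blurred (constant) region would give K near 0, which is why the contrast map directly outlines perfused vessels.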

  7. Secure and Efficient Transmission of Hyperspectral Images for Geosciences Applications

    Science.gov (United States)

    Carpentieri, Bruno; Pizzolante, Raffaele

    2017-12-01

    Hyperspectral images are acquired through air-borne or space-borne special cameras (sensors) that collect information coming from the electromagnetic spectrum of the observed terrain. Hyperspectral remote sensing and hyperspectral images are used for a wide range of purposes: originally they were developed for mining applications and for geology, because such images can correctly identify various types of underground minerals by analysing the reflected spectra, but their usage has spread to other application fields, such as ecology, military and surveillance, historical research and even archaeology. The large amount of data produced by the hyperspectral sensors, the fact that these images are acquired at high cost by air-borne sensors, and the fact that they are generally transmitted to a base make it necessary to provide an efficient and secure transmission protocol. In this paper, we propose a novel framework that allows secure and efficient transmission of hyperspectral images by combining a reversible invisible watermarking scheme, used in conjunction with digital signature techniques, and a state-of-the-art predictive lossless compression algorithm.
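The pairing of lossless compression with an authenticity check can be sketched as below. This is an illustrative stand-in, not the paper's framework: zlib replaces the predictive lossless coder, HMAC-SHA256 replaces the digital-signature and reversible-watermarking machinery, and the function names are invented for the example.

```python
import hashlib
import hmac
import zlib

def pack(cube_bytes: bytes, key: bytes) -> bytes:
    """Compress losslessly, then append a 32-byte authentication tag
    over the compressed payload."""
    payload = zlib.compress(cube_bytes, level=9)
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def unpack(blob: bytes, key: bytes) -> bytes:
    """Verify the tag before decompressing; reject tampered data."""
    payload, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return zlib.decompress(payload)

key = b"shared-secret"
data = bytes(range(256)) * 64  # stand-in for hyperspectral cube bytes
assert unpack(pack(data, key), key) == data
```

A real deployment would use asymmetric signatures rather than a shared key, so the ground station can verify origin without being able to forge data.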

  8. Open-source software platform for medical image segmentation applications

    Science.gov (United States)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphic user interface (GUI). We present the object-oriented design and the general architecture which consist of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework for segmenting different real-case medical image scenarios on public available datasets including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.

  9. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment.

    Science.gov (United States)

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can greatly facilitate data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the direct tiling, conversion and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display images at multiple zoom levels in three dimensions in real-world coordinates. In addition, M-DIP works on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer that realizes user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. Together with mobile applications, this system establishes a virtualization tool in the neuroinformatics field to speed up interpretation services.

  10. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
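The core idea, spending bits on diagnostic regions and degrading only the background, can be sketched with a simple region-dependent quantisation before entropy coding. The mask, quantisation step, and zlib back-end are illustrative assumptions; the paper's actual scheme applies standard coders such as wavelets in a context-sensitive manner.

```python
import zlib

import numpy as np

def content_based_encode(img, roi_mask, bg_step=16):
    """Keep diagnostically relevant pixels exact; quantise the
    background with step `bg_step` so it entropy-codes harder."""
    out = img.copy()
    out[~roi_mask] = (out[~roi_mask] // bg_step) * bg_step
    return zlib.compress(out.tobytes(), 9), out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (128, 128), dtype=np.uint8)
roi = np.zeros_like(img, dtype=bool)
roi[32:96, 32:96] = True  # pretend-diagnostic region

blob, recon = content_based_encode(img, roi)
plain = zlib.compress(img.tobytes(), 9)
assert np.array_equal(recon[roi], img[roi])  # ROI preserved exactly
```

Because the background alphabet shrinks from 256 to 16 symbols, the coded stream is smaller than compressing the untouched image, while every pixel inside the diagnostic mask survives bit-exactly.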

  11. Public Relations: Selected, Annotated Bibliography.

    Science.gov (United States)

    Demo, Penny

    Designed for students and practitioners of public relations (PR), this annotated bibliography focuses on recent journal articles and ERIC documents. The 34 citations include the following: (1) surveys of public relations professionals on career-related education; (2) literature reviews of research on measurement and evaluation of PR and…

  12. Persuasion: A Selected, Annotated Bibliography.

    Science.gov (United States)

    McDermott, Steven T.

    Designed to reflect the diversity of approaches to persuasion, this annotated bibliography cites materials selected for their contribution to that diversity as well as for being relatively current and/or especially significant representatives of particular approaches. The bibliography starts with a list of 17 general textbooks on approaches to…

  13. The surplus value of semantic annotations

    NARCIS (Netherlands)

    Marx, M.

    2010-01-01

    We compare the costs of semantic annotation of textual documents to its benefits for information processing tasks. Semantic annotation can improve the performance of retrieval tasks and facilitates an improved search experience through faceted search, focused retrieval, better document summaries,

  14. Systems Theory and Communication. Annotated Bibliography.

    Science.gov (United States)

    Covington, William G., Jr.

    This annotated bibliography presents annotations of 31 books and journal articles dealing with systems theory and its relation to organizational communication, marketing, information theory, and cybernetics. Materials were published between 1963 and 1992 and are listed alphabetically by author. (RS)

  15. High speed electronic imaging application in aeroballistic research

    International Nuclear Information System (INIS)

    Brown, R.R.; Parker, J.R.

    1984-01-01

    Physical and temporal restrictions imposed by modern aeroballistics have pushed imaging technology to the point where special photoconductive surfaces and high-speed support electronics are dictated. Specifications for these devices can be formulated by a methodical analysis of critical parameters and how they interact. In terms of system theory, system transfer functions and state equations can be used in optimal coupling of devices to maximize system performance. Application of these methods to electronic imaging at the Eglin Aeroballistics Research Facility is described in this report. 7 references, 14 figures, 1 table

  16. Potential Applications of PET/MR Imaging in Cardiology.

    Science.gov (United States)

    Ratib, Osman; Nkoulou, René

    2014-06-01

    Recent advances in hybrid PET/MR imaging have opened new perspectives for cardiovascular applications. Although cardiac MR imaging has gained wider adoption for routine clinical applications, PET images remain the reference in many applications for which objective analysis of metabolic and physiologic parameters is needed. In particular, in cardiovascular diseases, more specifically coronary artery disease, the use of quantitative and measurable parameters in a reproducible way is essential for the management of therapeutic decisions and patient follow-up. Functional MR images and dynamic assessment of myocardial perfusion from transit of intravascular contrast medium can provide useful criteria for identifying areas of decreased myocardial perfusion or for assessing tissue viability from late contrast enhancement of scar tissue. PET images, however, will provide more quantitative data on true tissue perfusion and metabolism. Quantitative myocardial flow can also lead to accurate assessment of coronary flow reserve. The combination of both modalities will therefore provide complementary data that can be expected to improve the accuracy and reproducibility of diagnostic procedures. But the true potential of hybrid PET/MR imaging may reside in applications beyond the domain of coronary artery disease. The combination of both modalities in assessment of other cardiac diseases such as inflammation and of other systemic diseases can also be envisioned. It is also predicted that the 2 modalities combined could help characterize atherosclerotic plaques and differentiate plaques with a high risk of rupture from stable plaques. In the future, the development of new tracers will also open new perspectives in evaluating myocardial remodeling and in assessing the kinetics of stem cell therapy in myocardial infarction. New tracers will also provide new means for evaluating alterations in cardiac innervation, angiogenesis, and even the assessment of reporter gene technologies.

  17. Modeling & imaging of bioelectrical activity principles and applications

    CERN Document Server

    He, Bin

    2010-01-01

    Over the past several decades, much progress has been made in understanding the mechanisms of electrical activity in biological tissues and systems, and for developing non-invasive functional imaging technologies to aid clinical diagnosis of dysfunction in the human body. The book will provide full basic coverage of the fundamentals of modeling of electrical activity in various human organs, such as heart and brain. It will include details of bioelectromagnetic measurements and source imaging technologies, as well as biomedical applications. The book will review the latest trends in

  18. FRACTAL IMAGE FEATURE VECTORS WITH APPLICATIONS IN FRACTOGRAPHY

    Directory of Open Access Journals (Sweden)

    Hynek Lauschmann

    2011-05-01

    The morphology of a fatigue fracture surface (caused by constant-cycle loading) is strictly related to the crack growth rate. This relation may be expressed, among other methods, by means of fractal analysis. Fractal dimension as a single numerical value is not sufficient. Two types of fractal feature vectors are discussed: multifractal and multiparametric. For the analysis of images, the box-counting method for 3D is applied with respect to the non-homogeneity of the dimensions (two in space, one in brightness). Examples of application are shown: images of several fracture surfaces are analyzed and related to crack growth rate.
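For a binary 2D image, the box-counting estimate that the abstract builds on reduces to counting occupied s x s boxes at several scales and fitting the slope of log N against log s. A minimal 2D sketch (the paper's 3D variant, which treats brightness as a third dimension, is not reproduced here):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image: count the
    occupied s x s boxes for each scale s and fit log N = -D log s + c."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]  # drop ragged edges
        blocks = trimmed.reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

square = np.ones((64, 64), dtype=bool)  # a filled plane region
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                      # a one-dimensional structure
print(round(box_counting_dimension(square), 2))  # 2.0
print(round(box_counting_dimension(line), 2))    # 1.0
```

The multifractal and multiparametric feature vectors discussed in the paper generalize this single slope into a vector of scale-dependent descriptors.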

  19. Industrial application of thermal image processing and thermal control

    Science.gov (United States)

    Kong, Lingxue

    2001-09-01

    Industrial application of infrared thermography is virtually boundless, as it can be used in any situation where there are temperature differences. This technology has been particularly widely used in the automotive industry for process evaluation and system design. In this work, a thermal image processing technique is introduced to quantitatively calculate the heat stored in a warm/hot object and, consequently, a thermal control system is proposed to accurately and actively manage the thermal distribution within the object in accordance with the heat calculated from the thermal images.

  20. Various dedicated imaging systems for routine nuclear medical applications

    International Nuclear Information System (INIS)

    Bela Kari; Tamas Gyorke; Erno Mako; Laszlo Nagy; Jozsef Turak; Oszkar Partos

    2004-01-01

    The most essential problems of nuclear medical imaging are resolution, signal/noise ratio (S/N) and sensitivity. Nowadays, the vast majority of Anger gamma cameras in clinical application use parallel projection. The main problem of this projection method is the strong dependence of image quality on the distance from the collimator surface, and any improvement in resolution with distance (i.e., reduction of image blur) significantly reduces sensitivity. The aim of our research and development work was to create imaging geometries and collimator and detector constructions optimized for particular organs (brain, heart, thyroid), where it is simultaneously possible to increase resolution and sensitivity. The main concept of the imaging geometry is based on the size, location and shape of the particular organ. For brain SPECT imaging, a multiple-head arrangement (4 heads in cylindrically symmetric approximation) with a dedicated detector design of extra-high intrinsic resolution (<2.5 mm) provides a feasible solution for routine clinical application. The imaging system was essentially designed for Tc-99m and I-123 isotopes. The application field can easily be extended to functional small-animal research and newborn studies. Very positive feedback was received on both the technical side (stability and reproducibility of the technical parameters) and the clinical side during the past 2 years of routine application. A unique, novel-conception, ultra-compact dedicated dual-head SPECT system has been created solely for 2D and 3D nuclear cardiac applications with Tc-99m and Tl-201 labeled radiopharmaceuticals. The two rectangular detectors (with <2.6 mm intrinsic resolution) are mounted fixed in 90-degree geometry and move inside a specially formed gantry. The unique and unusual gantry is designed to keep the detector heads as close as possible to the patient, while the patient is not exposed to any moving part. This special construction also

  1. The clinical application of the digital imaging in urography

    International Nuclear Information System (INIS)

    Zhu Yuelong; Xie Sumin; Zhang Li; Li Huayu

    2003-01-01

    Objective: To evaluate the clinical application of digital imaging in urography. Methods: In total, 112 patients underwent digital urography, including intravenous pyelography (IVP) in 38 cases and retrograde pyelography in 74 cases. Results: The entire urinary tract was better shown on digital imaging, which was accurate in locating urinary tract obstruction and aided qualitative diagnosis. Digital urography was especially valuable in detecting urinary calculi. In the 38 cases of IVP, the findings were normal in 5 patients, renal stone in 12, ureteral stone in 13, ureteral stenosis in 6 and nephroblastoma in 2. In the 74 cases of retrograde pyelography, benign ureteral stenosis was found in 31 patients, ureteral stone in 27, ureteral polyp in 2, urethral stone in 8 and benign urethral stenosis in 6. Conclusion: Digital imaging is of great value in the diagnosis of urinary tract lesions.

  2. Imaging of the hip and bony pelvis. Techniques and applications

    Energy Technology Data Exchange (ETDEWEB)

    Davies, A.M. [Royal Orthopaedic Hospital, Birmingham (United Kingdom). MRI Centre]; Johnson, K.J. [Princess of Wales Birmingham Children's Hospital (United Kingdom)]; Whitehouse, R.W. (eds.) [Manchester Royal Infirmary (United Kingdom). Dept. of Clinical Radiology]

    2006-07-01

    This is a comprehensive textbook on imaging of the bony pelvis and hip joint that provides a detailed description of the techniques and imaging findings relevant to this complex anatomical region. In the first part of the book, the various techniques and procedures employed for imaging the pelvis and hip are discussed in detail. The second part of the book documents the application of these techniques to the diverse clinical problems and diseases encountered. Among the many topics addressed are congenital and developmental disorders including developmental dysplasia of the hip, irritable hip and septic arthritis, Perthes' disease and avascular necrosis, slipped upper femoral epiphysis, bony and soft tissue trauma, arthritis, tumours and hip prostheses. Each chapter is written by an acknowledged expert in the field, and a wealth of illustrative material is included. This book will be of great value to musculoskeletal and general radiologists, orthopaedic surgeons and rheumatologists. (orig.)

  3. Application of Super-Resolution Image Reconstruction to Digital Holography

    Directory of Open Access Journals (Sweden)

    Zhang Shuqun

    2006-01-01

    We describe a new application of super-resolution image reconstruction to digital holography, a technique for three-dimensional information recording and reconstruction. Digital holography has suffered from the low resolution of CCD sensors, which significantly limits the size of objects that can be recorded. The existing solution to this problem is to use optics to band-limit the object to be recorded, which can cause the loss of details. Here, super-resolution image reconstruction is applied to enhance the spatial resolution of digital holograms. By introducing a global camera translation before sampling, a high-resolution hologram can be reconstructed from a set of undersampled hologram images. This permits the recording of larger objects and reduces the distance between the object and the hologram. Practical results from real and simulated holograms are presented to demonstrate the feasibility of the proposed technique.
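The grid-interleaving step behind shift-based super-resolution can be sketched for the idealized case of exact half-pixel camera translations: four undersampled frames together tile the high-resolution sampling grid. Real holograms would additionally need sub-pixel registration and deblurring; the function name and the noiseless simulation below are illustrative, not the paper's algorithm.

```python
import numpy as np

def interleave_2x2(frames):
    """Fuse four low-res frames taken at half-pixel camera shifts
    (0,0), (0,1/2), (1/2,0), (1/2,1/2) into one 2x-resolution grid --
    the simplest shift-and-add super-resolution reconstruction."""
    h, w = frames[0].shape
    hi = np.empty((2 * h, 2 * w), dtype=frames[0].dtype)
    hi[0::2, 0::2] = frames[0]
    hi[0::2, 1::2] = frames[1]
    hi[1::2, 0::2] = frames[2]
    hi[1::2, 1::2] = frames[3]
    return hi

# simulate: undersample a known high-res field at the four shifts
truth = np.arange(64, dtype=np.float64).reshape(8, 8)
lows = [truth[dy::2, dx::2] for dy, dx in ((0, 0), (0, 1), (1, 0), (1, 1))]
assert np.array_equal(interleave_2x2(lows), truth)
```

Doubling the effective sampling rate this way is what lets a fixed CCD record larger objects at a shorter object-to-hologram distance.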

  4. Optical Imaging Sensors and Systems for Homeland Security Applications

    CERN Document Server

    Javidi, Bahram

    2006-01-01

    Optical and photonic systems and devices have significant potential for homeland security. Optical Imaging Sensors and Systems for Homeland Security Applications presents original and significant technical contributions from leaders of industry, government, and academia in the field of optical and photonic sensors, systems and devices for detection, identification, prevention, sensing, security, verification and anti-counterfeiting. The chapters have recent and technically significant results, ample illustrations, figures, and key references. This book is intended for engineers and scientists in the relevant fields, graduate students, industry managers, university professors, government managers, and policy makers. Advanced Sciences and Technologies for Security Applications focuses on research monographs in the areas of -Recognition and identification (including optical imaging, biometrics, authentication, verification, and smart surveillance systems) -Biological and chemical threat detection (including bios...

  5. Quantum dots in imaging, drug delivery and sensor applications.

    Science.gov (United States)

    Matea, Cristian T; Mocan, Teodora; Tabaran, Flaviu; Pop, Teodora; Mosteanu, Ofelia; Puia, Cosmin; Iancu, Cornel; Mocan, Lucian

    2017-01-01

    Quantum dots (QDs), also known as nanoscale semiconductor crystals, are nanoparticles with unique optical and electronic properties such as bright and intense fluorescence. Since most conventional organic label dyes do not offer the possibility of near-infrared (>650 nm) emission, QDs, with their tunable optical properties, have attracted considerable interest. They possess characteristics such as good chemical stability and photostability, high quantum yield and size-tunable light emission. Different types of QDs can be excited with the same light wavelength, and their narrow emission bands can be detected simultaneously for multiple assays. There is increasing interest in the development of nano-theranostic platforms for simultaneous sensing, imaging and therapy. QDs have great potential for such applications, with notable results already published in the fields of sensors, drug delivery and biomedical imaging. This review summarizes the latest developments available in the literature regarding the use of QDs for medical applications.

  6. Electric Potential and Electric Field Imaging with Dynamic Applications & Extensions

    Science.gov (United States)

    Generazio, Ed

    2017-01-01

    The technology and methods for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for volumes to be inspected with EFI. The baseline sensor technology (e-Sensor) and its construction, optional electric field generation (quasi-static generator), and current e-Sensor enhancements (ephemeral e-Sensor) are discussed. Critical design elements of current linear and real-time two-dimensional (2D) measurement systems are highlighted, and the development of a three-dimensional (3D) EFI system is presented. Demonstrations for structural, electronic, human, and memory applications are shown. Recent work demonstrates that phonons may be used to create and annihilate electric dipoles within structures. Phonon-induced dipoles are ephemeral, and their polarization, strength, and location may be quantitatively characterized by EFI, providing a new subsurface phonon-EFI imaging technology. Results from real-time imaging of combustion and ion flow, and their measurement complications, will be discussed. Extensions to environmental, space and subterranean applications will be presented, and initial results for quantitatively characterizing material properties are shown. A wearable EFI system has been developed using fundamental EFI concepts. These new EFI capabilities are demonstrated to characterize electric charge distribution, creating a new field of study embracing areas of interest including electrostatic discharge (ESD) mitigation, manufacturing quality control, crime scene forensics, design and materials selection for advanced sensors, combustion science, on-orbit space potential, container inspection, remote characterization of electronic circuits and level of activation, dielectric morphology of

  7. Tensor valuations and their applications in stochastic geometry and imaging

    CERN Document Server

    Kiderlen, Markus

    2017-01-01

    The purpose of this volume is to give an up-to-date introduction to tensor valuations and their applications. Starting with classical results concerning scalar-valued valuations on the families of convex bodies and convex polytopes, it proceeds to the modern theory of tensor valuations. Product and Fourier-type transforms are introduced and various integral formulae are derived. New and well-known results are presented, together with generalizations in several directions, including extensions to the non-Euclidean setting and to non-convex sets. A variety of applications of tensor valuations to models in stochastic geometry, to local stereology and to imaging are also discussed.

  8. Grid Computing Application for Brain Magnetic Resonance Image Processing

    International Nuclear Information System (INIS)

    Valdivia, F; Crépeault, B; Duchesne, S

    2012-01-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.
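    The pipeline architecture described in this record (independent processes with input and output data ports, chained into sequences) can be sketched in a few lines. The stage names and port keys below are invented for illustration; they are not the actual OMS/PHP implementation.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Process:
    """A single pipeline stage: a named task mapping input ports to output ports."""
    name: str
    task: Callable[[dict], dict]

@dataclass
class Pipeline:
    processes: list = field(default_factory=list)

    def add(self, process):
        self.processes.append(process)
        return self

    def run(self, data):
        # Each stage consumes the previous stage's output ports.
        for p in self.processes:
            data = p.task(data)
        return data

# Hypothetical stages mirroring the task types listed in the abstract.
pipeline = (
    Pipeline()
    .add(Process("extract_attributes", lambda d: {**d, "dims": (256, 256, 128)}))
    .add(Process("scale", lambda d: {**d, "scaled": True}))
    .add(Process("quality_control", lambda d: {**d, "qc_report": "ok"}))
)

result = pipeline.run({"subject_id": "ADNI-0001"})
print(result["qc_report"])  # -> ok
```

    Submitting such a pipeline to an external cluster would, as in the record, only add a transport layer around `run`; the stage chaining itself is unchanged.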

  9. The image of the airport through mobile applications

    OpenAIRE

    Lázaro Florido-Benítez

    2016-01-01

    The image that airports project via their applications (apps) affects, directly or indirectly, passengers’ satisfaction. Today, airports are competing to attract more airlines and passengers to improve commercial revenues. Airport apps (as mobile marketing tools) are offering a broad range of opportunities to both passengers and airports. Apps are the best solution if airports want to improve the passenger experience as well as differentiate themselves from their competitors. The results of thi...

  10. A hyperspectral image data exploration workbench for environmental science applications

    International Nuclear Information System (INIS)

    Woyna, M.A.; Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.

    1994-01-01

    The Hyperspectral Image Data Exploration Workbench (HIDEW) software system has been developed by Argonne National Laboratory to enable analysts at Unix workstations to conveniently access and manipulate high-resolution imagery data for analysis, mapping purposes, and input to environmental modeling applications. HIDEW is fully object-oriented, including the underlying database. This system was developed as an aid to site characterization work and atmospheric research projects.

  11. A hyperspectral image data exploration workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Woyna, M.A.; Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.

    1994-08-01

    The Hyperspectral Image Data Exploration Workbench (HIDEW) software system has been developed by Argonne National Laboratory to enable analysts at Unix workstations to conveniently access and manipulate high-resolution imagery data for analysis, mapping purposes, and input to environmental modeling applications. HIDEW is fully object-oriented, including the underlying database. This system was developed as an aid to site characterization work and atmospheric research projects.

  12. Functional imaging in oncology. Clinical applications. Vol. 2

    International Nuclear Information System (INIS)

    Luna, Antonio; Vilanova, Joan C.

    2014-01-01

    Easy-to-read manual on new functional imaging techniques in oncology. Explains current clinical applications and outlines future avenues. Includes numerous high-quality illustrations to highlight the major teaching points. In the new era of functional and molecular imaging, both currently available imaging biomarkers and biomarkers under development are expected to lead to major changes in the management of oncological patients. This two-volume book is a practical manual on the various imaging techniques capable of delivering functional information on cancer, including diffusion MRI, perfusion CT and MRI, dual-energy CT, spectroscopy, dynamic contrast-enhanced ultrasonography, PET, and hybrid modalities. This second volume considers the applications and benefits of these techniques in a wide range of tumor types, including their role in diagnosis, prediction of treatment outcome, and early evaluation of treatment response. Each chapter addresses a specific malignancy and is written by one or more acclaimed experts. The lucid text is complemented by numerous high-quality illustrations that highlight key features and major teaching points.

  13. Functional imaging in oncology. Clinical applications. Vol. 2

    Energy Technology Data Exchange (ETDEWEB)

    Luna, Antonio [Case Western Reserve Univ., Cleveland, OH (United States). Dept. of Radiology; MRI Health Time Group, Jaen (Spain); Vilanova, Joan C. [Girona Univ. (Spain). Clinica Girona - Hospital Sta. Caterina; Hygino da Cruz, L. Celso Jr. (ed.) [CDPI and IRM, Rio de Janeiro (Brazil). Dept. of Radiology; Rossi, Santiago E. [Centro de Diagnostico, Buenos Aires (Argentina)

    2014-06-01

    Easy-to-read manual on new functional imaging techniques in oncology. Explains current clinical applications and outlines future avenues. Includes numerous high-quality illustrations to highlight the major teaching points. In the new era of functional and molecular imaging, both currently available imaging biomarkers and biomarkers under development are expected to lead to major changes in the management of oncological patients. This two-volume book is a practical manual on the various imaging techniques capable of delivering functional information on cancer, including diffusion MRI, perfusion CT and MRI, dual-energy CT, spectroscopy, dynamic contrast-enhanced ultrasonography, PET, and hybrid modalities. This second volume considers the applications and benefits of these techniques in a wide range of tumor types, including their role in diagnosis, prediction of treatment outcome, and early evaluation of treatment response. Each chapter addresses a specific malignancy and is written by one or more acclaimed experts. The lucid text is complemented by numerous high-quality illustrations that highlight key features and major teaching points.

  14. Ten steps to get started in Genome Assembly and Annotation [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Victoria Dominguez Del Angel

    2018-02-01

    Full Text Available As a part of the ELIXIR-EXCELERATE efforts in capacity building, we present here 10 steps to facilitate researchers getting started in genome assembly and genome annotation. The guidelines given are broadly applicable, intended to be stable over time, and cover all aspects from start to finish of a general assembly and annotation project. Intrinsic properties of genomes are discussed, as is the importance of using high quality DNA. Different sequencing technologies and generally applicable workflows for genome assembly are also detailed. We cover structural and functional annotation and encourage readers to also annotate transposable elements, something that is often omitted from annotation workflows. The importance of data management is stressed, and we give advice on where to submit data and how to make your results Findable, Accessible, Interoperable, and Reusable (FAIR).

  15. Dictionary-driven protein annotation.

    Science.gov (United States)

    Rigoutsos, Isidore; Huynh, Tien; Floratos, Aris; Parida, Laxmi; Platt, Daniel

    2002-09-01

    Computational methods seeking to automatically determine the properties (functional, structural, physicochemical, etc.) of a protein directly from the sequence have long been the focus of numerous research groups. With the advent of advanced sequencing methods and systems, the number of amino acid sequences that are being deposited in the public databases has been increasing steadily. This has in turn generated a renewed demand for automated approaches that can annotate individual sequences and complete genomes quickly, exhaustively and objectively. In this paper, we present one such approach that is centered around and exploits the Bio-Dictionary, a collection of amino acid patterns that completely covers the natural sequence space and can capture functional and structural signals that have been reused during evolution, within and across protein families. Our annotation approach also makes use of a weighted, position-specific scoring scheme that is unaffected by the over-representation of well-conserved proteins and protein fragments in the databases used. For a given query sequence, the method permits one to determine, in a single pass, the following: local and global similarities between the query and any protein already present in a public database; the likeness of the query to all available archaeal/bacterial/eukaryotic/viral sequences in the database as a function of amino acid position within the query; the character of secondary structure of the query as a function of amino acid position within the query; the cytoplasmic, transmembrane or extracellular behavior of the query; the nature and position of binding domains, active sites, post-translationally modified sites, signal peptides, etc. In terms of performance, the proposed method is exhaustive, objective and allows for the rapid annotation of individual sequences and full genomes. Annotation examples are presented and discussed in Results, including individual queries and complete genomes that were
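    The dictionary-driven idea can be illustrated with a toy sketch: scan a query sequence against a small pattern dictionary and accumulate a weighted, position-specific score profile. The patterns, the wildcard convention, and the weights below are invented for illustration; the real Bio-Dictionary and its scoring scheme are far richer.

```python
# Toy dictionary of sequence patterns -> (annotation label, weight).
# Patterns and weights are invented; the real Bio-Dictionary covers the
# natural sequence space with statistically derived patterns.
DICTIONARY = {
    "GKT": ("P-loop-like motif", 2.0),
    "CXXC": ("metal-binding-like motif", 1.5),
}

def matches(pattern, window):
    # 'X' acts as a wildcard position.
    return len(pattern) == len(window) and all(
        p == "X" or p == w for p, w in zip(pattern, window)
    )

def annotate(sequence):
    """Return a weighted, position-specific score profile plus the hit list."""
    scores = [0.0] * len(sequence)
    hits = []
    for pattern, (label, weight) in DICTIONARY.items():
        for i in range(len(sequence) - len(pattern) + 1):
            if matches(pattern, sequence[i:i + len(pattern)]):
                hits.append((i, label))
                for j in range(i, i + len(pattern)):
                    scores[j] += weight
    return scores, hits

scores, hits = annotate("MAGKTLCAACD")
print(hits)  # positions and labels of matched patterns
```

    A single pass over the query suffices, as in the record; richer annotations (secondary structure, topology, binding sites) would simply attach more metadata to each dictionary entry.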

  16. Application of Simulated Three Dimensional CT Image in Orthognathic Surgery

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun Don; Park, Chang Seo [Dept. of Dental Radiology, College of Dentistry, Yonsei University, Seoul (Korea, Republic of); Yoo, Sun Kook; Lee, Kyoung Sang [Dept. of Medical Engineering, College of Medicine, Yonsei University, Seoul (Korea, Republic of)

    1998-08-15

    In orthodontics and orthognathic surgery, cephalograms have been routine practice in the diagnosis and treatment evaluation of craniofacial deformity. However, their inherent distortion of actual lengths and angles when projecting a three-dimensional object onto a two-dimensional plane may cause errors in quantitative analysis of shape and size. It is therefore desirable that a three-dimensional object be diagnosed and evaluated three-dimensionally, and three-dimensional CT images are best suited for such analysis. Advancing clinical practice requires evaluating treatment results and comparing conditions before and after surgery, so a patient diagnosed and planned with three-dimensional computed tomography before surgery should ideally also be evaluated with three-dimensional computed tomography after surgery. However, because no standardized three-dimensional normal values yet exist, and because three-dimensional computed tomography requires expensive equipment, entails high cost, and exposes the patient to radiation, limitations remain in its application to routine practice. If a postoperative three-dimensional image can be constructed from pre- and postoperative lateral and postero-anterior cephalograms together with a preoperative three-dimensional computed tomogram, pre- and postoperative images can be compared and evaluated three-dimensionally without postoperative three-dimensional computed tomography, which would also contribute to standardizing three-dimensional normal values. This study introduced a new method in which a computer-simulated three-dimensional image was constructed from a preoperative three-dimensional computed tomogram and pre- and postoperative lateral and postero-anterior cephalograms. To validate the new method, in four dry skull cases in which the position of the mandible was displaced and in four orthognathic surgery patients, the computer-simulated three-dimensional image and the actual postoperative three-dimensional image were compared. The results were as follows. 1. 
In four cases of

  17. Application of Simulated Three Dimensional CT Image in Orthognathic Surgery

    International Nuclear Information System (INIS)

    Kim, Hyun Don; Park, Chang Seo; Yoo, Sun Kook; Lee, Kyoung Sang

    1998-01-01

    In orthodontics and orthognathic surgery, cephalograms have been routine practice in the diagnosis and treatment evaluation of craniofacial deformity. However, their inherent distortion of actual lengths and angles when projecting a three-dimensional object onto a two-dimensional plane may cause errors in quantitative analysis of shape and size. It is therefore desirable that a three-dimensional object be diagnosed and evaluated three-dimensionally, and three-dimensional CT images are best suited for such analysis. Advancing clinical practice requires evaluating treatment results and comparing conditions before and after surgery, so a patient diagnosed and planned with three-dimensional computed tomography before surgery should ideally also be evaluated with three-dimensional computed tomography after surgery. However, because no standardized three-dimensional normal values yet exist, and because three-dimensional computed tomography requires expensive equipment, entails high cost, and exposes the patient to radiation, limitations remain in its application to routine practice. If a postoperative three-dimensional image can be constructed from pre- and postoperative lateral and postero-anterior cephalograms together with a preoperative three-dimensional computed tomogram, pre- and postoperative images can be compared and evaluated three-dimensionally without postoperative three-dimensional computed tomography, which would also contribute to standardizing three-dimensional normal values. This study introduced a new method in which a computer-simulated three-dimensional image was constructed from a preoperative three-dimensional computed tomogram and pre- and postoperative lateral and postero-anterior cephalograms. To validate the new method, in four dry skull cases in which the position of the mandible was displaced and in four orthognathic surgery patients, the computer-simulated three-dimensional image and the actual postoperative three-dimensional image were compared. The results were as follows. 1. 
In four cases of

  18. Phenex: ontological annotation of phenotypic diversity.

    Directory of Open Access Journals (Sweden)

    James P Balhoff

    2010-05-01

    Full Text Available Phenotypic differences among species have long been systematically itemized and described by biologists in the process of investigating phylogenetic relationships and trait evolution. Traditionally, these descriptions have been expressed in natural language within the context of individual journal publications or monographs. As such, this rich store of phenotype data has been largely unavailable for statistical and computational comparisons across studies or integration with other biological knowledge.Here we describe Phenex, a platform-independent desktop application designed to facilitate efficient and consistent annotation of phenotypic similarities and differences using Entity-Quality syntax, drawing on terms from community ontologies for anatomical entities, phenotypic qualities, and taxonomic names. Phenex can be configured to load only those ontologies pertinent to a taxonomic group of interest. The graphical user interface was optimized for evolutionary biologists accustomed to working with lists of taxa, characters, character states, and character-by-taxon matrices.Annotation of phenotypic data using ontologies and globally unique taxonomic identifiers will allow biologists to integrate phenotypic data from different organisms and studies, leveraging decades of work in systematics and comparative morphology.

  19. Phenex: ontological annotation of phenotypic diversity.

    Science.gov (United States)

    Balhoff, James P; Dahdul, Wasila M; Kothari, Cartik R; Lapp, Hilmar; Lundberg, John G; Mabee, Paula; Midford, Peter E; Westerfield, Monte; Vision, Todd J

    2010-05-05

    Phenotypic differences among species have long been systematically itemized and described by biologists in the process of investigating phylogenetic relationships and trait evolution. Traditionally, these descriptions have been expressed in natural language within the context of individual journal publications or monographs. As such, this rich store of phenotype data has been largely unavailable for statistical and computational comparisons across studies or integration with other biological knowledge. Here we describe Phenex, a platform-independent desktop application designed to facilitate efficient and consistent annotation of phenotypic similarities and differences using Entity-Quality syntax, drawing on terms from community ontologies for anatomical entities, phenotypic qualities, and taxonomic names. Phenex can be configured to load only those ontologies pertinent to a taxonomic group of interest. The graphical user interface was optimized for evolutionary biologists accustomed to working with lists of taxa, characters, character states, and character-by-taxon matrices. Annotation of phenotypic data using ontologies and globally unique taxonomic identifiers will allow biologists to integrate phenotypic data from different organisms and studies, leveraging decades of work in systematics and comparative morphology.
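    An Entity-Quality annotation can be pictured as a small record tying a globally unique taxon identifier to an anatomical entity term and a phenotypic quality term. The sketch below is illustrative only: the ontology IDs are placeholders, not verified terms, and this is not Phenex's internal data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EQAnnotation:
    """One Entity-Quality phenotype annotation (illustrative structure)."""
    taxon: str    # globally unique taxonomic identifier
    entity: str   # anatomy ontology term (e.g. a UBERON ID)
    quality: str  # phenotypic quality ontology term (e.g. a PATO ID)

# A tiny character-by-taxon matrix flattened into EQ annotations.
# IDs below are placeholders chosen for shape, not verified ontology terms.
annotations = [
    EQAnnotation("NCBITaxon:7955", "UBERON:0002103", "PATO:0000587"),
    EQAnnotation("NCBITaxon:8078", "UBERON:0002103", "PATO:0000462"),
]

# Integration across studies becomes a simple query over shared terms.
same_entity = [a.taxon for a in annotations if a.entity == "UBERON:0002103"]
print(same_entity)
```

    The point of the shared-ontology design is visible even in this toy: once two studies use the same entity term, their annotations are directly comparable.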

  20. Evaluating Hierarchical Structure in Music Annotations.

    Science.gov (United States)

    McFee, Brian; Nieto, Oriol; Farbood, Morwaread M; Bello, Juan Pablo

    2017-01-01

    Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for "flat" descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.

  1. Evaluating Hierarchical Structure in Music Annotations

    Directory of Open Access Journals (Sweden)

    Brian McFee

    2017-08-01

    Full Text Available Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR, it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. sing this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.
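    One way to compare hierarchical annotations is via the deepest level at which two time points share a segment label (their "meet"). The agreement score below, the fraction of frame pairs assigned the same meet depth by both annotators, is a much simplified stand-in for the holistic metric derived in the paper.

```python
from itertools import combinations

def meet_depth(hierarchy, i, j):
    """Deepest level at which frames i and j carry the same segment label.
    `hierarchy` is a list of per-frame label sequences, coarse to fine."""
    depth = 0
    for level, labels in enumerate(hierarchy, start=1):
        if labels[i] == labels[j]:
            depth = level
    return depth

def hierarchy_agreement(h_a, h_b):
    """Fraction of frame pairs given the same meet depth by both
    annotations -- a simplified stand-in for the paper's metric."""
    n = len(h_a[0])
    pairs = list(combinations(range(n), 2))
    same = sum(meet_depth(h_a, i, j) == meet_depth(h_b, i, j) for i, j in pairs)
    return same / len(pairs)

# Two annotators, two levels each: coarse sections, then subsections.
annot_1 = [list("AAAABBBB"), list("aabbccdd")]
annot_2 = [list("AAAABBBB"), list("aaabccdd")]
print(round(hierarchy_agreement(annot_1, annot_2), 3))  # -> 0.893
```

    Because the score is defined over frame pairs rather than boundary positions, it compares annotations holistically across levels, which is the property the paper requires of its metric.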

  2. Deformable image registration using convolutional neural networks

    Science.gov (United States)

    Eppenhof, Koen A. J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P. W.

    2018-03-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.
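    The training-data strategy described above (applying synthetic random transformations so that no manually annotated deformations are needed) can be sketched with NumPy. The coarse random control grid below stands in for the paper's thin plate spline grid, and the grid size and noise scale are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def bilinear_upsample(coarse, h, w):
    """Bilinearly upsample a coarse (g, g) control-point grid to (h, w)."""
    g = coarse.shape[0]
    xs = np.linspace(0, g - 1, w)
    ys = np.linspace(0, g - 1, h)
    # Interpolate each coarse row along x, then along y for each column.
    rows = np.stack([np.interp(xs, np.arange(g), coarse[r]) for r in range(g)])
    return np.stack([np.interp(ys, np.arange(g), rows[:, c]) for c in range(w)],
                    axis=1)

def warp(image, field):
    """Nearest-neighbour warp of `image` by a (2, H, W) displacement field."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(yy + field[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + field[1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

# One synthetic training example: smooth random field applied to an image.
image = rng.random((32, 32))
field = np.stack([bilinear_upsample(rng.normal(0.0, 3.0, (4, 4)), 32, 32)
                  for _ in range(2)])
moved = warp(image, field)
# (image, moved) is an input pair; `field` is its free ground-truth label.
```

    Generating the deformation is what makes the approach label-free: the network can be trained to recover `field` from the pair without any manual landmark annotation.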

  3. Bioanalytical Applications of Real-Time ATP Imaging Via Bioluminescence

    Energy Technology Data Exchange (ETDEWEB)

    Gruenhagen, Jason Alan [Iowa State Univ., Ames, IA (United States)

    2003-01-01

    The research discussed within involves the development of novel applications of real-time imaging of adenosine 5'-triphosphate (ATP). ATP was detected via bioluminescence and the firefly luciferase-catalyzed reaction of ATP and luciferin. The use of a microscope and an imaging detector allowed for spatially resolved quantitation of ATP release. Employing this method, applications in both biological and chemical systems were developed. First, the mechanism by which the compound 48/80 induces release of ATP from human umbilical vein endothelial cells (HUVECs) was investigated. Numerous enzyme activators and inhibitors were utilized to probe the second messenger systems involved in release. Compound 48/80 activated a Gq-type protein to initiate ATP release from HUVECs. Ca2+ imaging along with ATP imaging revealed that activation of phospholipase C and induction of intracellular Ca2+ signaling were necessary for release of ATP. Furthermore, activation of protein kinase C inhibited the activity of phospholipase C and thus decreased the magnitude of ATP release. This novel release mechanism was compared to the existing theories of extracellular release of ATP. Bioluminescence imaging was also employed to examine the role of ATP in the field of neuroscience. The central nervous system (CNS) was dissected from the freshwater snail Lymnaea stagnalis. Electrophysiological experiments demonstrated that the neurons of the Lymnaea were not damaged by any of the components of the imaging solution. ATP was continuously released by the ganglia of the CNS for over eight hours and varied from ganglion to ganglion and within individual ganglia. Addition of the neurotransmitters K+ and serotonin increased release of ATP in certain regions of the Lymnaea CNS. Finally, the ATP imaging technique was investigated for the study of drug release systems. MCM-41-type mesoporous nanospheres were loaded with ATP and end-capped with mercaptoethanol

  4. Application of forensic image analysis in accident investigations.

    Science.gov (United States)

    Verolme, Ellen; Mieremet, Arjan

    2017-09-01

    Forensic investigations are primarily meant to obtain objective answers that can be used for criminal prosecution. Accident analyses are usually performed to learn from incidents and to prevent similar events from occurring in the future. Although the primary goal may be different, the steps in which information is gathered, interpreted and weighed are similar in both types of investigations, implying that forensic techniques can be of use in accident investigations as well. The use in accident investigations usually means that more information can be obtained from the available information than when used in criminal investigations, since the latter require a higher evidence level. In this paper, we demonstrate the applicability of forensic techniques for accident investigations by presenting a number of cases from one specific field of expertise: image analysis. With the rapid spread of digital devices and new media, a wealth of image material and other digital information has become available for accident investigators. We show that much information can be distilled from footage by using forensic image analysis techniques. These applications show that image analysis provides information that is crucial for obtaining the sequence of events and the two- and three-dimensional geometry of an accident. Since accident investigation focuses primarily on learning from accidents and prevention of future accidents, and less on the blame that is crucial for criminal investigations, the field of application of these forensic tools may be broader than would be the case in purely legal sense. This is an important notion for future accident investigations. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Methods and applications in high flux neutron imaging

    International Nuclear Information System (INIS)

    Ballhausen, H.

    2007-01-01

    This treatise develops new methods for high flux neutron radiography and high flux neutron tomography and describes some of their applications in actual experiments. Instead of single images, time series can be acquired with short exposure times due to the available high intensity. To best use the increased amount of information, new estimators are proposed, which extract accurate results from the recorded ensembles, even if the individual piece of data is very noisy and in addition severely affected by systematic errors such as an influence of gamma background radiation. The spatial resolution of neutron radiographies, usually limited by beam divergence and inherent resolution of the scintillator, can be significantly increased by scanning the sample with a pinhole-micro-collimator. This technique circumvents any limitations in present detector design and, due to the available high intensity, could be successfully tested. Imaging with scattered neutrons, as opposed to conventional total-attenuation-based imaging, determines separately the absorption and scattering cross sections within the sample. For the first time, even coherent angle-dependent scattering could be visualized space-resolved. New applications of high flux neutron imaging are presented, such as materials engineering experiments on innovative metal joints, time-resolved tomography on multilayer stacks of fuel cells under operation, and others. A new implementation of an algorithm for the algebraic reconstruction of tomography data executes even in cases of missing information, such as limited-angle tomography, and returns quantitative reconstructions. The setup of the world-leading high flux radiography and tomography facility at the Institut Laue-Langevin is presented. A comprehensive appendix covers the physical and technical foundations of neutron imaging. (orig.)
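    The ensemble-estimator idea, extracting an accurate image from many short, noisy exposures contaminated by sparse gamma events, can be illustrated with a simple robust estimator. The simulation parameters below are invented, and the per-pixel median is only a stand-in; the treatise's actual estimators are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(42)

# A smooth synthetic "true" image on a 32x32 detector.
true_image = np.fromfunction(lambda y, x: (x + y) / 62.0, (32, 32))

# Simulate an ensemble of 200 short exposures: Gaussian noise plus sparse,
# large-amplitude spikes standing in for gamma background events.
frames = true_image + rng.normal(0.0, 0.2, (200, 32, 32))
spikes = rng.random((200, 32, 32)) < 0.02
frames[spikes] += 50.0

# Naive mean vs. a robust per-pixel median over the ensemble.
mean_est = frames.mean(axis=0)
median_est = np.median(frames, axis=0)

mean_err = np.abs(mean_est - true_image).mean()
median_err = np.abs(median_est - true_image).mean()
print(f"mean error:   {mean_err:.3f}")
print(f"median error: {median_err:.3f}")
```

    The mean is biased by every spike it averages in, while the median discards them, which is the qualitative advantage the record ascribes to ensemble estimators over single long exposures.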

  6. Use of Annotations for Component and Framework Interoperability

    Science.gov (United States)

    David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.

    2009-12-01

    The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0 framework/model integration previously accomplished using API calls is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively such as implicit multithreading, and auto-documenting capabilities while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to framework by the use of specific APIs and/or data types they can more easily be reused both within the framework as well as outside of it. To study the effectiveness of an annotation based framework approach with other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. In a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the
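    Python decorators offer a close analogue of the Java/C# annotations described here: a component declares its role and data ports as metadata, and a minimal framework discovers and wires them without any API calls in the model code. The names and metadata keys below are hypothetical, loosely echoing OMS conventions rather than reproducing them.

```python
def annotate(**meta):
    """Attach declarative metadata to a component method (the Python
    analogue of a Java/C# annotation)."""
    def wrap(fn):
        fn.__component_meta__ = meta
        return fn
    return wrap

class MonthlyWaterBalance:
    """Model component: no framework API calls, only declared metadata."""

    @annotate(role="execute", inputs=["precip_mm", "pet_mm"], outputs=["runoff_mm"])
    def step(self, precip_mm, pet_mm):
        return max(precip_mm - pet_mm, 0.0)

def run_component(component, data):
    """Minimal framework: discovers the execute method via metadata and
    wires its declared input/output ports non-invasively."""
    for name in dir(component):
        fn = getattr(component, name)
        meta = getattr(fn, "__component_meta__", None)
        if meta and meta.get("role") == "execute":
            args = [data[k] for k in meta["inputs"]]
            data[meta["outputs"][0]] = fn(*args)
    return data

result = run_component(MonthlyWaterBalance(), {"precip_mm": 80.0, "pet_mm": 55.0})
print(result["runoff_mm"])  # -> 25.0
```

    Because the component never imports the framework, it can be reused outside it unchanged, which is the non-invasiveness property the study measures.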

  7. Autonomous control systems: applications to remote sensing and image processing

    Science.gov (United States)

    Jamshidi, Mohammad

    2001-11-01

    One of the main challenges for any control (or image processing) paradigm is handling complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high, its model (if available) is nonlinear and interconnected, and information on the system is so uncertain that classical techniques cannot easily handle the problem. Examples of complex systems include power networks, space robotic colonies, the national air traffic control system, integrated manufacturing plants, the Hubble Telescope, and the International Space Station. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms and genetic programming, has proven to be a powerful tool for adding autonomy and semi-autonomy to many complex systems. For such systems the size of the soft computing control architecture will be nearly infinite. In this paper new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas. These applications are satellite array formations for synthetic aperture radar interferometry (InSAR) and enhancement of analog and digital images.

  8. Electric Potential and Electric Field Imaging with Applications

    Science.gov (United States)

    Generazio, Ed

    2016-01-01

    The technology and techniques for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for "illuminating" volumes to be inspected with EFI. The baseline sensor technology, the electric field sensor (e-sensor), and its construction, optional electric field generation (quasistatic generator), and current e-sensor enhancements (ephemeral e-sensor) are discussed. Demonstrations for structural, electronic, human, and memory applications are shown. This new EFI capability is demonstrated to reveal the characterization of electric charge distribution, creating a new field of study that embraces areas of interest including electrostatic discharge mitigation, crime scene forensics, design and materials selection for advanced sensors, dielectric morphology of structures, inspection of containers, inspection for hidden objects, tether integrity, organic molecular memory, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.

  9. Functional annotation of hierarchical modularity.

    Directory of Open Access Journals (Sweden)

    Kanchana Padmanabhan

    Full Text Available In biological networks of molecular interactions in a cell, network motifs that are biologically relevant are also functionally coherent, or form functional modules. These functionally coherent modules combine in a hierarchical manner into larger, less cohesive subsystems, thus revealing one of the essential design principles of system-level cellular organization and function: hierarchical modularity. Arguably, hierarchical modularity has not been explicitly taken into consideration by most, if not all, functional annotation systems. As a result, the existing methods often fail to assign a statistically significant functional coherence score to biologically relevant molecular machines. We developed a methodology for hierarchical functional annotation. Given the hierarchical taxonomy of functional concepts (e.g., Gene Ontology) and the association of individual genes or proteins with these concepts (e.g., GO terms), our method will assign a Hierarchical Modularity Score (HMS) to each node in the hierarchy of functional modules; the HMS score and its p-value measure the functional coherence of each module in the hierarchy. While existing methods annotate each module with a set of "enriched" functional terms in a bag of genes, our complementary method provides the hierarchical functional annotation of the modules and their hierarchically organized components. A hierarchical organization of functional modules often comes as a by-product of cluster analysis of gene expression data or protein interaction data. Otherwise, our method will automatically build such a hierarchy by directly incorporating the functional taxonomy information into the hierarchy search process and by allowing multi-functional genes to be part of more than one component in the hierarchy. In addition, its underlying HMS scoring metric ensures that the functional specificity of the terms across different levels of the hierarchical taxonomy is properly treated. We have evaluated our

  10. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  11. High Throughput Multispectral Image Processing with Applications in Food Science.

    Directory of Open Access Journals (Sweden)

    Panagiotis Tsakanikas

    Full Text Available Recently, machine vision has been gaining attention in food science as well as in the food industry for food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimation and even prediction of food quality but also for detection of adulteration. Towards these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low-cost information extraction and faster quality assessment, without human intervention. The outcome of image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
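
    The core unsupervised segmentation step can be sketched as follows (a minimal illustration, not the authors' pipeline: the two-band synthetic image, component count, and brightness-based labeling are all assumptions). A Gaussian Mixture Model is fit to per-pixel spectra and each pixel is assigned to a component:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic 2-band "multispectral" image: dim background plus a
    # brighter square object (a stand-in for a food sample region).
    rng = np.random.default_rng(0)
    h, w, bands = 32, 32, 2
    image = rng.normal(0.2, 0.05, size=(h, w, bands))
    image[8:24, 8:24, :] += 0.6  # the object region

    # Unsupervised segmentation: fit a GMM to the per-pixel spectra.
    pixels = image.reshape(-1, bands)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
    labels = gmm.predict(pixels).reshape(h, w)

    # Label the component with the brighter mean spectrum as "object".
    obj = int(np.argmax(gmm.means_.sum(axis=1)))
    mask = labels == obj
    print(mask.sum())  # number of pixels assigned to the object cluster
    ```

    The paper's band-selection scheme would additionally choose which spectral bands feed the mixture model before this step.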

  12. Application of the EM algorithm to radiographic images.

    Science.gov (United States)

    Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J

    1992-01-01

    The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
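
    One of the two baseline techniques the EM algorithm is compared against, unsharp mask filtering, is simple enough to sketch directly (a generic illustration with a box-blur kernel; the paper's kernel and parameters are not specified here and the values below are assumptions):

    ```python
    import numpy as np

    def unsharp_mask(img, radius=1, amount=1.0):
        """Classic unsharp masking: img + amount * (img - blurred)."""
        # Simple box blur as the smoothing step (a stand-in for Gaussian).
        k = 2 * radius + 1
        pad = np.pad(img, radius, mode="edge")
        blurred = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                blurred += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        blurred /= k * k
        return img + amount * (img - blurred)

    # A flat image with one bright pixel: sharpening boosts the peak
    # and introduces a negative halo around it.
    img = np.zeros((5, 5))
    img[2, 2] = 1.0
    out = unsharp_mask(img)
    print(out[2, 2])  # 1 + (1 - 1/9) ≈ 1.89: the detail is amplified
    ```

    The amplification of small, high-frequency detail is exactly why such filters are the natural baseline for restoring sub-4-mm objects in radiographs.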

  13. High Throughput Multispectral Image Processing with Applications in Food Science.

    Science.gov (United States)

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision is gaining attention in food science as well as in food industry concerning food quality assessment and monitoring. Into the framework of implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in estimation and even prediction of food quality but also in detection of adulteration. Towards these applications on food science, we present here a novel methodology for automated image analysis of several kinds of food products e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low cost information extraction and faster quality assessment, without human intervention. Image processing's outcome will be propagated to the downstream analysis. The developed multispectral image processing method is based on unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high throughput approach appropriate for massive data extraction from food samples.

  14. Hierarchical Bayesian sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves

    2009-09-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
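
    The prior at the heart of this model, a weighted mixture of a mass at zero and a positive exponential, can be illustrated by sampling from it (the weight and scale values below are illustrative, not the paper's hyperparameters, which are marginalized automatically in the full model):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_spike_exponential_prior(n, w=0.9, scale=1.0):
        """Each pixel is 0 with probability w, else Exp(scale) distributed.

        Draws are sparse (mostly exactly zero) and positive, the two image
        properties the Bayes prior is designed to encode.
        """
        active = rng.random(n) >= w
        x = np.zeros(n)
        x[active] = rng.exponential(scale, size=active.sum())
        return x

    x = sample_spike_exponential_prior(10_000)
    print((x == 0).mean())  # close to w = 0.9 -> sparsity
    print((x >= 0).all())   # True -> positivity
    ```

    In the full method, a Gibbs sampler draws from the posterior that combines this prior with the Gaussian likelihood of the linear observations, rather than from the prior alone as in this sketch.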

  15. Applications of wavelets in morphometric analysis of medical images

    Science.gov (United States)

    Davatzikos, Christos; Tao, Xiaodong; Shen, Dinggang

    2003-11-01

    Morphometric analysis of medical images is playing an increasingly important role in understanding brain structure and function, as well as in understanding the way in which these change during development, aging and pathology. This paper presents three wavelet-based methods with related applications in morphometric analysis of magnetic resonance (MR) brain images. The first method handles cases where very limited datasets are available for the training of statistical shape models in the deformable segmentation. The method is capable of capturing a larger range of shape variability than the standard active shape models (ASMs) can, by using the elegant spatial-frequency decomposition of the shape contours provided by wavelet transforms. The second method addresses the difficulty of finding correspondences in anatomical images, which is a key step in shape analysis and deformable registration. The detection of anatomical correspondences is completed by using wavelet-based attribute vectors as morphological signatures of voxels. The third method uses wavelets to characterize the morphological measurements obtained from all voxels in a brain image, and the entire set of wavelet coefficients is further used to build a brain classifier. Since the classification scheme operates in a very-high-dimensional space, it can determine subtle population differences with complex spatial patterns. Experimental results are provided to demonstrate the performance of the proposed methods.
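
    The spatial-frequency decomposition of a shape contour that the first method relies on can be illustrated with a single Haar wavelet step (a generic sketch; the paper does not specify the wavelet family, and the contour values below are made up):

    ```python
    import numpy as np

    def haar_step(signal):
        """One level of the orthonormal Haar transform: averages + details."""
        even, odd = signal[0::2], signal[1::2]
        approx = (even + odd) / np.sqrt(2)   # coarse shape
        detail = (even - odd) / np.sqrt(2)   # fine boundary variation
        return approx, detail

    # A closed contour sampled at 8 points (e.g., radius vs. angle):
    # coarse coefficients capture gross shape, details capture fine
    # variation, so shape statistics can be modeled per frequency band.
    contour = np.array([1.0, 1.1, 1.0, 0.9, 1.0, 1.2, 1.0, 0.9])
    approx, detail = haar_step(contour)

    # The transform is orthonormal, so energy is preserved: the
    # coefficients are a lossless re-description of the contour.
    print(np.allclose((approx**2).sum() + (detail**2).sum(),
                      (contour**2).sum()))  # True
    ```

    Modeling the statistics of each band separately is what lets the wavelet-based shape model capture more variability than a standard ASM trained on the same limited data.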

  16. Application of DIRI dynamic infrared imaging in reconstructive surgery

    Science.gov (United States)

    Pawlowski, Marek; Wang, Chengpu; Jin, Feng; Salvitti, Matthew; Tenorio, Xavier

    2006-04-01

    We have developed the BioScanIR System based on QWIP (Quantum Well Infrared Photodetector) technology. Data collected by this sensor are processed using DIRI (Dynamic Infrared Imaging) algorithms. The combination of DIRI data processing methods with the unique characteristics of the QWIP sensor permits the creation of a new imaging modality capable of detecting minute changes in temperature at the surface of tissue and organs associated with blood perfusion due to certain diseases such as cancer, vascular disease and diabetes. The BioScanIR System has been successfully applied in reconstructive surgery to localize donor flap feeding vessels (perforators) during the pre-surgical planning stage. The device is also used in post-surgical monitoring of skin flap perfusion. Since the BioScanIR is mobile, it can be moved to the bedside for such monitoring. In comparison to other modalities, the BioScanIR can localize perforators in a single 20-second scan, with definitive results available in minutes. The algorithms used include the Fast Fourier Transform (FFT), motion artifact correction, spectral analysis and thermal image scaling. The BioScanIR is completely non-invasive and non-toxic, requires no exogenous contrast agents and is free of ionizing radiation. In addition to reconstructive surgery applications, the BioScanIR has shown promise as a useful functional imaging modality in neurosurgery, drug discovery in pre-clinical animal models, wound healing and peripheral vascular disease management.

  17. Prewarping techniques in imaging: applications in nanotechnology and biotechnology

    Science.gov (United States)

    Poonawala, Amyn; Milanfar, Peyman

    2005-03-01

    In all imaging systems, the underlying process introduces undesirable distortions that cause the output signal to be a warped version of the input. When the input to such systems can be controlled, pre-warping techniques can be employed which consist of systematically modifying the input such that it cancels out (or compensates for) the process losses. In this paper, we focus on the mask (reticle) design problem for 'optical micro-lithography', a process similar to photographic printing used for transferring binary circuit patterns onto silicon wafers. We use a pixel-based mask representation and model the above process as a cascade of convolution (aerial image formation) and thresholding (high-contrast recording) operations. The pre-distorted mask is obtained by minimizing the norm of the difference between the 'desired' output image and the 'reproduced' output image. We employ the regularization framework to ensure that the resulting masks are close-to-binary as well as simple and easy to fabricate. Finally, we provide insight into two additional applications of pre-warping techniques. First is 'e-beam lithography', used for fabricating nano-scale structures, and second is 'electronic visual prosthesis' which aims at providing limited vision to the blind by using a prosthetic retinally implanted chip capable of electrically stimulating the retinal neuron cells.
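
    The lithography forward model described above, a convolution (aerial image formation) followed by thresholding (high-contrast recording), can be sketched directly; the kernel, threshold, and feature sizes below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def print_pattern(mask, kernel, thresh=0.5):
        """Forward model: printed = threshold(convolve(mask, kernel))."""
        h, w = mask.shape
        kh, kw = kernel.shape
        pad = np.pad(mask, ((kh // 2,), (kw // 2,)), mode="constant")
        aerial = np.zeros_like(mask, dtype=float)
        for i in range(h):
            for j in range(w):
                aerial[i, j] = (pad[i:i + kh, j:j + kw] * kernel).sum()
        return (aerial >= thresh).astype(int)

    kernel = np.full((3, 3), 1 / 9)   # blur = aerial image formation
    mask = np.zeros((7, 7))
    mask[2:5, 2:5] = 1.0              # intended 3x3 square feature

    printed = print_pattern(mask, kernel)
    # The corners blur below threshold, so the printed square is eroded
    # to a plus shape: exactly the distortion pre-warping compensates for
    # by solving for a pre-distorted mask instead of the desired one.
    print(printed.sum())  # 5, not the intended 9
    ```

    Mask design then becomes the inverse problem: minimize the mismatch between the desired pattern and `print_pattern(mask, ...)` over candidate masks, with regularization pushing the solution toward binary, fabricable masks.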

  18. Automated detection of leakage in fluorescein angiography images with application to malarial retinopathy.

    Science.gov (United States)

    Zhao, Yitian; MacCormick, Ian J C; Parry, David G; Leach, Sophie; Beare, Nicholas A V; Harding, Simon P; Zheng, Yalin

    2015-06-01

    The detection and assessment of leakage in retinal fluorescein angiogram images is important for the management of a wide range of retinal diseases. We have developed a framework that can automatically detect three types of leakage (large focal, punctate focal, and vessel segment leakage) and validated it on images from patients with malarial retinopathy. This framework comprises three steps: vessel segmentation, saliency feature generation and leakage detection. We tested the effectiveness of this framework by applying it to images from 20 patients with large focal leak, 10 patients with punctate focal leak, and 5,846 vessel segments from 10 patients with vessel leakage. The sensitivity in detecting large focal, punctate focal and vessel segment leakage are 95%, 82% and 81%, respectively, when compared to manual annotation by expert human observers. Our framework has the potential to become a powerful new tool for studying malarial retinopathy, and other conditions involving retinal leakage.

  19. System overview and applications of a panoramic imaging perimeter sensor

    International Nuclear Information System (INIS)

    Pritchard, D.A.

    1995-01-01

    This paper presents an overview of the design and potential applications of a 360-degree scanning, multi-spectral intrusion detection sensor. This moderate-resolution, true panoramic imaging sensor is intended for exterior use at ranges from 50 to 1,500 meters. This Advanced Exterior Sensor (AES) simultaneously uses three sensing technologies (infrared, visible, and radar) along with advanced data processing methods to provide low false-alarm intrusion detection, tracking, and immediate visual assessment. The images from the infrared and visible detector sets and the radar range data are updated as the sensors rotate once per second. The radar provides range data with one-meter resolution. This sensor has been designed for easy use and rapid deployment to cover wide areas beyond or in place of typical perimeters, and for tactical applications around fixed or temporary high-value assets. AES prototypes are in development. Applications discussed in this paper include replacements, augmentations, or new installations at fixed sites where topological features, atmospheric conditions, environmental restrictions, ecological regulations, and archaeological features limit the use of conventional security components and systems.

  20. The application of terahertz spectroscopy and imaging in biomedicine

    International Nuclear Information System (INIS)

    Liu Shangjian; Yu Fei; Li Kai; Zhou Jing

    2013-01-01

    Terahertz (THz) science and technology is gaining increasing attention in the biomedical field. Compared with traditional medical diagnosis methods using infrared radiation, nuclear magnetic resonance, X-rays or ultrasound, THz radiation has low energy, high spatial resolution, a broad spectral range, and is a reliable means of imaging for the human body. Terahertz waves have strong penetration and high fingerprint specificity, so they can play an important role in drug detection and identification. This paper reviews the special techniques based on conventional THz time-domain setups in disease detection and drug identification. With regard to the biomedical fields, we focus on the application of THz radiation in studies of skin tissue, gene expression, cells, cancer imaging, the quantitative analysis of drugs, and so on. We also present an overview of the future challenges and prospects of THz research in medicine. (authors)

  1. Turbulent structure of concentration plumes through application of video imaging

    Energy Technology Data Exchange (ETDEWEB)

    Dabberdt, W.F.; Martin, C. [National Center for Atmospheric Research, Boulder, CO (United States); Hoydysh, W.G.; Holynskyj, O. [Environmental Science & Services Corp., Long Island City, NY (United States)

    1994-12-31

    Turbulent flows and dispersion in the presence of building wakes and terrain-induced local circulations are particularly difficult to simulate with numerical models or measure with conventional fluid modeling and ambient measurement techniques. The problem stems from the complexity of the kinematics and the difficulty in making representative concentration measurements. New laboratory video imaging techniques are able to overcome many of these limitations and are being applied to study a range of difficult problems. Here the authors apply "tomographic" video imaging techniques to the study of the turbulent structure of an ideal elevated plume and the relationship of short-period peak concentrations to long-period average values. A companion paper extends application of the technique to characterization of turbulent plume-concentration fields in the wake of a complex building configuration.

  2. Application of imaging plate neutron detector to neutron radiography

    CERN Document Server

    Fujine, S; Kamata, M; Etoh, M

    1999-01-01

    As an imaging plate neutron detector (IP-ND) with high resolution, high sensitivity and wide dynamic range has become available for thermal neutron radiography (TNR), some basic characteristics of the IP-ND system were measured at the E-2 facility of the KUR. After the basic performance of the IP was studied, images of high quality were obtained at a neutron fluence of 2 to 7×10⁸ n cm⁻². It was found that the IP-ND system with Gd₂O₃ as the neutron converter material has a higher sensitivity to gamma rays than the conventional film method. As a successful example, clear radiographs of the flat view of fuel side plates with boron burnable poison were obtained. An application of the IP-ND system to neutron radiography (NR) is presented in this paper.

  3. Multi-target molecular imaging and its progress in research and application

    International Nuclear Information System (INIS)

    Tang Ganghua

    2011-01-01

    Multi-target molecular imaging (MMI) is an important field of research in molecular imaging. It includes multi-tracer multi-target molecular imaging (MTMI), fusion-molecule multi-target imaging (FMMI), coupling-molecule multi-target imaging (CMMI), and multi-target multifunctional molecular imaging (MMMI). In this paper, imaging modes of MMI are reviewed, and potential applications of positron emission tomography MMI in the near future are discussed. (author)

  4. Reconstruction of a ring applicator using CT imaging: impact of the reconstruction method and applicator orientation

    DEFF Research Database (Denmark)

    Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell

    2007-01-01

    in multiplanar reconstructed images (MPR) and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method...

  5. Near-field three-dimensional radar imaging techniques and applications.

    Science.gov (United States)

    Sheen, David; McMakin, Douglas; Hall, Thomas

    2010-07-01

    Three-dimensional radio frequency imaging techniques have been developed for a variety of near-field applications, including radar cross-section imaging, concealed weapon detection, ground penetrating radar imaging, through-barrier imaging, and nondestructive evaluation. These methods employ active radar transceivers operating over a wide range of frequencies, from less than 100 MHz to in excess of 350 GHz, with the frequency range customized for each application. Computational wavefront reconstruction imaging techniques have been developed that optimize the resolution and illumination quality of the images. In this paper, rectilinear and cylindrical three-dimensional imaging techniques are described along with several application results.

  6. Coherent Raman scattering: Applications in imaging and sensing

    Science.gov (United States)

    Cui, Meng

    In this thesis, I discuss the theory, implementation and applications of coherent Raman scattering in imaging and sensing. A time-domain interferometric method has been developed to collect high-resolution shot-noise-limited Raman spectra over the Raman fingerprint regime and completely remove the electronic background signal in coherent Raman scattering. Compared with other existing coherent Raman microscopy methods, this time-domain approach proves to be simpler and more robust in rejecting background signal. We apply this method to image polymers and biological samples and demonstrate that the same setup can be used to collect two-photon fluorescence and self-phase modulation signals. A signal-to-noise ratio analysis shows that this time-domain method has a signal-to-noise ratio comparable to spectral-domain methods, which we confirm experimentally. The coherent Raman method is also compared with spontaneous Raman scattering. The conditions under which coherent methods provide signal enhancement are discussed, and experiments are performed to compare coherent Raman scattering with spontaneous Raman scattering under typical biological imaging conditions. A critical power, above which coherent Raman scattering is more sensitive than spontaneous Raman scattering, is experimentally determined to be ~1 mW in samples of high molecular concentration with a 75 MHz laser system. This finding is contrary to claims that coherent methods provide many orders of magnitude enhancement under comparable conditions. In addition to the far-field applications, I also discuss the combination of our time-domain coherent Raman method with near-field enhancement to explore the possibility of sensing and near-field imaging. We report the first direct time-resolved coherent Raman measurement performed on a nanostructured substrate for molecule sensing. 
The preliminary results demonstrate that sub-20 fs pulses can be used to obtain coherent Raman spectra from a small number

  7. Scaling images using their background ratio. An application in statistical comparisons of images

    International Nuclear Information System (INIS)

    Kalemis, A; Binnie, D; Bailey, D L; Flower, M A; Ott, R J

    2003-01-01

    Comparison of two medical images often requires image scaling as a pre-processing step. This is usually done with the scaling-to-the-mean or scaling-to-the-maximum techniques which, under certain circumstances, in quantitative applications may contribute a significant amount of bias. In this paper, we present a simple scaling method which assumes only that the most predominant values in the corresponding images belong to their background structure. The ratio of the two images to be compared is calculated and its frequency histogram is plotted. The scaling factor is given by the position of the peak in this histogram which belongs to the background structure. The method was tested against the traditional scaling-to-the-mean technique on simulated planar gamma-camera images which were compared using pixelwise statistical parametric tests. Both sensitivity and specificity for each condition were measured over a range of different contrasts and sizes of inhomogeneity for the two scaling techniques. The new method was found to preserve sensitivity in all cases while the traditional technique resulted in significant degradation of sensitivity in certain cases.

  8. Scaling images using their background ratio. An application in statistical comparisons of images.

    Science.gov (United States)

    Kalemis, A; Binnie, D; Bailey, D L; Flower, M A; Ott, R J

    2003-06-07

    Comparison of two medical images often requires image scaling as a pre-processing step. This is usually done with the scaling-to-the-mean or scaling-to-the-maximum techniques which, under certain circumstances, in quantitative applications may contribute a significant amount of bias. In this paper, we present a simple scaling method which assumes only that the most predominant values in the corresponding images belong to their background structure. The ratio of the two images to be compared is calculated and its frequency histogram is plotted. The scaling factor is given by the position of the peak in this histogram which belongs to the background structure. The method was tested against the traditional scaling-to-the-mean technique on simulated planar gamma-camera images which were compared using pixelwise statistical parametric tests. Both sensitivity and specificity for each condition were measured over a range of different contrasts and sizes of inhomogeneity for the two scaling techniques. The new method was found to preserve sensitivity in all cases while the traditional technique resulted in significant degradation of sensitivity in certain cases.
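
    The background-ratio scaling idea can be sketched in a few lines (synthetic images below; the paper applies it to simulated planar gamma-camera data, and the gain and histogram settings here are illustrative). The scale factor is read off the peak of the ratio histogram, which the dominant background pixels determine, so a small region of genuine change does not bias it the way the mean would:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    a = rng.normal(10.0, 0.5, size=(64, 64))            # reference image
    b = 1.7 * a * rng.normal(1.0, 0.01, size=a.shape)   # rescaled twin
    b[20:30, 20:30] *= 1.4                              # genuine local change

    # Ratio image and its frequency histogram; the background pixels
    # pile up in one bin, whose center is the scaling factor.
    ratio = b / a
    counts, edges = np.histogram(ratio, bins=100)
    i = int(np.argmax(counts))
    scale = 0.5 * (edges[i] + edges[i + 1])
    print(round(scale, 1))  # recovers the 1.7 gain despite the hot region
    ```

    Scaling-to-the-mean, by contrast, would fold the hot region's excess counts into the factor, which is the bias the histogram-peak method avoids.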

  9. MAKER2: an annotation pipeline and genome-database management tool for second-generation genome projects.

    Science.gov (United States)

    Holt, Carson; Yandell, Mark

    2011-12-22

    Second-generation sequencing technologies are precipitating major shifts with regards to what kinds of genomes are being sequenced and how they are annotated. While the first generation of genome projects focused on well-studied model organisms, many of today's projects involve exotic organisms whose genomes are largely terra incognita. This complicates their annotation, because unlike first-generation projects, there are no pre-existing 'gold-standard' gene-models with which to train gene-finders. Improvements in genome assembly and the wide availability of mRNA-seq data are also creating opportunities to update and re-annotate previously published genome annotations. Today's genome projects are thus in need of new genome annotation tools that can meet the challenges and opportunities presented by second-generation sequencing technologies. We present MAKER2, a genome annotation and data management tool designed for second-generation genome projects. MAKER2 is a multi-threaded, parallelized application that can process second-generation datasets of virtually any size. We show that MAKER2 can produce accurate annotations for novel genomes where training-data are limited, of low quality or even non-existent. MAKER2 also provides an easy means to use mRNA-seq data to improve annotation quality; and it can use these data to update legacy annotations, significantly improving their quality. We also show that MAKER2 can evaluate the quality of genome annotations, and identify and prioritize problematic annotations for manual review. MAKER2 is the first annotation engine specifically designed for second-generation genome projects. MAKER2 scales to datasets of any size, requires little in the way of training data, and can use mRNA-seq data to improve annotation quality. It can also update and manage legacy genome annotation datasets.

  10. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    Science.gov (United States)

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that image retrieval systems be scaled up to the point at which interactive systems become effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real-time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.

  11. SAS- Semantic Annotation Service for Geoscience resources on the web

    Science.gov (United States)

    Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.

    2015-12-01

    There is a growing need for increased integration across the data and model resources disseminated on the web, to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision of pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically-enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems, in order to enable queries and analysis over annotations from a single web environment. SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework that supports integration between models and data and allows semantically heterogeneous resources to interact with minimal human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First we describe how predicates and their attributes are extracted from standards and ingested into the knowledge-base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and annotating simulation models that have a web interface. SAS is a step in a broader approach to raise the quality of geoscience data and models published on the web and to let users better search, access, and use existing resources based on standard vocabularies encoded and published using semantic technologies.

  12. Application of FPGA's in Flexible Analogue Electronic Image Generator Design

    Directory of Open Access Journals (Sweden)

    Peter Kulla

    2006-01-01

    Full Text Available This paper focuses on the use of Xilinx FPGAs (Field Programmable Gate Arrays) as part of our more complex work dedicated to the design of a flexible analogue electronic image generator for application in TV measurement technique and/or TV service technique and/or the education process. The FPGA serves here as the source of the component colour R, G, B, synchronization and blanking signals. These signals are then processed and amplified in other parts of the generator, such as the NTSC/PAL source encoder and the RF modulator. The main aim of this paper is to show how, with suitable development software, FPGAs can be used in analogue TV technology.

  13. Development of biosensor based on imaging ellipsometry and biomedical applications

    Energy Technology Data Exchange (ETDEWEB)

    Jin, G., E-mail: gajin@imech.ac.c [NML, Institute of Mechanics, Chinese Academy of Sciences, 15 Bei-si-huan west Rd., Beijing 100190 (China); Meng, Y.H.; Liu, L.; Niu, Y.; Chen, S. [NML, Institute of Mechanics, Chinese Academy of Sciences, 15 Bei-si-huan west Rd., Beijing 100190 (China); Cai, Q.; Jiang, T.J. [Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101 (China)

    2011-02-28

    So far, combined with a microfluidic reactor array system, an engineered biosensor system based on imaging ellipsometry has been set up for biomedical applications such as antibody screening, hepatitis B marker detection, cancer marker spectra and virus recognition. Furthermore, the biosensor in total internal reflection (TIR) mode has been improved with a spectroscopic light source, optimized polarization settings and a low-noise CCD, bringing a 10-fold increase in sensitivity and SNR and a 50-fold lower concentration detection limit, with a throughput of 48 independent channels and a time resolution of 0.04 s.

  14. Applications of radionuclide myocardial perfusion imaging in acute coronary syndrome

    International Nuclear Information System (INIS)

    Han Pingping; Tian Yueqin

    2008-01-01

    In recent years, acute coronary syndrome (ACS) has been receiving more and more attention. Radionuclide myocardial perfusion imaging (MPI) can provide a quick, accurate diagnosis for patients with acute chest pain who cannot be diagnosed by conventional methods. The sensitivity and negative predictive value of MPI are relatively high. Moreover, MPI can be applied to detect the size and degree of ischaemia and infarction, to stratify risk and assess prognosis in patients with ACS, and to appraise the effect of treatment strategies. (authors)

  15. The application of magnetic resonance imaging in temporomandibular joint pathology

    International Nuclear Information System (INIS)

    Ehmedov, E.T.; Qahramanov, E.T.

    2007-01-01

    Diseases and injuries of the temporomandibular joint are more difficult to diagnose than other bone-joint pathologies. In 2005, for the first time, magnetic resonance imaging was implemented in the diagnosis of patients with temporomandibular joint pathology, and this research continues today. As the gold standard, magnetic resonance tomography plays a major role in the differential diagnosis of chronic arthritis, sclerosing arthrosis, deforming arthrosis and arthrosis with internal derangement. The method guarantees correct evaluation of the bone, disc and muscle structures of the joint and has therefore brought full clarity to the problem

  16. Applications of digital image analysis capability in Idaho

    Science.gov (United States)

    Johnson, K. A.

    1981-01-01

    The use of digital image analysis of LANDSAT imagery in water resource assessment is discussed. The data processing systems employed are described. The determination of urban land use conversion of agricultural land in two southwestern Idaho counties involving estimation and mapping of crop types and of irrigated land is described. The system was also applied to an inventory of irrigated cropland in the Snake River basin and establishment of a digital irrigation water source/service area data base for the basin. Application of the system to a determination of irrigation development in the Big Lost River basin as part of a hydrologic survey of the basin is also described.

  17. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  18. Si and gaas pixel detectors for medical imaging applications

    International Nuclear Information System (INIS)

    Bisogni, M. G.

    2001-01-01

    As the use of digital radiographic equipment in the morphological imaging field becomes more and more widespread, research into new, higher-performance devices by public institutions and industrial companies is in constant progress. Most of these devices are based on solid-state detectors as X-ray sensors. Semiconductor pixel detectors, originally developed in the high energy physics environment, have since been proposed as digital detectors for medical imaging applications. In this paper a digital single photon counting device, based on silicon and GaAs pixel detectors, is presented. The detector is a thin slab of semiconductor crystal with an array of 64 by 64 square pixels, 170 μm on a side, built on one side. The data read-out is performed by a VLSI integrated circuit named the Photon Counting Chip (PCC), developed within the MEDIPIX collaboration. Each chip cell geometrically matches a sensor pixel; it contains a charge preamplifier, a threshold comparator and a 15-bit pseudo-random counter, and is coupled to the detector by means of bump bonding. The most important advantages of such a system, with respect to a traditional X-ray film/screen device, are the wider linear dynamic range (3x10^4) and the higher performance in terms of MTF and DQE. Moreover, the single photon counting architecture allows detection of image contrasts lower than 3%. The electronics read-out performance as well as the imaging capabilities of the digital device will be presented. Images of mammographic phantoms acquired with a standard mammographic tube will be compared with radiographs obtained with traditional film/screen systems

  19. Digital image processing for real-time neutron radiography and its applications

    International Nuclear Information System (INIS)

    Fujine, Shigenori

    1989-01-01

    The present paper describes several digital image processing approaches for real-time neutron radiography (neutron television, NTV), such as image integration, adaptive smoothing and image enhancement, which improve image quality, and describes how to use these techniques in applications. Details invisible in direct NTV images can be revealed by digital image processing such as image reversal, gray level correction, gray scale transformation, image contouring, subtraction techniques and pseudo-colour display. For real-time application, a contouring operation and an averaging approach can also be utilized effectively. (author)
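
    The operations named above are standard pixel-level transforms. A minimal numpy sketch of three of them, image integration, gray level correction and the subtraction technique (function names are ours, not the paper's):

```python
import numpy as np

def integrate_frames(frames):
    """Image integration: average a stack of noisy video frames.
    Uncorrelated noise falls roughly as 1/sqrt(N) frames."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

def stretch(img, lo=None, hi=None):
    """Gray level correction: linearly map [lo, hi] onto [0, 1],
    clipping values outside the window."""
    lo = img.min() if lo is None else lo
    hi = img.max() if hi is None else hi
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def subtract(img, background):
    """Subtraction technique: remove a static background image so
    that small changes between exposures stand out."""
    return img - background
```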

  20. Particle image velocimetry new developments and recent applications

    CERN Document Server

    Willert, Christian E

    2008-01-01

    Particle Image Velocimetry (PIV) is a non-intrusive optical measurement technique which allows capturing several thousand velocity vectors within large flow fields instantaneously. Today, the PIV technique has spread widely and differentiated into many distinct applications, from micro flows over combustion to supersonic flows for both industrial needs and research. Over the past decade the measurement technique and the hard- and software have been improved continuously so that PIV has become a reliable and accurate method for "real life" investigations. Nevertheless there is still an ongoing process of improvements and extensions of the PIV technique towards 3D, time resolution, higher accuracy, measurements under harsh conditions and micro- and macroscales. This book gives a synopsis of the main results achieved during the EC-funded network PivNet 2 as well as a survey of the state-of-the-art of scientific research using PIV techniques in different fields of application.
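
    At its core, PIV evaluation recovers a displacement vector by cross-correlating two interrogation windows taken from successive frames. A minimal FFT-based sketch at integer-pixel precision (production PIV codes add sub-pixel peak fitting, window deformation and outlier validation):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate particle displacement between two interrogation
    windows via FFT-based circular cross-correlation.
    Returns (dy, dx) in pixels, integer precision."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Correlation theorem: corr = IFFT(conj(FFT(a)) * FFT(b))
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above N/2 around to signed shifts
    return tuple(int(p) if p <= n // 2 else int(p - n)
                 for p, n in zip(peak, corr.shape))
```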

  1. Microwave imaging for plasma diagnostics and its applications

    International Nuclear Information System (INIS)

    Mase, A.; Kogi, Y.; Ito, N.

    2007-01-01

    Microwave to millimeter-wave diagnostic techniques such as interferometry, reflectometry, scattering, and radiometry have been powerful tools for diagnosing magnetically confined plasmas. Important plasma parameters were measured to clarify the physics issues such as stability, wave phenomena, and fluctuation-induced transport. Recent advances in microwave and millimeter-wave technology together with computer technology have enabled the development of advanced diagnostics for visualization of 2D and 3D structures of plasmas. Microwave/millimeter-wave imaging is expected to be one of the most promising diagnostic methods for this purpose. We report here on the representative microwave diagnostics and their industrial applications as well as application to magnetically-confined plasmas. (author)

  2. VAT: a computational framework to functionally annotate variants in personal genomes within a cloud-computing environment.

    Science.gov (United States)

    Habegger, Lukas; Balasubramanian, Suganthi; Chen, David Z; Khurana, Ekta; Sboner, Andrea; Harmanci, Arif; Rozowsky, Joel; Clarke, Declan; Snyder, Michael; Gerstein, Mark

    2012-09-01

    The functional annotation of variants obtained through sequencing projects is generally assumed to be a simple intersection of genomic coordinates with genomic features. However, complexities arise for several reasons, including the differential effects of a variant on alternatively spliced transcripts, as well as the difficulty in assessing the impact of small insertions/deletions and large structural variants. Taking these factors into consideration, we developed the Variant Annotation Tool (VAT) to functionally annotate variants from multiple personal genomes at the transcript level as well as obtain summary statistics across genes and individuals. VAT also allows visualization of the effects of different variants, integrates allele frequencies and genotype data from the underlying individuals and facilitates comparative analysis between different groups of individuals. VAT can either be run through a command-line interface or as a web application. Finally, in order to enable on-demand access and to minimize unnecessary transfers of large data files, VAT can be run as a virtual machine in a cloud-computing environment. VAT is implemented in C and PHP. The VAT web service, Amazon Machine Image, source code and detailed documentation are available at vat.gersteinlab.org.
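
    VAT itself is implemented in C; the transcript-level subtlety the abstract describes, that a single variant can have different effects on alternatively spliced transcripts, can be illustrated with a toy interval intersection in Python (the function name and the three-way classification are ours, not VAT's):

```python
def annotate_variant(pos, transcripts):
    """Intersect a variant position with per-transcript exon models.
    A variant can be exonic in one isoform and intronic in another,
    which is why transcript-level annotation matters.
    transcripts: {name: [(exon_start, exon_end), ...]}, half-open coords.
    """
    effects = {}
    for name, exons in transcripts.items():
        start = min(s for s, _ in exons)
        end = max(e for _, e in exons)
        if not (start <= pos < end):
            effects[name] = "intergenic/flanking"
        elif any(s <= pos < e for s, e in exons):
            effects[name] = "exonic"
        else:
            effects[name] = "intronic"
    return effects
```

    For example, a position inside an intron that one isoform retains as exon sequence is reported differently for each transcript.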

  3. Applications of image processing and visualization in the evaluation of murder and assault

    Science.gov (United States)

    Oliver, William R.; Rosenman, Julian G.; Boxwala, Aziz; Stotts, David; Smith, John; Soltys, Mitchell; Symon, James; Cullip, Tim; Wagner, Glenn

    1994-09-01

    Recent advances in image processing and visualization are of increasing use in the investigation of violent crime. The Digital Image Processing Laboratory at the Armed Forces Institute of Pathology in collaboration with groups at the University of North Carolina at Chapel Hill are actively exploring visualization applications including image processing of trauma images, 3D visualization, forensic database management and telemedicine. Examples of recent applications are presented. Future directions of effort include interactive consultation and image manipulation tools for forensic data exploration.

  4. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    Science.gov (United States)

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  5. Pipeline to upgrade the genome annotations

    Directory of Open Access Journals (Sweden)

    Lijin K. Gopi

    2017-12-01

    Full Text Available The current era of functional genomics is enriched with good quality draft genomes and annotations for many thousands of species and varieties, supported by advances in next generation sequencing (NGS) technologies. Around 25,250 genomes, from organisms across various kingdoms, have been submitted to the NCBI genome resource to date. Each of these genomes was annotated using the tools and knowledge-bases available at the time of annotation. It is obvious that these annotations would improve if the same genome were annotated using improved tools and knowledge-bases. Here we present a new genome annotation pipeline, strengthened with various tools and knowledge-bases, that is capable of producing better quality annotations from the consensus of predictions from different tools. This resource also performs various additional annotations, beyond the usual gene predictions and functional annotations, covering SSRs, novel repeats, paralogs, proteins with transmembrane helices, signal peptides, etc. The resource is trained to evaluate and integrate all the predictions together to resolve overlaps and boundary ambiguities. One of its important highlights is the capability of predicting the phylogenetic relations of repeats using evolutionary trace analysis and orthologous gene clusters. We also present a case study of the pipeline in which we upgrade the genome annotation of Nelumbo nucifera (sacred lotus), demonstrating that the resource is capable of producing an improved annotation for a better understanding of the biology of various organisms.
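
    Of the additional annotations listed, SSR detection is the most self-contained to illustrate. A simplified regex-based microsatellite finder (parameters and behaviour are illustrative only, not the pipeline's actual method; for instance, homopolymer runs may be reported under several motif lengths):

```python
import re

def find_ssrs(seq, min_repeats=4, max_motif=6):
    """Locate simple sequence repeats (SSRs): motifs of 1 to
    `max_motif` bp tandemly repeated at least `min_repeats` times.
    Returns (start, motif, repeat_count) tuples."""
    ssrs = []
    for motif_len in range(1, max_motif + 1):
        # Group 1 = whole repeat run, group 2 = the repeated motif
        pattern = re.compile(
            r"(([ACGT]{%d})\2{%d,})" % (motif_len, min_repeats - 1))
        for m in pattern.finditer(seq):
            ssrs.append((m.start(), m.group(2),
                         len(m.group(1)) // motif_len))
    return ssrs
```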

  6. (reprocessed)CAGE_peaks_annotation - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file URLs: ftp://ftp.biosciencedbc.jp/archive/fantom5/datafiles/reprocessed/hg38_latest/extra/CAGE_peaks_annotation/ and ftp://ftp.biosciencedbc.jp/archive/fantom5/datafiles/reprocessed/mm10_latest/extra/CAGE_peaks_annotat...

  7. Clinical applications of SPECT/CT in imaging the extremities

    International Nuclear Information System (INIS)

    Huellner, Martin W.; Strobel, Klaus

    2014-01-01

    Today, SPECT/CT is increasingly used and available in the majority of larger nuclear medicine departments. Several applications of SPECT/CT as a supplement to or replacement for traditional conventional bone scintigraphy have been established in recent years. SPECT/CT of the upper and lower extremities is valuable in many conditions with abnormal bone turnover due to trauma, inflammation, infection, degeneration or tumour. SPECT/CT is often used in patients if conventional radiographs are insufficient, if MR image quality is impaired due to metal implants or in patients with contraindications to MR. In complex joints such as those in the foot and wrist, SPECT/CT provides exact anatomical correlation of pathological uptake. In many cases SPECT increases the sensitivity and CT the specificity of the study, increasing confidence in the final diagnosis compared to planar images alone. The CT protocol should be adapted to the clinical question and may vary from very low-dose (e.g. attenuation correction only), to low-dose for anatomical correlation, to normal-dose protocols enabling precise anatomical resolution. The aim of this review is to give an overview of SPECT/CT imaging of the extremities with a focus on the hand and wrist, knee and foot, and for evaluation of patients after joint arthroplasty. (orig.)

  8. Magnetic Resonance Imaging Dosimetry application to chemical ferrous gels

    International Nuclear Information System (INIS)

    Calmet, Ch.

    2000-10-01

    MRI dosimetry is based on the determination of relaxation parameters (T1, T2), using chemical detectors whose NMR properties are sensitive to irradiation. Difficulties in the absolute measurement of relaxation times limit the use of this technique. The aim of this work was first to develop a quantitative method for determining the T1 relaxation time of irradiated ferrous gels, so that the processes and parameters affecting the technique's sensitivity could be studied. The method's sensitivity depends first on the imaging instrumentation. The quantitative MRI method used is able to eliminate variable imager factors. The study of instrumental parameters (coil, sequence parameters) made it possible to define an imaging protocol as a function of the considered application (volume size, spatial resolution and accuracy). The method's sensitivity also depends on the detector itself, and the best composition of the ferrous gel has been determined. Dose distributions are obtained in three minutes. Comparison between MRI results and conventional dosimetry methods (notably ionisation chamber and films) shows a deviation of about 5% for single irradiations with energies in the range of 300 keV to 25 MeV. The proposed method thus constitutes a suitable technique for 3D dosimetry. (author)
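
    The reason a T1 map can be turned into a dose map is that in ferrous (Fricke-type) gels the longitudinal relaxation rate R1 = 1/T1 increases approximately linearly with absorbed dose. A sketch of the inversion, with purely illustrative calibration constants (real gels must be calibrated against a reference dosimeter):

```python
def dose_from_t1(t1_s, r1_0=0.9, sensitivity=0.05):
    """Convert a measured T1 (seconds) into absorbed dose (Gy),
    assuming a linear dose response of the relaxation rate:
        R1 = R1_0 + k * D   =>   D = (1/T1 - R1_0) / k
    r1_0 (s^-1) is the unirradiated rate and sensitivity k
    (s^-1 Gy^-1) the slope; both values here are illustrative."""
    return (1.0 / t1_s - r1_0) / sensitivity
```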

  9. Clinical applications of SPECT/CT in imaging the extremities

    Energy Technology Data Exchange (ETDEWEB)

    Huellner, Martin W. [University Hospital Zurich, Department of Medical Radiology, Division of Nuclear Medicine, Zurich (Switzerland); Strobel, Klaus [Lucerne Cantonal Hospital, Department of Nuclear Medicine and Radiology, Lucerne (Switzerland)

    2014-05-15

    Today, SPECT/CT is increasingly used and available in the majority of larger nuclear medicine departments. Several applications of SPECT/CT as a supplement to or replacement for traditional conventional bone scintigraphy have been established in recent years. SPECT/CT of the upper and lower extremities is valuable in many conditions with abnormal bone turnover due to trauma, inflammation, infection, degeneration or tumour. SPECT/CT is often used in patients if conventional radiographs are insufficient, if MR image quality is impaired due to metal implants or in patients with contraindications to MR. In complex joints such as those in the foot and wrist, SPECT/CT provides exact anatomical correlation of pathological uptake. In many cases SPECT increases the sensitivity and CT the specificity of the study, increasing confidence in the final diagnosis compared to planar images alone. The CT protocol should be adapted to the clinical question and may vary from very low-dose (e.g. attenuation correction only), to low-dose for anatomical correlation, to normal-dose protocols enabling precise anatomical resolution. The aim of this review is to give an overview of SPECT/CT imaging of the extremities with a focus on the hand and wrist, knee and foot, and for evaluation of patients after joint arthroplasty. (orig.)

  10. Design and Applications of a Multimodality Image Data Warehouse Framework

    Science.gov (United States)

    Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885

  11. Biochemical imaging of tissues by SIMS for biomedical applications

    International Nuclear Information System (INIS)

    Lee, Tae Geol; Park, Ji-Won; Shon, Hyun Kyong; Moon, Dae Won; Choi, Won Woo; Li, Kapsok; Chung, Jin Ho

    2008-01-01

    With the development of optimal surface cleaning techniques based on cluster ion beam sputtering, applications of SIMS for analyzing cells and tissues have been actively investigated. For this report, we collaborated with biomedical scientists on bio-SIMS analyses of skin and cancer tissues for biomedical diagnostics. We pay close attention to establishing a routine procedure for preparing tissue specimens and treating the surface before obtaining bio-SIMS data. Bio-SIMS was used to study two biosystems: skin tissues, to understand the effects of photoaging, and colon cancer tissues, for insight into the development of new cancer diagnostics. Time-of-flight SIMS imaging measurements were taken after surface cleaning with cluster ion bombardment by Bi_n or C_60 under varying conditions. The imaging capability of bio-SIMS, with a spatial resolution of a few microns, combined with principal component analysis reveals biologically meaningful information, but the lack of high molecular weight peaks, even with cluster ion bombardment, was a problem. This, among other issues, shows that discourse with biologists and medical doctors is critical to glean meaningful information from SIMS mass spectrometric and imaging data. For SIMS to be accepted as a routine, daily analysis tool in biomedical laboratories, practical sample handling methodologies such as surface matrix treatment, including nano-metal particles and metal coating, in addition to cluster sputtering, should be studied

  12. Ultra-thin infrared metamaterial detector for multicolor imaging applications.

    Science.gov (United States)

    Montoya, John A; Tian, Zhao-Bing; Krishna, Sanjay; Padilla, Willie J

    2017-09-18

    The next generation of infrared imaging systems requires control of fundamental electromagnetic processes - absorption, polarization, spectral bandwidth - at the pixel level to acquire desirable information about the environment with low system latency. Metamaterial absorbers have sparked interest in the infrared imaging community for their ability to enhance absorption of incoming radiation with color, polarization and/or phase information. However, most metamaterial-based sensors fail to focus incoming radiation into the active region of an ultra-thin detecting element, thus achieving poor detection metrics. Here our multifunctional metamaterial absorber is directly integrated with a novel mid-wave infrared (MWIR) and long-wave infrared (LWIR) ultra-thin (~λ/15) InAs/GaSb Type-II superlattice (T2SL) interband cascade detector. The deep sub-wavelength metamaterial detector architecture proposed and demonstrated here thus significantly improves the detection quantum efficiency (QE) and absorption of incoming radiation in a regime typically dominated by Fabry-Perot etalons. Our work evinces the ability of multifunctional metamaterials to realize efficient wavelength-selective detection across the infrared spectrum for enhanced multispectral infrared imaging applications.

  13. Diffusion weighted imaging demystified. The technique and potential clinical applications for soft tissue imaging

    Energy Technology Data Exchange (ETDEWEB)

    Ahlawat, Shivani [The Johns Hopkins Medical Institutions, The Russell H. Morgan Department of Radiology and Radiological Science, Baltimore, MD (United States); Fayad, Laura M. [The Johns Hopkins Medical Institutions, The Russell H. Morgan Department of Radiology and Radiological Science, Baltimore, MD (United States); The Johns Hopkins Medical Institutions, Department of Oncology, Baltimore, MD (United States); The Johns Hopkins Medical Institutions, Department of Orthopaedic Surgery, Baltimore, MD (United States)

    2018-03-15

    Diffusion-weighted imaging (DWI) is a fast, non-contrast technique that is readily available and easy to integrate into an existing imaging protocol. DWI with apparent diffusion coefficient (ADC) mapping offers a quantitative metric for soft tissue evaluation and provides information regarding the cellularity of a region of interest. There are several available methods of performing DWI, and artifacts and pitfalls must be considered when interpreting DWI studies. This review article will review the various techniques of DWI acquisition and utility of qualitative as well as quantitative methods of image interpretation, with emphasis on optimal methods for ADC measurement. The current clinical applications for DWI are primarily related to oncologic evaluation: For the assessment of de novo soft tissue masses, ADC mapping can serve as a useful adjunct technique to routine anatomic sequences for lesion characterization as cyst or solid and, if solid, benign or malignant. For treated soft tissue masses, the role of DWI/ADC mapping in the assessment of treatment response as well as recurrent or residual neoplasm in the setting of operative management is discussed, especially when intravenous contrast medium cannot be given. Emerging DWI applications for non-neoplastic clinical indications are also reviewed. (orig.)
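
    Under the monoexponential model S_b = S_0 · exp(-b · ADC), an ADC map can be computed from as few as two acquisitions (b = 0 and one diffusion weighting). A minimal numpy sketch (clinical software typically fits several b-values and masks the background):

```python
import numpy as np

def adc_map(s0, sb, b_value):
    """Voxel-wise apparent diffusion coefficient (mm^2/s) from two
    diffusion-weighted acquisitions, assuming monoexponential decay
    S_b = S_0 * exp(-b * ADC), i.e. ADC = ln(S_0 / S_b) / b.
    b_value in s/mm^2. Low ADC suggests restricted diffusion,
    e.g. in highly cellular tissue."""
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.log(s0 / sb) / b_value
```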

  14. Diffusion weighted imaging demystified. The technique and potential clinical applications for soft tissue imaging

    International Nuclear Information System (INIS)

    Ahlawat, Shivani; Fayad, Laura M.

    2018-01-01

Diffusion-weighted imaging (DWI) is a fast, non-contrast technique that is readily available and easy to integrate into an existing imaging protocol. DWI with apparent diffusion coefficient (ADC) mapping offers a quantitative metric for soft tissue evaluation and provides information regarding the cellularity of a region of interest. There are several available methods of performing DWI, and artifacts and pitfalls must be considered when interpreting DWI studies. This article reviews the various techniques of DWI acquisition and the utility of qualitative as well as quantitative methods of image interpretation, with emphasis on optimal methods for ADC measurement. The current clinical applications for DWI are primarily related to oncologic evaluation: for the assessment of de novo soft tissue masses, ADC mapping can serve as a useful adjunct to routine anatomic sequences for lesion characterization as cyst or solid and, if solid, benign or malignant. For treated soft tissue masses, the role of DWI/ADC mapping in the assessment of treatment response as well as of recurrent or residual neoplasm in the setting of operative management is discussed, especially when intravenous contrast medium cannot be given. Emerging DWI applications for non-neoplastic clinical indications are also reviewed. (orig.)

  15. Applications of Chemical Shift Imaging to Marine Sciences

    Directory of Open Access Journals (Sweden)

    Haakil Lee

    2010-08-01

Full Text Available The successful applications of magnetic resonance imaging (MRI) in medicine are mostly due to the non-invasive and non-destructive nature of MRI techniques. Longitudinal studies of humans and animals are easily accomplished, taking advantage of the fact that MRI does not use the harmful radiation that would be needed for plain film radiographic, computerized tomography (CT) or positron emission (PET) scans. Routine anatomic and functional studies using the strong signal from the most abundant magnetic nucleus, the proton, can also provide metabolic information when combined with in vivo magnetic resonance spectroscopy (MRS). MRS can be performed using either protons or hetero-nuclei (meaning any magnetic nuclei other than protons or 1H), including carbon (13C) or phosphorus (31P). In vivo MR spectra can be obtained from a single region of interest (ROI, or voxel) or from multiple ROIs simultaneously using the technique typically called chemical shift imaging (CSI). Here we report applications of CSI to marine samples and describe a technique to study in vivo glycine metabolism in oysters using 13C MRS 12 h after immersion in a sea water chamber dosed with [2-13C]-glycine. This is the first report of 13C CSI in a marine organism.

  16. Application of radiological imaging methods to radioactive waste characterization

    Energy Technology Data Exchange (ETDEWEB)

    Tessaro, Ana Paula Gimenes; Souza, Daiane Cristini B. de; Vicente, Roberto, E-mail: aptessaro@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

Radiological imaging technologies are most frequently used for medical diagnostic purposes but are also useful in materials characterization and other non-medical applications in research and industry. The characterization of radioactive waste packages or waste samples can also benefit from these techniques. In this paper, the application of some imaging methods is examined for the physical characterization of radioactive wastes constituted by spent ion-exchange resins and activated charcoal beds stored at the Radioactive Waste Management Department of IPEN. These wastes are generated when the filter media of the water polishing system of the IEA-R1 Nuclear Research Reactor are no longer able to maintain the required water quality and are replaced. The IEA-R1 is a 5 MW pool-type reactor, moderated and cooled by light water, and fission and activation products released from the reactor core must be continuously removed to prevent activity buildup in the water. The replacement of the sorbents is carried out by pumping from the filter tanks into several 200 L drums, each drum getting a variable amount of water. Considering that the results of radioanalytical methods to determine the concentrations of radionuclides are usually expressed on a dry basis, the amount of water must be known to calculate the total activity of each package. At first sight this is a trivial problem; it demanded, however, some effort to solve. The findings on this subject are reported in this paper. (author)

  17. Energy minimization in medical image analysis: Methodologies and applications.

    Science.gov (United States)

    Zhao, Feng; Xie, Xianghua

    2016-02-01

Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous methods and discrete methods. The former include the Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers the graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview of those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
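Of the continuous schemes listed above, gradient descent is the simplest to illustrate. A minimal sketch on a toy quadratic energy (the energy function, step size, and iteration count are illustrative assumptions, not from the survey):

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, iters=200):
    """Fixed-step gradient descent: repeatedly move against the gradient,
    x <- x - step * grad(x), to drive the energy downhill."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Toy "energy": E(x) = 0.5 * ||x - c||^2, whose gradient is (x - c)
# and whose unique minimizer is c itself.
c = np.array([2.0, -1.0])
x_min = gradient_descent(lambda x: x - c, x0=np.zeros(2))
# x_min converges to c = [2, -1]
```

Real energies in segmentation or registration are of course non-quadratic and often non-convex, which is why the survey's more elaborate continuous and discrete schemes exist; the update rule above is their common starting point.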

  18. Development of scintillation materials for medical imaging and other applications

    International Nuclear Information System (INIS)

    Melcher, C. L.

    2013-01-01

    Scintillation materials that produce pulses of visible light in response to the absorption of energetic photons, neutrons, and charged particles, are widely used in various applications that require the detection of radiation. The discovery and development of new scintillators has accelerated in recent years, due in large part to their importance in medical imaging as well as in security and high energy physics applications. Better understanding of fundamental scintillation mechanisms as well as the roles played by defects and impurities have aided the development of new high performance scintillators for both gamma-ray and neutron detection. Although single crystals continue to dominate gamma-ray based imaging techniques, composite materials and transparent optical ceramics potentially offer advantages in terms of both synthesis processes and scintillation performance. A number of promising scintillator candidates have been identified during the last few years, and several are currently being actively developed for commercial production. Purification and control of raw materials and cost effective crystal growth processes can present significant challenges to the development of practical new scintillation materials.

  19. Annotating temporal information in clinical narratives.

    Science.gov (United States)

    Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem

    2013-12-01

    Temporal information in clinical narratives plays an important role in patients' diagnosis, treatment and prognosis. In order to represent narrative information accurately, medical natural language processing (MLP) systems need to correctly identify and interpret temporal information. To promote research in this area, the Informatics for Integrating Biology and the Bedside (i2b2) project developed a temporally annotated corpus of clinical narratives. This corpus contains 310 de-identified discharge summaries, with annotations of clinical events, temporal expressions and temporal relations. This paper describes the process followed for the development of this corpus and discusses annotation guideline development, annotation methodology, and corpus quality. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

Full Text Available Abstract Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.
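The error-injection idea in the abstract can be illustrated with a toy simulation (this is not the paper's actual regression model; the agreement model, class counts, and rates below are all illustrative assumptions): corrupt annotations at known extra rates, measure pairwise agreement between independently annotated "matched" items, and extrapolate back to the intrinsic error rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupted(true_labels, error_rate, n_classes):
    """Annotations that are wrong (uniformly among the other classes)
    with probability error_rate."""
    labels = true_labels.copy()
    flip = rng.random(labels.size) < error_rate
    offs = rng.integers(1, n_classes, size=labels.size)  # never the same label
    labels[flip] = (labels[flip] + offs[flip]) % n_classes
    return labels

n, k, e0 = 200_000, 1000, 0.30          # e0 plays the "unknown" intrinsic error
truth = rng.integers(0, k, size=n)

rates = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
agree = []
for r in rates:
    total = e0 + r * (1 - e0)           # combined error after injecting at rate r
    a = corrupted(truth, total, k)      # one "sequence" annotation
    b = corrupted(truth, total, k)      # an independently matched one
    agree.append(np.mean(a == b))

# Both annotations are correct with prob (1-e0)^2 (1-r)^2, so
# sqrt(agreement) ~ (1 - e0)(1 - r): fit a line in r, read off the intercept.
slope, intercept = np.polyfit(rates, np.sqrt(agree), 1)
e0_hat = 1 - intercept                  # recovered intrinsic error rate, ~0.30
```

The point of the extrapolation is that agreement at injected rate zero cannot be observed against ground truth directly, but its value is pinned down by the linear trend across the known injected rates.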

  1. Image slicer manufacturing: from space application to mass production

    Science.gov (United States)

    Bonneville, Christophe; Cagnat, Jean-François; Laurent, Florence; Prieto, Eric; Ancourt, Gérard

    2004-09-01

This presentation aims to show the technical and industrial inputs to be taken into account for image slicer systems design and development, for different types of projects from space application to mass production for multi-IFU instruments. Cybernétix has strong experience with precision optics assembled by molecular adhesion and has already manufactured 6 prototypes of image slicer subsystems (prototypes of NIRSPEC-IFU, IFS for JWST, MUSE ...) in collaboration with the Laboratoire d'Astrophysique de Marseille (LAM) and the Centre de Recherche Astronomique de Lyon (CRAL). After a brief presentation of the principle of manufacturing and assembly, we will focus on the different performances achieved in our prototypes of slicer mirrors, pupil and slit mirror lines: an accuracy on centre of curvature position better than 15 arcsec has been obtained for a stack of 30 slices. The contribution of the slice stacking to this error is lower than 4 arcsec. In spite of very thin surfaces (~ 0.9 x 40 mm for instance), a special process guarantees a surface roughness of about 5 nm and very few digs on the slice borders. The WFE of the mini-mirror can also be measured at a stage of the manufacturing. Different environmental tests have shown that these assemblies withstand cryogenic temperatures (30 K). Then, we will describe the different solutions (spherical, flat, cylindrical surfaces) and characteristics of an image slicer that can influence the difficulty of manufacturing and metrology, cost, schedule and risks with regard to fabrication. Finally, the study of a mass production plan for MUSE (CRAL), composed of 24 image slicers of 38 slices each, that's to say 912 slices, will be presented as an example of what can be done for multi-module instruments.

  2. Development and application of efficient portal imaging solutions

    International Nuclear Information System (INIS)

    Boer, J.C.J. de

    2003-01-01

This thesis describes the theoretical derivation and clinical application of methods to measure and improve patient setup in radiotherapy by means of electronic portal imaging devices (EPIDs). The focus is on methods that (1) are simple to implement and (2) add minimal workload. First, the relation between setup errors and treatment planning margins is quantified in a population-statistics approach. A major result is that systematic errors (recurring each treatment fraction) require about three times larger margins than random errors (fluctuating from fraction to fraction). Therefore, the emphasis is on reduction of systematic setup errors using off-line correction protocols. The new no action level (NAL) protocol, aimed at significant reduction of systematic errors using a small number of imaged fractions, is proposed and investigated in detail. It is demonstrated that the NAL protocol provides final distributions of residual systematic errors at least as good as the most widely applied comparable protocol, the shrinking action level (SAL) protocol, but uses only 3 imaged fractions per patient instead of the 8-10 required by SAL. The efficacy of NAL is demonstrated retrospectively on a database of measured setup errors involving 600 patients with weekly setup measurements and prospectively in a group of 30 patients. The general properties of NAL are investigated using both analytical and Monte Carlo calculations. As an add-on to NAL, a correction verification (COVER) protocol has been developed using computer simulations combined with a risk analysis. With COVER, a single additional imaged fraction per patient is sufficient to reduce the detrimental effect of possible systematic mistakes in the execution of setup corrections to negligible levels. The high accuracy achieved with off-line setup corrections (yielding SDs of systematic errors ∼1 mm) is demonstrated in clinical studies involving 60 lung cancer patients and 31 head-and-neck patients. Furthermore
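The population-statistics decomposition behind the "three times larger margins" result can be sketched numerically. In the radiotherapy literature the systematic component Σ is the SD of per-patient mean errors and the random component σ is the RMS of per-patient SDs, and a widely quoted margin recipe is M = 2.5Σ + 0.7σ. The cohort below is simulated, and the coefficients are the generic literature values, not necessarily those derived in this thesis:

```python
import numpy as np

def setup_error_stats(errors_by_patient):
    """Decompose setup errors (one array of per-fraction errors per patient, mm):
      Sigma = SD of per-patient means  (systematic component)
      sigma = RMS of per-patient SDs   (random component)"""
    means = np.array([e.mean() for e in errors_by_patient])
    sds = np.array([e.std(ddof=1) for e in errors_by_patient])
    Sigma = means.std(ddof=1)
    sigma = np.sqrt(np.mean(sds ** 2))
    return Sigma, sigma

def margin(Sigma, sigma, k_sys=2.5, k_rand=0.7):
    """Widely quoted CTV-to-PTV margin recipe M = 2.5*Sigma + 0.7*sigma; the
    ratio of the coefficients reflects the point above that systematic errors
    demand roughly three times larger margins than random ones."""
    return k_sys * Sigma + k_rand * sigma

rng = np.random.default_rng(1)
# Simulated cohort: each patient has a fixed systematic offset (SD 2.0 mm)
# plus per-fraction random scatter (SD 1.5 mm); 20 fractions, 100 patients.
patients = [rng.normal(rng.normal(0.0, 2.0), 1.5, size=20) for _ in range(100)]
Sigma, sigma = setup_error_stats(patients)
M = margin(Sigma, sigma)
```

An off-line protocol like NAL reduces Σ (by correcting each patient's mean after a few imaged fractions) while leaving σ untouched, which is exactly why it pays off under this recipe.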

  3. Developing national on-line services to annotate and analyse underwater imagery in a research cloud

    Science.gov (United States)

    Proctor, R.; Langlois, T.; Friedman, A.; Davey, B.

    2017-12-01

Fish image annotation data are currently collected by various research, management and academic institutions globally (+100,000's of hours of deployments) with varying degrees of standardisation and limited formal collaboration or data synthesis. We present a case study of how national on-line services, developed within a domain-oriented research cloud, have been used to annotate habitat images and synthesise fish annotation data sets collected using Autonomous Underwater Vehicles (AUVs) and baited remote underwater stereo-video (stereo-BRUV). Two software tools under development have been brought together in the marine science cloud to provide marine biologists with a powerful service for image annotation. SQUIDLE+ is an online platform designed for exploration, management and annotation of georeferenced image and video data. It provides a flexible annotation framework allowing users to work with their preferred annotation schemes. We have used SQUIDLE+ to sample the habitat composition and complexity of images of the benthos collected using stereo-BRUV. GlobalArchive is designed to be a centralised repository of aquatic ecological survey data, with design principles including ease of use, secure user access, flexible data import, and the collection of any sampling and image analysis information. To easily share and synthesise data we have implemented data sharing protocols, including Open Data and synthesis Collaborations, and a spatial map to explore global datasets and filter to create a synthesis. These tools in the science cloud, together with a virtual desktop analysis suite offering Python and R environments, offer an unprecedented capability to deliver marine biodiversity information of value to marine managers and scientists alike.

  4. New amorphous-silicon image sensor for x-ray diagnostic medical imaging applications

    Science.gov (United States)

    Weisfield, Richard L.; Hartney, Mark A.; Street, Robert A.; Apte, Raj B.

    1998-07-01

This paper introduces new high-resolution amorphous silicon (a-Si) image sensors specifically configured for demonstrating film-quality medical x-ray imaging capabilities. The device utilizes an x-ray phosphor screen coupled to an array of a-Si photodiodes for detecting visible light, and a-Si thin-film transistors (TFTs) for connecting the photodiodes to external readout electronics. We have developed imagers based on a pixel size of 127 micrometer × 127 micrometer with an approximately page-size imaging area of 244 mm × 195 mm, and an array size of 1,536 data lines by 1,920 gate lines, for a total of 2.95 million pixels. More recently, we have developed a much larger imager based on the same pixel pattern, which covers an area of approximately 406 mm × 293 mm, with 2,304 data lines by 3,200 gate lines, for a total of nearly 7.4 million pixels. This is very likely the largest image sensor array and highest pixel count detector fabricated on a single substrate. Both imagers connect to a standard PC and are capable of taking an image in a few seconds. Through design rule optimization we have achieved a light-sensitive area of 57% and optimized quantum efficiency for x-ray phosphor output in the green part of the spectrum, yielding an average quantum efficiency between 500 and 600 nm of approximately 70%. At the same time, we have managed to reduce extraneous leakage currents on these devices to a few fA per pixel, which allows a very high dynamic range to be achieved. We have characterized leakage currents as a function of photodiode bias, time and temperature to demonstrate high stability over these large arrays. At the electronics level, we have adopted a new generation of low-noise, charge-sensitive amplifiers coupled to 12-bit A/D converters. Considerable attention was given to reducing electronic noise in order to demonstrate a large dynamic range (over 4,000:1) for medical imaging applications.
Through a combination of low data lines capacitance

  5. Annotated bibliography of human factors applications literature

    Energy Technology Data Exchange (ETDEWEB)

    McCafferty, D.B.

    1984-09-30

    This bibliography was prepared as part of the Human Factors Technology Project, FY 1984, sponsored by the Office of Nuclear Safety, US Department of Energy. The project was conducted by Lawrence Livermore National Laboratory, with Essex Corporation as a subcontractor. The material presented here is a revision and expansion of the bibliographic material developed in FY 1982 as part of a previous Human Factors Technology Project. The previous bibliography was published September 30, 1982, as Attachment 1 to the FY 1982 Project Status Report.

  6. Annotated bibliography of human factors applications literature

    International Nuclear Information System (INIS)

    McCafferty, D.B.

    1984-01-01

    This bibliography was prepared as part of the Human Factors Technology Project, FY 1984, sponsored by the Office of Nuclear Safety, US Department of Energy. The project was conducted by Lawrence Livermore National Laboratory, with Essex Corporation as a subcontractor. The material presented here is a revision and expansion of the bibliographic material developed in FY 1982 as part of a previous Human Factors Technology Project. The previous bibliography was published September 30, 1982, as Attachment 1 to the FY 1982 Project Status Report

  7. Experimental-confirmation and functional-annotation of predicted proteins in the chicken genome

    Directory of Open Access Journals (Sweden)

    McCarthy Fiona M

    2007-11-01

Full Text Available Abstract Background The chicken genome was sequenced because of its phylogenetic position as a non-mammalian vertebrate, its use as a biomedical model especially to study embryology and development, its role as a source of human disease organisms and its importance as the major source of animal derived food protein. However, genomic sequence data is, in itself, of limited value; generally it is not equivalent to understanding biological function. The benefit of having a genome sequence is that it provides a basis for functional genomics. However, the sequence data currently available is poorly structurally and functionally annotated and many genes do not have standard nomenclature assigned. Results We analysed eight chicken tissues and improved the chicken genome structural annotation by providing experimental support for the in vivo expression of 7,809 computationally predicted proteins, including 30 chicken proteins that were only electronically predicted or hypothetical translations in human. To improve functional annotation (based on Gene Ontology), we mapped these identified proteins to their human and mouse orthologs and used this orthology to transfer Gene Ontology (GO) functional annotations to the chicken proteins. The 8,213 orthology-based GO annotations that we produced represent an 8% increase in currently available chicken GO annotations. Orthologous chicken products were also assigned standardized nomenclature based on current chicken nomenclature guidelines. Conclusion We demonstrate the utility of high-throughput expression proteomics for rapid experimental structural annotation of a newly sequenced eukaryote genome. These experimentally-supported predicted proteins were further annotated by assigning the proteins with standardized nomenclature and functional annotation. This method is widely applicable to a diverse range of species. Moreover, information from one genome can be used to improve the annotation of other genomes and

  8. Solar occultation images analysis using Zernike polynomials ­— an ALTIUS imaging spectrometer application

    Science.gov (United States)

    Dekemper, Emmanuel; Fussen, Didier; Loodts, Nicolas; Neefs, Eddy

The ALTIUS (Atmospheric Limb Tracker for the Investigation of the Upcoming Stratosphere) instrument is a major project of the Belgian Institute for Space Aeronomy (BIRA-IASB) in Brussels, Belgium. It has been designed to profit from the benefits of the limb scattering geometry (vertical resolution, global coverage, ...), while providing better accuracy on the tangent height knowledge than classical "knee" methods used by scanning spectrometers. The optical concept is based on 3 AOTFs (UV-Vis-NIR) responsible for the instantaneous spectral filtering of the incoming image (complete FOV larger than 100 km x 100 km at tangent point), ranging from 250 nm to 1800 nm, with a moderate resolution of a few nm and a typical acquisition time of 1-10 s per image. While the primary goal of the instrument is the measurement of ozone with a good vertical resolution, the ability to record full images of the limb can lead to other applications, like solar occultations. With a pixel FOV of 200 µrad, the full high-sun image is formed of 45x45 pixels, which is sufficient for pattern recognition using moments analysis, for instance. The Zernike polynomials form a complete orthogonal set of functions over the unit circle, well suited for images showing a circular shape. Any such image can be decomposed into a finite set of weighted polynomials, the weights being called moments. Due to atmospheric refraction, the sun's shape is modified during apparent sunsets and sunrises. The sun appears more flattened, which leads to a modification of its Zernike moment description. A link between the pressure or the temperature profile (equivalent to air density through the perfect gas law and the hydrostatic equation) and the Zernike moments of a given image can then be made and used to retrieve these atmospheric parameters, with the advantage that the whole sun is used and not only central or edge pixels.
Some retrievals will be performed for different conditions and the feasibility of the method
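The Zernike decomposition described above can be sketched numerically. The normalization, the pixel-to-unit-disk mapping, and the synthetic 45x45 "sun" images below are illustrative assumptions, not taken from the ALTIUS design; the point is only that flattening a circular disk excites a low-order moment (here |A_22|) that is exactly zero for the round disk:

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^|m|(rho) of the Zernike basis."""
    m = abs(m)
    out = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k) * factorial((n + m) // 2 - k)
                * factorial((n - m) // 2 - k)))
        out += c * rho ** (n - 2 * k)
    return out

def zernike_moment(img, n, m):
    """Discrete Zernike moment A_nm of a square image mapped onto the unit disk."""
    N = img.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    x = (2 * xs - N + 1) / (N - 1)       # pixel centres mapped to [-1, 1]
    y = (2 * ys - N + 1) / (N - 1)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    basis = zernike_radial(n, m, rho) * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * basis[mask]) * (2.0 / (N - 1)) ** 2

# A 45x45 synthetic "sun": a bright disk, and a refraction-flattened ellipse.
N = 45
ys, xs = np.mgrid[0:N, 0:N]
x = (2 * xs - N + 1) / (N - 1)
y = (2 * ys - N + 1) / (N - 1)
round_sun = (np.hypot(x, y) <= 0.8).astype(float)
flat_sun = (np.hypot(x, y / 0.7) <= 0.8).astype(float)  # squashed vertically
a22_round = abs(zernike_moment(round_sun, 2, 2))  # ~0 by circular symmetry
a22_flat = abs(zernike_moment(flat_sun, 2, 2))    # clearly non-zero
```

A retrieval would invert this relationship: fit the observed moments of the flattened sun against those predicted from candidate density profiles via the refraction model.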

  9. Vind(x): Using the user through cooperative annotation

    OpenAIRE

    Williams, A.D.; Vuurpijl, Louis; Schomaker, Lambert; van den Broek, Egon

    2002-01-01

    In this paper, the image retrieval system Vind(x) is described. The architecture of the system and first user experiences are reported. Using Vind(x), users on the Internet may cooperatively annotate objects in paintings by use of the pen or mouse. The collected data can be searched through query-by-drawing techniques, but can also serve as an (ever-growing) training and benchmark set for the development of automated image retrieval systems of the future. Several other examples of cooperative...

  10. Ground Truth Annotation in T Analyst

    DEFF Research Database (Denmark)

    2015-01-01

    This video shows how to annotate the ground truth tracks in the thermal videos. The ground truth tracks are produced to be able to compare them to tracks obtained from a Computer Vision tracking approach. The program used for annotation is T-Analyst, which is developed by Aliaksei Laureshyn, Ph...

  11. Annotation of regular polysemy and underspecification

    DEFF Research Database (Denmark)

    Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria

    2013-01-01

We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...

  12. Black English Annotations for Elementary Reading Programs.

    Science.gov (United States)

    Prasad, Sandre

    This report describes a program that uses annotations in the teacher's editions of existing reading programs to indicate the characteristics of black English that may interfere with the reading process of black children. The first part of the report provides a rationale for the annotation approach, explaining that the discrepancy between written…

  13. Harnessing Collaborative Annotations on Online Formative Assessments

    Science.gov (United States)

    Lin, Jian-Wei; Lai, Yuan-Cheng

    2013-01-01

    This paper harnesses collaborative annotations by students as learning feedback on online formative assessments to improve the learning achievements of students. Through the developed Web platform, students can conduct formative assessments, collaboratively annotate, and review historical records in a convenient way, while teachers can generate…

  14. Towards Viral Genome Annotation Standards, Report from the 2010 NCBI Annotation Workshop.

    Science.gov (United States)

    Brister, James Rodney; Bao, Yiming; Kuiken, Carla; Lefkowitz, Elliot J; Le Mercier, Philippe; Leplae, Raphael; Madupu, Ramana; Scheuermann, Richard H; Schobel, Seth; Seto, Donald; Shrivastava, Susmita; Sterk, Peter; Zeng, Qiandong; Klimke, William; Tatusova, Tatiana

    2010-10-01

    Improvements in DNA sequencing technologies portend a new era in virology and could possibly lead to a giant leap in our understanding of viral evolution and ecology. Yet, as viral genome sequences begin to fill the world's biological databases, it is critically important to recognize that the scientific promise of this era is dependent on consistent and comprehensive genome annotation. With this in mind, the NCBI Genome Annotation Workshop recently hosted a study group tasked with developing sequence, function, and metadata annotation standards for viral genomes. This report describes the issues involved in viral genome annotation and reviews policy recommendations presented at the NCBI Annotation Workshop.

  15. Towards Viral Genome Annotation Standards, Report from the 2010 NCBI Annotation Workshop

    Directory of Open Access Journals (Sweden)

    Qiandong Zeng

    2010-10-01

    Full Text Available Improvements in DNA sequencing technologies portend a new era in virology and could possibly lead to a giant leap in our understanding of viral evolution and ecology. Yet, as viral genome sequences begin to fill the world’s biological databases, it is critically important to recognize that the scientific promise of this era is dependent on consistent and comprehensive genome annotation. With this in mind, the NCBI Genome Annotation Workshop recently hosted a study group tasked with developing sequence, function, and metadata annotation standards for viral genomes. This report describes the issues involved in viral genome annotation and reviews policy recommendations presented at the NCBI Annotation Workshop.

  16. Raman spectroscopy and imaging: applications in human breast cancer diagnosis.

    Science.gov (United States)

    Brozek-Pluska, Beata; Musial, Jacek; Kordek, Radzislaw; Bailo, Elena; Dieing, Thomas; Abramczyk, Halina

    2012-08-21

The applications of spectroscopic methods in cancer detection open new possibilities in early stage diagnostics. Raman spectroscopy and Raman imaging represent novel and rapidly developing tools in cancer diagnosis. In the study described in this paper, Raman spectroscopy was employed to examine noncancerous and cancerous human breast tissues of the same patient. The most significant differences between noncancerous and cancerous tissues were found in regions characteristic for the vibrations of carotenoids, lipids and proteins. Particular attention was paid to the role played by unsaturated fatty acids in the differentiation between the noncancerous and the cancerous tissues. A comparison of the Raman spectra of the noncancerous and the cancerous tissues with the spectra of oleic, linoleic, α-linolenic, γ-linolenic, docosahexaenoic and eicosapentaenoic acids is presented. The role of sample preparation in the determination of cancer markers is also discussed in this study.

  17. The application of image processing software: Photoshop in environmental design

    Science.gov (United States)

    Dong, Baohua; Zhang, Chunmi; Zhuo, Chen

    2011-02-01

In the process of environmental design and creation, the design sketch holds a very important position: it not only illuminates the design's idea and concept but also shows the design's visual effects to the client. In the field of environmental design, computer-aided design has made significant improvements. Many types of specialized design software for environmental performance drawings and post-production artistic processing have been implemented. Additionally, with the use of this software, working efficiency has greatly increased and drawings have become more specific and more specialized. By analyzing the application of the Photoshop image processing software in environmental design, and comparing and contrasting traditional hand drawing with drawing using modern technology, this essay further explores how computer technology can play a bigger role in environmental design.

  18. Development of TMA-based imaging system for hyperspectral application

    Science.gov (United States)

    Choi, Young-Wan; Yang, Seung-Uk; Kang, Myung-Seok; Kim, Ee-Eul

    2017-11-01

Funded by the Ministry of Commerce, Industry, and Energy of Korea, SI initiated the development of a prototype model of a TMA-based electro-optical system as part of the national space research and development program. Its optical aperture diameter is 120 mm, the effective focal length is 462 mm, and its full field-of-view is 5.08 degrees. The dimensions are about 600 mm × 400 mm × 400 mm and the weight is less than 15 kg. To demonstrate its performance, hyperspectral imaging based on a linear spectral filter was selected as the application of the prototype. The spectral resolution will be less than 10 nm and the number of channels will be more than 40 in the visible and near-infrared region. In this paper, the progress made so far on the prototype development will be presented.

  19. Application of multimedia image technology in engineering report demonstration system

    Science.gov (United States)

    Lili, Jiang

    2018-03-01

With the rapid development of global economic integration, people's desire for wide-ranging global exchange and interaction has grown, and there are unprecedented, convenient means for people to know the world and even to transform it. At this stage, the traditional mode of work has become difficult to adapt to these changing trends, and informatization, multimedia, science and technology have become the mainstream of the times. Therefore, this paper analyzes the present situation of the engineering report demonstration system and the key points of the work, and puts forward specific, targeted strategies for integrating multimedia image technology.

  20. Efficient and compact hyperspectral imager for space-borne applications

    Science.gov (United States)

    Pisani, Marco; Zucco, Massimo

    2017-11-01

In the last decades Hyperspectral Imagers (HIs) have become irreplaceable space-borne instruments for an increasing number of applications. A number of HIs are now operative on board (e.g. CHRIS on PROBA), others are going to be launched (e.g. PRISMA, EnMAP, HyspIRI), and many others are at the breadboard level. The researchers' goal is to realize an HI with high spatial and spectral resolution, low weight and contained dimensions. The most common HI techniques are based on the use of a dispersive element (a grating or a prism) or on the use of band-pass filters (tunable or linearly variable). These approaches have the advantage of allowing compact devices. Another approach is based on the use of interferometric spectrometers (Michelson or Sagnac type). The advantage of the latter is a very high efficiency in light collection because of the well-known Fellgett and Jacquinot advantages.

  1. Applications of three-dimensional image correlation in conformal radiotherapy

    International Nuclear Information System (INIS)

    Van Herk, M.; Gilhuijs, K.; Kwa, S.; Lebesque, J.; Muller, S.; De Munck, J.; Touw, A.; Kooy, H.

    1995-01-01

The development of techniques for the registration of CT, MRI and SPECT creates new possibilities for improved target volume definition and quantitative image analysis. The discussed technique is based on chamfer matching and is suitable for automatic 3-D matching of CT with CT, CT with MRI, CT with SPECT and MRI with SPECT. By integrating CT with MRI, the diagnostic qualities of MRI are combined with the geometric accuracy of the planning CT. Significant differences in the delineation of the target volume for brain, head and neck and prostate tumors have been demonstrated when using integrated CT and MRI compared with using CT alone. In addition, integration of the planning CT with pre-operative scans improves knowledge of possible tumor extents. By first matching scans based on the bony anatomy and subsequently matching on an organ of study, relative motion of the organ is quantified accurately. In a study with 42 CT scans of 11 patients, magnitude and causes of prostate motion have been analysed. The most important motion of the prostate is a forward-backward rotation around a point near the apex caused by rectal volume difference. Significant correlations were also found between motion of the legs and the prostate. By integrating functional images made before and after radiotherapy with the planning CT, the relation between local change of lung function and delivered dose has been quantified accurately. The technique of chamfer matching is a convenient and more accurate alternative for the use of external markers in a CT/SPECT lung damage study. Also, damage visible in diagnostic scans can be related to radiation dose, thereby improving follow-up diagnostics. It can be concluded that 3-D image integration plays an important role in assessing and improving the accuracy of radiotherapy and is therefore indispensable for conformal therapy. However, user-friendly implementation of these techniques remains to be done to facilitate clinical application on a large scale.

  2. Applications of three-dimensional image correlation in conformal radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

Van Herk, M; Gilhuijs, K; Kwa, S; Lebesque, J; Muller, S; De Munck, J; Touw, A [Nederlands Kanker Inst. 'Antoni van Leeuwenhoekhuis', Amsterdam (Netherlands)]; Kooy, H [Harvard Medical School, Boston, MA (United States)]

    1995-12-01

    The development of techniques for the registration of CT, MRI and SPECT creates new possibilities for improved target volume definition and quantitative image analysis. The discussed technique is based on chamfer matching and is suitable for automatic 3-D matching of CT with CT, CT with MRI, CT with SPECT and MRI with SPECT. By integrating CT with MRI, the diagnostic qualities of MRI are combined with the geometric accuracy of the planning CT. Significant differences in the delineation of the target volume for brain, head and neck and prostate tumors were demonstrated when using integrated CT and MRI compared with using CT alone. In addition, integration of the planning CT with pre-operative scans improves knowledge of possible tumor extents. By first matching scans based on the bony anatomy and subsequently matching on an organ of study, relative motion of the organ is quantified accurately. In a study with 42 CT scans of 11 patients, magnitude and causes of prostate motion were analysed. The most important motion of the prostate is a forward-backward rotation around a point near the apex caused by rectal volume difference. Significant correlations were also found between motion of the legs and the prostate. By integrating functional images made before and after radiotherapy with the planning CT, the relation between local change of lung function and delivered dose has been quantified accurately. The technique of chamfer matching is a convenient and more accurate alternative for the use of external markers in a CT/SPECT lung damage study. Also, damage visible in diagnostic scans can be related to radiation dose, thereby improving follow-up diagnostics. It can be concluded that 3-D image integration plays an important role in assessing and improving the accuracy of radiotherapy and is therefore indispensable for conformal therapy. However, user-friendly implementation of these techniques remains to be done to facilitate clinical application on a large scale.

  3. Clinical applications of cobalt-radionuclides in neuro-imaging

    Energy Technology Data Exchange (ETDEWEB)

    Jansen, H.M.L

    1998-04-01

The aim of the studies embodied in this thesis was to investigate the clinical applicability of Co in neuro-imaging using positron emission tomography (PET). To this purpose, a set of closely related pilot studies was performed in patients suffering from several neurological diseases affecting the brain. Chapter 2 discusses the physiological role of Co and both indications and complications of Co-administration in the past. The probable deposition mechanism of Co is described, the potential (absence of) evidence of Co mimicking Ca in vivo is discussed, a comparison is made with other tracer-analogues (Ga, Tl, Rb), and several hypotheses with respect to the pharmacokinetic behaviour of Co and the role of (inflammatory) proteins and cells are put forward. The etiologic mechanism(s), clinical symptoms, Ca-related pathophysiology and (most recent) imaging techniques of multiple sclerosis, cerebrovascular stroke, traumatic brain injury and primary brain tumours are reviewed. The major goal of these respective reviews is both a rough outline of present insights and near-future developments and an assessment of the (im)possibilities of visualising the actual substrate of disease. Since Co is assumed to reflect (the common pathway of) Ca, an application of Co (based on cell decay and inflammation) may be hypothesised in all of the diseases mentioned. These considerations served as a theoretical basis for our further studies in clinical practice. Chapter 3 (Original reprints) presents the actual results, while Chapter 4 (General discussion) reflects on lessons that can be learned from the present work and formulates some suggestions for future (extended) studies. The contours of possible new emerging areas of interest (dementia of the Alzheimer type; vascular dementia; stunned myocardium) are drawn in continuation of the foregoing studies. 47 refs.

  4. The clinical application of nuclide bone imaging in malignant lymphomas

    International Nuclear Information System (INIS)

    Jin Xing; Tang Mingdeng; Lin Duanyu; Ni Leichun

    2006-01-01

Objective: To evaluate the clinical application value of nuclide bone imaging in malignant lymphoma. Methods: 71 patients were diagnosed by pathology with malignant lymphoma, among whom there were 8 cases of Hodgkin lymphoma (HL) and 63 cases of non-Hodgkin lymphoma (NHL). The examinations were performed 2.5 to 6 hours after the intravenous injection of 99mTc-MDP (555-925 MBq). Results: 31 cases had bone-infiltrating lesions, including 3 cases of HL and 28 cases of NHL. The total number of foci was 103; apart from 2 cases of bone defect, these comprised 35 foci in the vertebral column (34.65%), 30 in the limbs and joints (29.70%), 14 in the ribs (13.86%), 13 in the pelvis (12.87%), 5 in the skull (4.95%) and 4 in the sternum (3.96%). Conclusion: Nuclide bone imaging has high value in the clinical staging, therapeutic observation and prognosis of bone-infiltrating malignant lymphoma. (authors)

  5. Fluorine-18-labelled molecules: synthesis and application in medical imaging

    International Nuclear Information System (INIS)

    Dolle, F.; Perrio, C.; Barre, L.; Lasne, M.C.; Le Bars, D.

    2006-01-01

Positron emission tomography (PET) is one of the most powerful techniques available for medical imaging. It relies on the use of molecules labelled with a positron emitter (β+). Among those emitters, fluorine-18, available from a cyclotron, is a radionuclide of choice because of its relatively long half-life (109.8 min) and the relatively low energy of the emitted positron. The electrophilic form of fluorine-18 ([18F]F2 or reagents derived from [18F]F2) is mainly used for hydrogen or metal substitutions on aromatic or vinylic carbons. The presence of the stable isotope (fluorine-19) in these radiotracers limits their use in medical imaging. The nucleophilic form of fluorine-18 (alkaline monofluoride, K[18F]F, the most used), obtained from irradiation of enriched water, is widely used in aliphatic and (hetero)aromatic substitutions for the synthesis of radiotracers with high specific radioactivity. Some examples of radiofluorinated tracers used in PET are presented, as well as some of their in vivo applications in humans. (authors)
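    The 109.8-minute half-life quoted above is what makes fluorine-18 practical for distribution from a cyclotron; the activity surviving a delay follows the standard decay law A/A0 = 2^(-t/T1/2), sketched below (the function name is illustrative):

    ```python
    import math

    def remaining_fraction(t_min: float, half_life_min: float = 109.8) -> float:
        """Fraction of initial activity left after t_min minutes: A/A0 = 2^(-t/T1/2)."""
        return 2.0 ** (-t_min / half_life_min)

    # After exactly one half-life, half the fluorine-18 activity remains.
    print(remaining_fraction(109.8))  # → 0.5
    ```

    With this half-life, roughly a quarter of the synthesized activity is still usable after two half-lives (about 3.7 hours), which is why regional distribution of [18F]-labelled tracers is feasible.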

  6. High performance graphics processors for medical imaging applications

    International Nuclear Information System (INIS)

    Goldwasser, S.M.; Reynolds, R.A.; Talton, D.A.; Walsh, E.S.

    1989-01-01

This paper describes a family of high-performance graphics processors with special hardware for interactive visualization of 3D human anatomy. The basic architecture expands to multiple parallel processors, each processor using pipelined arithmetic and logical units for high-speed rendering of Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET) data. User-selectable display alternatives include multiple 2D axial slices, reformatted images in sagittal or coronal planes and shaded 3D views. Special facilities support applications requiring color-coded display of multiple datasets (such as radiation therapy planning), or dynamic replay of time-varying volumetric data (such as cine-CT or gated MR studies of the beating heart). The current implementation is a single-processor system which generates reformatted images in true real time (30 frames per second), and shaded 3D views in a few seconds per frame. It accepts full-scale medical datasets in their native formats, so that minimal preprocessing delay exists between data acquisition and display.
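    The sagittal and coronal reformats mentioned above are, at their core, re-slicings of a 3-D voxel volume along different axes; a minimal NumPy sketch (the axis ordering is an assumption, and real scanners also require spacing-aware interpolation):

    ```python
    import numpy as np

    # Synthetic CT volume indexed as (axial slice, row, column).
    volume = np.random.rand(64, 128, 128)

    axial    = volume[32, :, :]   # one transverse slice
    coronal  = volume[:, 64, :]   # fix the row index
    sagittal = volume[:, :, 64]   # fix the column index

    print(axial.shape, coronal.shape, sagittal.shape)
    ```

    Because each reformatted view is just an index pattern over the same array, hardware that streams the volume can produce any of the three orientations without duplicating the data.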

  7. Imaging monitoring techniques applications in the transient gratings detection

    Science.gov (United States)

    Zhao, Qing-ming

    2009-07-01

Experimental studies of degenerate four-wave mixing (DFWM) in iodine vapor at atmospheric pressure, at 0 °C and 25 °C, are reported. The laser-induced grating (LIG) studies are carried out by generating a thermal grating using a pulsed, narrow-bandwidth dye laser. A new image processing system for detecting forward DFWM spectroscopy in iodine vapor is reported. The system is composed of a CCD camera, an image processing card and the related software. With the help of this detection system, phase matching can easily be achieved in the optical arrangement by crossing the two pumps and the probe as diagonals linking opposite corners of a rectangular box, and it provides a way to position the photomultiplier tube (PMT). It is also practical to assess the effect of pointing stability on the optical path by monitoring how the beam spot changes with laser beam pointing and environmental disturbances. Finally, the effects of the photostability of the dye laser on the signal-to-noise ratio in forward-geometry DFWM have been investigated in iodine vapor. This system makes the potential application of FG-DFWM feasible as a diagnostic tool in combustion research and environmental monitoring.

  8. Essential Requirements for Digital Annotation Systems

    Directory of Open Access Journals (Sweden)

    ADRIANO, C. M.

    2012-06-01

Full Text Available Digital annotation systems are usually based on partial scenarios and arbitrary requirements. Accidental and essential characteristics are usually mixed in non-explicit models. Documents and annotations are linked together accidentally according to the current technology, allowing for the development of disposable prototypes, but not for the support of non-functional requirements such as extensibility, robustness and interactivity. In this paper we perform a careful analysis of the concept of annotation, studying the scenarios supported by digital annotation tools. We also derive essential requirements based on a classification of annotation systems applied to existing tools. The analysis performed and the proposed classification can be applied and extended to other types of collaborative systems.

  9. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality benchmark data as well as tedious preparatory work to generate the sequence parameters required as input for the machine learning methods. Different program settings and incompatible protocols make comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  10. Interoperable Multimedia Annotation and Retrieval for the Tourism Sector

    NARCIS (Netherlands)

    Chatzitoulousis, Antonios; Efraimidis, Pavlos S.; Athanasiadis, I.N.

    2015-01-01

    The Atlas Metadata System (AMS) employs semantic web annotation techniques in order to create an interoperable information annotation and retrieval platform for the tourism sector. AMS adopts state-of-the-art metadata vocabularies, annotation techniques and semantic web technologies.

  11. Concept annotation in the CRAFT corpus.

    Science.gov (United States)

    Bada, Michael; Eckert, Miriam; Evans, Donald; Garcia, Kristin; Shipley, Krista; Sitnikov, Dmitry; Baumgartner, William A; Cohen, K Bretonnel; Verspoor, Karin; Blake, Judith A; Hunter, Lawrence E

    2012-07-09

    Manually annotated corpora are critical for the training and evaluation of automated methods to identify concepts in biomedical text. This paper presents the concept annotations of the Colorado Richly Annotated Full-Text (CRAFT) Corpus, a collection of 97 full-length, open-access biomedical journal articles that have been annotated both semantically and syntactically to serve as a research resource for the biomedical natural-language-processing (NLP) community. CRAFT identifies all mentions of nearly all concepts from nine prominent biomedical ontologies and terminologies: the Cell Type Ontology, the Chemical Entities of Biological Interest ontology, the NCBI Taxonomy, the Protein Ontology, the Sequence Ontology, the entries of the Entrez Gene database, and the three subontologies of the Gene Ontology. The first public release includes the annotations for 67 of the 97 articles, reserving two sets of 15 articles for future text-mining competitions (after which these too will be released). Concept annotations were created based on a single set of guidelines, which has enabled us to achieve consistently high interannotator agreement. As the initial 67-article release contains more than 560,000 tokens (and the full set more than 790,000 tokens), our corpus is among the largest gold-standard annotated biomedical corpora. Unlike most others, the journal articles that comprise the corpus are drawn from diverse biomedical disciplines and are marked up in their entirety. Additionally, with a concept-annotation count of nearly 100,000 in the 67-article subset (and more than 140,000 in the full collection), the scale of conceptual markup is also among the largest of comparable corpora. The concept annotations of the CRAFT Corpus have the potential to significantly advance biomedical text mining by providing a high-quality gold standard for NLP systems. 
The corpus, annotation guidelines, and other associated resources are freely available at http://bionlp-corpora.sourceforge.net/CRAFT/index.shtml.

  12. Facilitating functional annotation of chicken microarray data

    Directory of Open Access Journals (Sweden)

    Gresham Cathy R

    2009-10-01

Full Text Available Abstract Background Modeling results from chicken microarray studies is challenging for researchers due to the limited functional annotation associated with these arrays. The Affymetrix GeneChip chicken genome array, one of the biggest arrays serving as a key research tool for the study of chicken functional genomics, is among the few arrays that link gene products to the Gene Ontology (GO). However, the GO annotation data presented by Affymetrix are incomplete; for example, they do not show references linked to manually annotated functions. In addition, there is no tool that allows microarray researchers to directly retrieve functional annotations for their datasets from the annotated arrays. This costs researchers a significant amount of time searching multiple GO databases for functional information. Results We have improved the breadth of functional annotations of the gene products associated with probesets on the Affymetrix chicken genome array by 45% and the quality of annotation by 14%. We have also identified the most significant diseases and disorders, different types of genes, and known drug targets represented on the Affymetrix chicken genome array. To facilitate functional annotation of other arrays and microarray experimental datasets we developed an Array GO Mapper (AGOM) tool to help researchers quickly retrieve corresponding functional information for their datasets. Conclusion Results from this study will directly facilitate annotation of other chicken arrays and microarray experimental datasets. Researchers will be able to quickly model their microarray datasets into more reliable biological functional information by using the AGOM tool. The diseases, disorders, gene types and drug targets revealed in the study will allow researchers to learn more about how genes function in complex biological systems and may lead to new drug discovery and development of therapies.
The GO annotation data generated will be available for public use via AgBase website and

  13. Linking Disparate Datasets of the Earth Sciences with the SemantEco Annotator

    Science.gov (United States)

    Seyed, P.; Chastain, K.; McGuinness, D. L.

    2013-12-01

Use of Semantic Web technologies for data management in the Earth sciences (and beyond) has great potential but is still in its early stages, since the challenges of translating data into a more explicit or semantic form for immediate use within applications have not been fully addressed. In this abstract we help address this challenge by introducing the SemantEco Annotator, which enables anyone, regardless of expertise, to semantically annotate tabular Earth Science data and translate it into linked data format, while applying the logic inherent in community-standard vocabularies to guide the process. The Annotator was conceived under a desire to unify dataset content from a variety of sources under common vocabularies, for use in semantically-enabled web applications. Our current use case employs linked data generated by the Annotator for use in the SemantEco environment, which utilizes semantics to help users explore, search, and visualize water or air quality measurement and species occurrence data through a map-based interface. The generated data can also be used immediately to facilitate discovery and search capabilities within 'big data' environments. The Annotator provides a method for taking information about a dataset, which may only be known to its maintainers, and making it explicit, in a uniform and machine-readable fashion, such that a person or information system can more easily interpret the underlying structure and meaning. Its primary mechanism is to enable a user to formally describe how columns of a tabular dataset relate to and/or describe entities. For example, if a user identifies columns for latitude and longitude coordinates, we can infer the data refers to a point that can be plotted on a map. Further, it can be made explicit that measurements of 'nitrate' and 'NO3-' are of the same entity through vocabulary assignments, thus more easily utilizing data sets that use different nomenclatures. The Annotator provides an extensive and searchable

  14. Automatic annotation of head velocity and acceleration in Anvil

    DEFF Research Database (Denmark)

    Jongejan, Bart

    2012-01-01

We describe an automatic face tracker plugin for the ANVIL annotation tool. The face tracker produces data for velocity and for acceleration in two dimensions. We compare the annotations generated by the face tracking algorithm with independently made manual annotations for head movements. The annotations are a useful supplement to manual annotations and may help human annotators to quickly and reliably determine the onset of head movements and to suggest which kind of head movement is taking place.
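    Velocity and acceleration annotations of the kind the plugin produces can be derived from tracked 2-D positions by finite differences; a minimal sketch (the frame rate and trajectory below are illustrative, not ANVIL data):

    ```python
    import numpy as np

    fps = 25.0                    # assumed video frame rate
    t = np.arange(100) / fps
    # Hypothetical tracked head x/y positions (pixels) per frame:
    # a sideways oscillation plus a steady downward drift.
    pos = np.stack([10.0 * np.sin(t), 5.0 * t], axis=1)

    vel = np.gradient(pos, 1.0 / fps, axis=0)   # pixels per second
    acc = np.gradient(vel, 1.0 / fps, axis=0)   # pixels per second^2

    speed = np.linalg.norm(vel, axis=1)         # scalar speed per frame
    print(vel.shape, acc.shape, speed.shape)
    ```

    Thresholding `speed` is one simple way to propose movement-onset frames to a human annotator; the drift component comes out exactly as 5 px/s because central differences are exact for linear motion.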

  15. Image processing using pulse-coupled neural networks applications in Python

    CERN Document Server

    Lindblad, Thomas

    2013-01-01

Image processing algorithms based on the mammalian visual cortex are powerful tools for extracting information and manipulating images. This book reviews the neural theory and translates it into digital models. Applications are given in the areas of image recognition, foveation, image fusion and information extraction. The third edition reflects renewed international interest in pulse image processing with updated sections presenting several newly developed applications. This edition also introduces a suite of Python scripts that assist readers in replicating results presented in the text and in further developing their own applications.
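    A simplified pulse-coupled neuron iteration of the kind such models build on can be sketched in a few lines of NumPy; the network variant and parameter values below are illustrative assumptions, not taken from the book:

    ```python
    import numpy as np

    def pcnn(stim, steps=10, beta=0.2, v_t=20.0, a_t=0.3):
        """Simplified PCNN: linking via the 3x3 neighbour sum, dynamic threshold."""
        def neigh(y):  # sum of the 8 neighbours of each pixel
            p = np.pad(y, 1)
            h, w = y.shape
            return sum(p[i:i + h, j:j + w]
                       for i in range(3) for j in range(3)) - y

        theta = np.full_like(stim, 0.5)   # dynamic threshold
        y = np.zeros_like(stim)           # pulse output
        fired = np.zeros_like(stim)       # iteration of first firing (0 = never)
        for n in range(1, steps + 1):
            u = stim * (1.0 + beta * neigh(y))       # internal activity
            y = (u > theta).astype(float)            # pulse where activity wins
            theta = theta * np.exp(-a_t) + v_t * y   # decay, jump on firing
            fired[(fired == 0) & (y > 0)] = n
        return fired

    img = np.zeros((8, 8))
    img[2:6, 2:6] = 1.0          # bright block on a dark background
    print(pcnn(img))             # the block pulses; zero-stimulus pixels never do
    ```

    The map of first-firing times is what makes PCNNs useful for segmentation: pixels of similar intensity tend to pulse in the same iteration.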

  16. Combining rules, background knowledge and change patterns to maintain semantic annotations.

    Science.gov (United States)

    Cardoso, Silvio Domingos; Chantal, Reynaud-Delaître; Da Silveira, Marcos; Pruski, Cédric

    2017-01-01

Knowledge Organization Systems (KOS) play a key role in enriching biomedical information in order to make it machine-understandable and shareable. This is done by annotating medical documents, or more specifically, associating concept labels from KOS with pieces of digital information, e.g., images or texts. However, the dynamic nature of KOS may impact the annotations, thus creating a mismatch between an evolved concept and the associated information. To solve this problem, methods to maintain the quality of the annotations are required. In this paper, we define a framework based on rules, background knowledge and change patterns to drive the annotation adaptation process. We evaluate the proposed approach experimentally in realistic case studies and demonstrate its overall performance on different KOS in terms of the precision, recall, F1-score and AUC of the system.

  17. [Application of Imaging Mass Spectrometry for Drug Discovery].

    Science.gov (United States)

    Hayasaka, Takahiro

    2016-01-01

    Imaging mass spectrometry (IMS) can reveal the distribution of biomolecules on tissue sections. In this process, the biomolecules are directly ionized within tissue sections using matrix-assisted laser desorption/ionization, and then their distribution is visualized by pseudo-color based on the relative signal intensity. The biomolecules, such as fatty acids, phospholipids, glycolipids, peptides, proteins, and neurotransmitters, have been analyzed at a spatial resolution of 5 μm. A special instrument for IMS analysis was developed by Shimadzu. The IMS analysis does not require the labeling of biomolecules and is capable of analyzing all the ionized biomolecules. Interest in this method has expanded to many research fields, including biology, agriculture, medicine, and pharmacology. The technique is especially relevant to the drug discovery process. As practiced currently, drug discovery is expensive and time consuming, requiring the preparation of probes for each drug and its metabolites, followed by systematic probe tracking in animal models. The IMS technique is expected to overcome these drawbacks by revealing the distribution of drugs and their metabolites using only a single analysis. In this symposium, I introduced the methodology and applications of IMS and discussed the feasibility of its application to drug discovery in the near future.

  18. DAE-BRNS workshop on applications of image processing in plant sciences and agriculture: lecture notes

    International Nuclear Information System (INIS)

    1998-10-01

Images form important data and information in biological sciences. Until recently, photography was the only method to reproduce and report such data, but it is difficult to quantify or treat photographic data mathematically. Digital image processing and image analysis technology, based on recent advances in microelectronics and computers, circumvents these problems associated with traditional photography. WIPSA (Workshop on Applications of Image Processing in Plant Sciences and Agriculture) will feature topics on the basic aspects of computers, imaging hardware and software, as well as advanced aspects such as colour image processing, high-performance computing, neural networks, 3-D imaging and virtual reality. Imaging using ultrasound, thermal radiation, X-rays, γ rays, neutron radiography and film-less phosphor-imager technology will also be discussed, along with applications of image processing and analysis in plant sciences, medicine and satellite imagery. Papers relevant to INIS are indexed separately

  19. Particle Image Velocimetry Applications Using Fluorescent Dye-Doped Particles

    Science.gov (United States)

    Petrosky, Brian J.; Maisto, Pietro; Lowe, K. Todd; Andre, Matthieu A.; Bardet, Philippe M.; Tiemsin, Patsy I.; Wohl, Christopher J.; Danehy, Paul M.

    2015-01-01

Polystyrene latex sphere particles are widely used to seed flows for velocimetry techniques such as Particle Image Velocimetry (PIV) and Laser Doppler Velocimetry (LDV). These particles may be doped with fluorescent dyes such that signals spectrally shifted from the incident laser wavelength may be detected via Laser Induced Fluorescence (LIF). An attractive application of the LIF signal is achieving velocimetry in the presence of strong interference from laser scatter, opening up new research possibilities very near solid surfaces or at liquid/gas interfaces. Additionally, LIF signals can be used to tag different fluid streams to study mixing. While fluorescence-based PIV has been performed by many researchers for particles dispersed in water flows, the current work is among the first to apply the technique to micron-scale particles dispersed in a gas. A key requirement for such an application is addressing the potential health hazards of fluorescent dyes; successful doping of Kiton Red 620 (KR620) has enabled the use of this relatively safe dye for fluorescence PIV for the first time. In this paper, basic applications proving the concept of PIV using the LIF signal from KR620-doped particles are exhibited for a free jet and a two-phase flow apparatus. Results indicate that while the fluorescence PIV signals are roughly 2 orders of magnitude weaker than Mie scattering, they provide a viable method for obtaining data in flow regions previously inaccessible via standard PIV. These techniques also have the potential to complement Mie scattering signals, for example in multi-stream and/or multi-phase experiments.
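    Independently of which signal is imaged (Mie scattering or LIF), PIV recovers displacement by cross-correlating interrogation windows from two consecutive frames; a minimal FFT-based sketch on synthetic data (the window size and imposed shift are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    frame_a = rng.random((32, 32))                    # first interrogation window
    frame_b = np.roll(frame_a, (3, 5), axis=(0, 1))   # "particles" moved 3 px, 5 px

    # Circular cross-correlation via FFT; the peak location is the mean displacement.
    corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

    print(dy, dx)  # → 3 5
    ```

    Real PIV adds windowing, sub-pixel peak fitting, and outlier validation on top of this core step, but the correlation peak is the same whether the particle images come from scattering or from fluorescence.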

  20. Fundamentals and applications of neutron imaging. Applications part 5. Application of neutron imaging to fluid engineering-1

    International Nuclear Information System (INIS)

    Takenaka, Nobuyuki; Asano, Hitoshi; Umekawa, Hisashi; Matsubayashi, Masahito

    2007-01-01

The attenuation of a neutron beam varies with the elements constituting the object: the beam is attenuated strongly by hydrogen and certain other elements, while it penetrates most metals well. Common liquids such as water, oil and organic liquids contain much hydrogen and attenuate a neutron beam, whereas the attenuation of metals widely used in industry, such as iron, copper and aluminum, is smaller. Because most machines are made of metal, the behavior of liquid inside a machine can be seen with neutron radiography, so it can serve as an "X-ray" of the machine. As an application of neutron radiography to fluid engineering, fluid behavior in metal pipes and containers, especially gas/liquid/solid multiphase flow, has been made visible and measurable where this is difficult with other methods, and in recent years the industrial use of neutron radiography has attracted particular attention. This serial course gives an overview of two-phase flow visualization and measurement, with freezing/cooling machinery as the first example of recent applications to machinery. (T. Tanaka)
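    The contrast behaviour described above follows the Beer-Lambert law, I = I0·exp(-μt); a toy sketch with rough thermal-neutron attenuation coefficients (the numeric values are order-of-magnitude assumptions for illustration, not measured data):

    ```python
    import math

    # Approximate thermal-neutron attenuation coefficients (1/cm) -- illustrative only.
    mu = {"water": 3.5, "aluminum": 0.10, "iron": 1.1}

    def transmission(material: str, thickness_cm: float) -> float:
        """Fraction of the neutron beam transmitted through the material."""
        return math.exp(-mu[material] * thickness_cm)

    for m in mu:
        print(f"{m:8s} 1 cm: {transmission(m, 1.0):.3f}")
    ```

    Even these rough numbers reproduce the qualitative picture in the abstract: a centimetre of water blocks most of the beam, while aluminum transmits nearly all of it, which is why liquid shows up clearly inside metal housings.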

  1. AGORA: Organellar genome annotation from the amino acid and nucleotide references.

    Science.gov (United States)

    Jung, Jaehee; Kim, Jong Im; Jeong, Young-Sik; Yi, Gangman

    2018-03-29

    Next-generation sequencing (NGS) technologies have led to the accumulation of high-throughput sequence data from various organisms. To apply gene annotation of organellar genomes across diverse organisms, more optimized tools for functional gene annotation are required; almost all existing gene annotation tools focus on either the chloroplast genome of land plants or the mitochondrial genome of animals. We have developed AGORA, a web application for the fast, user-friendly, and improved annotation of organellar genomes. AGORA annotates genes based on a BLAST-based homology search and clustering with selected reference sequences from the NCBI database or user-defined uploaded data. AGORA can annotate the functional genes in almost all mitochondrial and plastid genomes of eukaryotes. Gene annotation for genomes with an exon-intron structure within a gene or an inverted repeat region is also available. It provides the start and end positions of each gene, BLAST results compared with the reference sequence, and a visualization of the gene map by OGDRAW. Users can freely use the software, and the accessible URL is https://bigdata.dongguk.edu/gene_project/AGORA/. The main module of the tool is implemented in Python and PHP, and the web page is built with HTML and CSS to support all browsers. gangman@dongguk.edu.
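    The core of a BLAST-based homology annotation step of the kind AGORA builds on can be sketched as picking, for each query gene, the best-scoring reference hit from BLAST tabular output. This is a hypothetical simplification, not AGORA's actual code; it assumes BLAST's standard 12-column tabular format (outfmt 6), where column 12 is the bit score:

```python
def best_hits(blast_tab_lines):
    """Given BLAST tabular (outfmt 6) lines, return the best-scoring
    reference hit per query: a minimal homology-based annotation."""
    best = {}
    for line in blast_tab_lines:
        fields = line.rstrip("\n").split("\t")
        query, subject, bitscore = fields[0], fields[1], float(fields[11])
        # Keep only the highest bit score seen for this query.
        if query not in best or bitscore > best[query][1]:
            best[query] = (subject, bitscore)
    return {q: subj for q, (subj, _) in best.items()}
```

A real pipeline would additionally apply e-value cutoffs and cluster hits against multiple references before assigning a gene name.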

  2. Annotation of phenotypic diversity: decoupling data curation and ontology curation using Phenex.

    Science.gov (United States)

    Balhoff, James P; Dahdul, Wasila M; Dececchi, T Alexander; Lapp, Hilmar; Mabee, Paula M; Vision, Todd J

    2014-01-01

    Phenex (http://phenex.phenoscape.org/) is a desktop application for semantically annotating the phenotypic character matrix datasets common in evolutionary biology. Since its initial publication, we have added new features that address several major bottlenecks in the efficiency of the phenotype curation process: allowing curators during the data curation phase to provisionally request terms that are not yet available from a relevant ontology; supporting quality control against annotation guidelines to reduce later manual review and revision; and enabling the sharing of files for collaboration among curators. We decoupled data annotation from ontology development by creating an Ontology Request Broker (ORB) within Phenex. Curators can use the ORB to request a provisional term for use in data annotation; the provisional term can be automatically replaced with a permanent identifier once the term is added to an ontology. We added a set of annotation consistency checks to prevent common curation errors, reducing the need for later correction. We facilitated collaborative editing by improving the reliability of Phenex when used with online folder sharing services, via file change monitoring and continual autosave. With the addition of these new features, and in particular the Ontology Request Broker, Phenex users have been able to focus more effectively on data annotation. Phenoscape curators using Phenex have reported a smoother annotation workflow, with much reduced interruptions from ontology maintenance and file management issues.

  3. Semantic annotation of consumer health questions.

    Science.gov (United States)

    Kilicoglu, Halil; Ben Abacha, Asma; Mrabet, Yassine; Shooshan, Sonya E; Rodriguez, Laritza; Masterton, Kate; Demner-Fushman, Dina

    2018-02-06

    Consumers increasingly use online resources for their health information needs. While current search engines can address these needs to some extent, they generally do not take into account that most health information needs are complex and can only fully be expressed in natural language. Consumer health question answering (QA) systems aim to fill this gap. A major challenge in developing consumer health QA systems is extracting relevant semantic content from the natural language questions (question understanding). To develop effective question understanding tools, question corpora semantically annotated for relevant question elements are needed. In this paper, we present a two-part consumer health question corpus annotated with several semantic categories: named entities, question triggers/types, question frames, and question topic. The first part (CHQA-email) consists of relatively long email requests received by the U.S. National Library of Medicine (NLM) customer service, while the second part (CHQA-web) consists of shorter questions posed to MedlinePlus search engine as queries. Each question has been annotated by two annotators. The annotation methodology is largely the same between the two parts of the corpus; however, we also explain and justify the differences between them. Additionally, we provide information about corpus characteristics, inter-annotator agreement, and our attempts to measure annotation confidence in the absence of adjudication of annotations. The resulting corpus consists of 2614 questions (CHQA-email: 1740, CHQA-web: 874). Problems are the most frequent named entities, while treatment and general information questions are the most common question types. Inter-annotator agreement was generally modest: question types and topics yielded highest agreement, while the agreement for more complex frame annotations was lower. Agreement in CHQA-web was consistently higher than that in CHQA-email. Pairwise inter-annotator agreement proved most
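    Inter-annotator agreement of the kind reported above is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal generic implementation (not the corpus authors' code):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the annotators' label frequencies."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1.0 for perfect agreement and near 0 when agreement is no better than chance, which is why values like 0.942 (see the Twitter NER record below) are read as near-perfect.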

  4. An open annotation ontology for science on web 3.0.

    Science.gov (United States)

    Ciccarese, Paolo; Ocana, Marco; Garcia Castro, Leyla Jael; Das, Sudeshna; Clark, Tim

    2011-05-17

    There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools was then developed along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements on the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. This paper presents Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables "stand-off" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/ .
The Annotation Ontology meets critical requirements for

  5. Application and Analysis of Wavelet Transform in Image Edge Detection

    Institute of Scientific and Technical Information of China (English)

    Jianfang Gao

    2016-01-01

    For image processing technology, practitioners have long sought convenient and simple detection methods, particularly through innovative research on image edge detection. Because much of an image's original information is concentrated at its edges, edge data allows the real image content to be recovered from the acquired data. Edge detection is often applied to irregular geometric objects, where the contour of the image is determined by combining edge information with the transmitted signal data. At present, many different algorithms exist for image edge detection, but each type has its own disadvantages, making it difficult to detect image changes within a reasonable range. We apply the wavelet transform to image edge detection, making full use of its high-resolution characteristics and combining multiple images, in order to improve the accuracy of edge detection.
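    As a rough illustration of the idea (not the authors' method), a single-level 2-D Haar wavelet decomposition produces detail sub-bands whose combined magnitude responds strongly at edges:

```python
import numpy as np

def haar_edge_map(img):
    """One-level 2-D Haar transform on an image with even dimensions.
    The horizontal, vertical, and diagonal detail sub-bands are combined
    into a half-resolution edge-response map."""
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    cH = (a + b - c - d) / 4.0   # horizontal detail
    cV = (a - b + c - d) / 4.0   # vertical detail
    cD = (a - b - c + d) / 4.0   # diagonal detail
    return np.sqrt(cH**2 + cV**2 + cD**2)
```

On a flat region the map is zero; across an intensity step it is nonzero, which is the basic mechanism wavelet edge detectors refine with multiple scales.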

  6. Application of Image Texture Analysis for Evaluation of X-Ray Images of Fungal-Infected Maize Kernels

    DEFF Research Database (Denmark)

    Orina, Irene; Manley, Marena; Kucheryavskiy, Sergey V.

    2018-01-01

    The feasibility of image texture analysis to evaluate X-ray images of fungal-infected maize kernels was investigated. X-ray images of maize kernels infected with Fusarium verticillioides and control kernels were acquired using high-resolution X-ray micro-computed tomography. After image acquisition, first-order statistical and grey-level co-occurrence matrix (GLCM) features (including homogeneity and contrast) were extracted from the side, front and top views of each kernel and used as inputs for principal component analysis (PCA). The first-order statistical image features gave a better separation of the control from infected kernels on day 8 post-inoculation. Classification models were developed using partial least squares discriminant analysis (PLS-DA), and accuracies of 67 and 73% were achieved using first-order statistical features and GLCM-extracted features, respectively. This work provides information on the possible application of image texture as a method for analysing X-ray images.
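    The GLCM features named above (homogeneity, contrast) can be sketched with a minimal co-occurrence implementation. This is a generic stand-in for library routines, not the study's actual pipeline:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Symmetric, normalised grey-level co-occurrence matrix for one
    pixel displacement (dx, dy). `img` holds integer grey levels < levels."""
    img = np.asarray(img)
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            P[i, j] += 1
            P[j, i] += 1  # symmetric counting
    return P / P.sum()

def contrast(P):
    """Sum of (i - j)^2 * P[i, j]: high for abrupt grey-level changes."""
    idx = np.arange(P.shape[0])
    return float(((idx[:, None] - idx[None, :]) ** 2 * P).sum())

def homogeneity(P):
    """Sum of P[i, j] / (1 + |i - j|): high for locally uniform texture."""
    idx = np.arange(P.shape[0])
    return float((P / (1.0 + np.abs(idx[:, None] - idx[None, :]))).sum())
```

A perfectly uniform image has contrast 0 and homogeneity 1; textured (e.g. infected) regions move both statistics away from those extremes.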

  7. Making web annotations persistent over time

    Energy Technology Data Exchange (ETDEWEB)

    Sanderson, Robert [Los Alamos National Laboratory; Van De Sompel, Herbert [Los Alamos National Laboratory

    2010-01-01

    As Digital Libraries (DL) become more aligned with the web architecture, their functional components need to be fundamentally rethought in terms of URIs and HTTP. Annotation, a core scholarly activity enabled by many DL solutions, exhibits a clearly unacceptable characteristic when existing models are applied to the web: due to the representations of web resources changing over time, an annotation made about a web resource today may no longer be relevant to the representation that is served from that same resource tomorrow. We assume the existence of archived versions of resources, and combine the temporal features of the emerging Open Annotation data model with the capability offered by the Memento framework that allows seamless navigation from the URI of a resource to archived versions of that resource, and arrive at a solution that provides guarantees regarding the persistence of web annotations over time. More specifically, we provide theoretical solutions and proof-of-concept experimental evaluations for two problems: reconstructing an existing annotation so that the correct archived version is displayed for all resources involved in the annotation, and retrieving all annotations that involve a given archived version of a web resource.
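    The navigation step described above can be sketched as a client building an HTTP request with an Accept-Datetime header (Memento, RFC 7089) so that a TimeGate returns the archived version of the annotated resource closest to the annotation's creation time. A minimal sketch; `timegate_request` is a hypothetical helper, not part of the Open Annotation or Memento software:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def timegate_request(uri, annotated_at):
    """Request parameters for a Memento TimeGate: ask for the
    representation of `uri` as it existed when the annotation was made.
    `annotated_at` must be a timezone-aware UTC datetime."""
    return {
        "url": uri,
        "headers": {"Accept-Datetime": format_datetime(annotated_at, usegmt=True)},
    }
```

Sending this request to a TimeGate for the resource yields a redirect to a Memento (archived version), which is what allows the annotation to be reconstructed against the representation it originally targeted.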

  8. Clinical application of MR susceptibility weighted imaging in cerebrovascular diseases

    International Nuclear Information System (INIS)

    Zhu Wenzhen; Qi Jianpin; Shen Hao; Wang Chengyuan; Xia Liming; Hu Junwu; Feng Dingyi

    2007-01-01

    Objective: To assess the clinical application value of susceptibility weighted imaging (SWI) in cerebrovascular diseases. Method: Twenty-three patients with cerebrovascular disease were investigated, including 7 cases of cavernoma, 4 of venous hemangioma, 3 of small AVM, 1 of Sturge-Weber syndrome, 2 of cerebral venous sinus thrombosis, and 6 of chronic cerebral infarction. All patients underwent standard MRI and SWI, and most also underwent enhanced T1WI and MRA. The corrected phase (CP) values were obtained at the lesions and control areas. Results: The average CP values of the lesions and the control areas were -0.112±0.032 and -0.013±0.004, respectively (t=2.167, P<0.05) [...] T2WI. The cavernoma could be differentiated from the hemorrhage within lesions. Moreover, multiple microcavernomas were detected on SWI. In 4 cases of venous hemangioma, SWI detected spider-like lesions with more hair-thin medullary veins adjacent to the dilated draining vein than contrast-enhanced MRI. In 3 cases of small AVM, SWI was more advantageous than MRA in clearly depicting the small feeding artery. In 1 case of Sturge-Weber syndrome, SWI demonstrated large areas of calcification together with the abnormal vessels on the cerebral surface and in the deep cerebrum. In 2 cases of cerebral venous sinus thrombosis, the deep draining veins and superficial venous rete were generally dilated and winding, and in one case the hemorrhagic lesions could be detected earlier than on conventional MR images. In 6 cases of cerebral infarction, old hemorrhage was clearly displayed within the lesions. Conclusion: SWI has clear advantages over conventional MRI and MRA in detecting low-flow cerebral vascular malformations, identifying microbleeds and cerebral infarction accompanied by hemorrhage, and showing the dilation of deep or superficial cerebral veins in patients with cerebral venous sinus thrombosis. Moreover, SWI can show the phase contrast between the lesions and the control areas. (authors)

  9. Crowdsourcing and annotating NER for Twitter #drift

    DEFF Research Database (Denmark)

    Fromreide, Hege; Hovy, Dirk; Søgaard, Anders

    2014-01-01

    We present two new NER datasets for Twitter: a manually annotated set of 1,467 tweets (kappa=0.942) and a set of 2,975 expert-corrected, crowdsourced NER-annotated tweets from the dataset described in Finin et al. (2010). In our experiments with these datasets, we observe two important points: (a) language drift on Twitter is significant, and while off-the-shelf systems have been reported to perform well on in-sample data, they often perform poorly on new samples of tweets; (b) state-of-the-art performance across various datasets can be obtained from crowdsourced annotations, making it more feasible...

  10. Recent advances in computational methods and clinical applications for spine imaging

    CERN Document Server

    Glocker, Ben; Klinder, Tobias; Li, Shuo

    2015-01-01

    This book contains the full papers presented at the MICCAI 2014 workshop on Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together scientists and clinicians in the field of computational spine imaging. The chapters included in this book present and discuss new advances and challenges in this field, covering a range of methods and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modeling, simulation and surgical planning, image-guided robot-assisted surgery, and image-based diagnosis. The book also includes papers and reports from the first challenge on vertebra segmentation held at the workshop.

  11. High definition ultrasound imaging for battlefield medical applications

    Energy Technology Data Exchange (ETDEWEB)

    Kwok, K.S.; Morimoto, A.K.; Kozlowski, D.M.; Krumm, J.C.; Dickey, F.M. [Sandia National Labs., Albuquerque, NM (United States); Rogers, B; Walsh, N. [Texas Univ. Health Science Center, San Antonio, TX (United States)

    1996-06-23

    A team has developed an improved-resolution ultrasound system for low-cost diagnostics. This paper describes the development of an ultrasound-based imaging system capable of generating 3D images showing surface and subsurface tissue and bone structures. We include results of a comparative study between images obtained from X-ray Computed Tomography (CT) and ultrasound. We found that the quality of ultrasound images compares favorably with those from CT: volumetric and surface data extracted from these images agreed to within 7% between the ultrasound and CT scans. We also include images of porcine abdominal scans from two different sets of animal trials.

  12. Raman chemical imaging technology for food and agricultural applications

    Science.gov (United States)

    This paper presents Raman chemical imaging technology for inspecting food and agricultural products. The paper puts emphasis on introducing and demonstrating Raman imaging techniques for practical uses in food analysis. The main topics include Raman scattering principles, Raman spectroscopy measurem...

  13. Magnetic resonance imaging- physical principles and clinical application

    International Nuclear Information System (INIS)

    Tavri, O.J.

    1996-01-01

    Advances in imaging techniques for better and more efficient diagnosis have become possible with the advent of Magnetic Resonance Imaging (MRI). The modalities of the MRI technique, their indications, and their diagnostic uses are described with typical examples. (author)

  14. Simulation of deposed dose and application of image processing

    International Nuclear Information System (INIS)

    Dadi, A.; Fahli, A.

    1994-01-01

    In gamma radiation processing, photons from radioactive isotopes are absorbed in matter, where they lose part or all of the energy they carry. At every point P of the irradiated material, the absorbed dose D is the energy deposited in the volume dV (centred on P) divided by the mass dm of this volume. The radiation effects at every point of the material depend directly on the locally deposited energy, so for technical applications it is very important to know how the dose is deposited in the irradiated material. Because of the random nature of the processes that can take place in the material (photoelectric effect, Compton scattering, and pair production (e+, e-)), and because arbitrary geometries must be treated, we use Monte Carlo simulation in this work to describe the phenomenon and reproduce the irradiation process entirely in the computer, for photons with energies from a few keV to several MeV in any element, compound, or mixture. The dose rate is then calculated at every point P(x,y,z), stored first as a real data file, then transformed to a byte data file, and finally displayed as a high-resolution 16-colour digital image, allowing analysis of the dose variation in the material. 2 figs., 2 refs. (author)
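    The Monte Carlo idea can be sketched with a deliberately simplified 1-D toy model. This is not the authors' simulation: real photon transport samples photoelectric, Compton, and pair-production events individually, whereas this sketch draws exponential free paths and deposits a fixed fraction of the photon's remaining energy per interaction:

```python
import random

def depth_dose(n_photons=10000, mu=1.0, depth=5.0, bins=10, seed=0):
    """Toy 1-D Monte Carlo: photons travel exponential free paths
    (mean 1/mu) into a slab of the given depth and deposit half their
    remaining energy at each interaction. Returns the energy deposited
    per depth bin, in arbitrary units."""
    rng = random.Random(seed)
    dose = [0.0] * bins
    for _ in range(n_photons):
        x, e = 0.0, 1.0
        while e > 1e-3:
            x += rng.expovariate(mu)   # sample the next free path
            if x >= depth:
                break                  # photon escapes the slab
            dose[int(x / depth * bins)] += e / 2
            e /= 2
    return dose
```

Even this crude model reproduces the qualitative exponential fall-off of deposited dose with depth that the full simulation quantifies.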

  15. Possible application of an imaging plate to space radiation dosimetry

    International Nuclear Information System (INIS)

    Ohuchi, Hiroko; Yamadera, Akira

    2002-01-01

    Fading correction plays an important role in the application of commercially available BaBrF:Eu²⁺ phosphor imaging plates (IPs) to dosimetry. We successfully determined a fading correction equation, which is a function of elapsed time and absolute temperature, as the sum of several exponentially decaying components having different half-lives. In this work, a new method was developed to eliminate a short half-life component by annealing the IP and estimating the radiation dose from the long half-life components. Annealing decreases the effect of fading on the estimated dose; however, it also causes a loss of photo-stimulated luminescence (PSL). Considering an IP as an integral detector for a specific period of up to one month, the practically optimum conditions for quantitative measurement with two types of IP (BAS-TR and BAS-MS) were evaluated using the fading correction equation, which was obtained after irradiation with a ²⁴⁴Cm alpha-ray source having a specific radioactivity of 1,638.5 Bq/cm², including beta and gamma rays (alpha energies of 5.763 and 5.805 MeV). Annealing at 80 °C for 24 hours after one month of irradiation using BAS-MS should minimize the effect of the elapsed time while retaining sufficient sensitivity. The results demonstrate new possibilities for radiation dosimetry offered by the use of an IP. (author)
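    A fading model of the form described, a sum of exponentially decaying components with different half-lives, can be sketched as follows. The two-component weights and half-lives are purely illustrative, not the paper's fitted values:

```python
import math

def fading_fraction(hours, components):
    """Remaining PSL fraction after `hours`, modelled as a sum of
    exponentially decaying components given as (weight, half_life_hours)
    pairs whose weights sum to 1."""
    return sum(w * 0.5 ** (hours / t_half) for w, t_half in components)

# Hypothetical two-component model: a fast and a slow component.
MODEL = [(0.3, 5.0), (0.7, 700.0)]
```

Annealing away the fast component leaves only the slowly decaying term, so the signal read out weeks later needs a much smaller (and less temperature-sensitive) correction.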

  16. A compatible electrocutaneous display for functional magnetic resonance imaging application.

    Science.gov (United States)

    Hartwig, V; Cappelli, C; Vanello, N; Ricciardi, E; Scilingo, E P; Giovannetti, G; Santarelli, M F; Positano, V; Pietrini, P; Landini, L; Bicchi, A

    2006-01-01

    In this paper we propose an MR (magnetic resonance) compatible electrocutaneous stimulator able to inject an electric current, variable in amplitude and frequency, into the fingertips in order to excite tactile skin receptors (mechanoreceptors). The goal is to evoke specific tactile sensations by selectively stimulating skin receptors with an electric current in place of mechanical stimuli. The field of application ranges from functional magnetic resonance imaging (fMRI) tactile studies to augmented reality technology. The device proposed here is designed using safety criteria in order to comply with the voltage and current thresholds permitted by regulations. Moreover, MR safety and compatibility criteria were considered in order to perform experiments inside the MR scanner during fMRI acquisition for functional brain activation analysis. Psychophysical laboratory tests were performed in order to characterize the different evoked tactile sensations. After verifying the device's MR safety and compatibility on a phantom, a test on a human subject during fMRI acquisition was performed to visualize the brain areas activated by the simulated tactile sensation.

  17. Application of Learning Theories on Medical Imaging Education

    Directory of Open Access Journals (Sweden)

    Osama A. Mabrouk Kheiralla

    2018-05-01

    The main objective of the education process is that students learn well, rather than merely that educators teach well. If radiologists get involved in medical education, it is important for them to do so with sound knowledge of how students learn. Research has shown that most teachers in the field of medical education, including diagnostic imaging, are doctors or technicians who never had an opportunity to study the basics of learning. Mostly they have gained their knowledge by watching other educators, and they rely largely on personal skill and experience in doing their job. This hinders them from conveying knowledge in an effective and scientific way, and they find themselves lagging behind the latest advances in medical education and educational research, which leads to negative cognitive outcomes among learners. This article presents an overview of three of the most influential basic theories of learning, upon which many teachers rely in their practical applications, and which should be considered by radiologists who act as medical educators.

  18. Meteor showers an annotated catalog

    CERN Document Server

    Kronk, Gary W

    2014-01-01

    Meteor showers are among the most spectacular celestial events that may be observed by the naked eye, and have been the object of fascination throughout human history. In “Meteor Showers: An Annotated Catalog,” the interested observer can access detailed research on over 100 annual and periodic meteor streams in order to capitalize on these majestic spectacles. Each meteor shower entry includes details of their discovery, important observations and orbits, and gives a full picture of duration, location in the sky, and expected hourly rates. Armed with a fuller understanding, the amateur observer can better view and appreciate the shower of their choice. The original book, published in 1988, has been updated with over 25 years of research in this new and improved edition. Almost every meteor shower study is expanded, with some original minor showers being dropped while new ones are added. The book also includes breakthroughs in the study of meteor showers, such as accurate predictions of outbursts as well ...

  19. Digital image processing in NDT : Application to industrial radiography

    International Nuclear Information System (INIS)

    Aguirre, J.; Gonzales, C.; Pereira, D.

    1988-01-01

    Digital image processing techniques are applied to image enhancement and to discontinuity detection and characterization in radiographic testing. Processing is performed mainly by image histogram modification, edge enhancement, texture analysis, and user-interactive segmentation. Implementation was achieved on a microcomputer with a video image capture system. Results are compared with those obtained through more specialized equipment: mainframe computers and high-precision mechanical scanning digitisers. The procedures are intended as a preliminary stage for automatic defect detection.
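    One of the techniques listed, histogram modification, can be sketched as global histogram equalisation (a generic implementation, not the authors' code): grey levels are remapped through the normalised cumulative histogram, spreading out the contrast of low-contrast radiographs.

```python
import numpy as np

def equalize(img, levels=256):
    """Global histogram equalisation: map each grey level through the
    normalised cumulative histogram so output levels are spread over
    the full dynamic range."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                              # normalise to [0, 1]
    return np.round(cdf[img] * (levels - 1)).astype(img.dtype)
```

The mapping is monotone, so relative ordering of intensities (and hence discontinuity locations) is preserved while faint features become visible.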

  20. Retrieval-based Face Annotation by Weak Label Regularized Local Coordinate Coding.

    Science.gov (United States)

    Wang, Dayong; Hoi, Steven C H; He, Ying; Zhu, Jianke; Mei, Tao; Luo, Jiebo

    2013-08-02

    Retrieval-based face annotation is a promising paradigm for mining massive web facial images for automated face annotation. This paper addresses a critical problem of this paradigm, i.e., how to effectively perform annotation by exploiting similar facial images and their weak labels, which are often noisy and incomplete. In particular, we propose an effective Weak Label Regularized Local Coordinate Coding (WLRLCC) technique, which exploits the principle of local coordinate coding in learning sparse features, and employs the idea of graph-based weak label regularization to enhance the weak labels of the similar facial images. We present an efficient optimization algorithm to solve the WLRLCC task. We conduct extensive empirical studies on two large-scale web facial image databases: (i) a Western celebrity database with a total of 6,025 persons and 714,454 web facial images, and (ii) an Asian celebrity database with 1,200 persons and 126,070 web facial images. The encouraging results validate the efficacy of the proposed WLRLCC algorithm. To further improve efficiency and scalability, we also propose a PCA-based approximation scheme and an offline approximation scheme (AWLRLCC), which generally maintain comparable results but significantly reduce the time cost. Finally, we show that WLRLCC can also tackle two existing face annotation tasks with promising performance.
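    As a point of reference for what WLRLCC improves upon (this is a simple baseline sketch, not the WLRLCC algorithm itself), retrieval-based annotation can be bootstrapped by similarity-weighted voting over the weak labels of the retrieved neighbour images:

```python
from collections import defaultdict

def annotate(neighbors, top_k=3):
    """Baseline retrieval-based annotation: `neighbors` is a list of
    (similarity, weak_labels) pairs for the retrieved facial images.
    Each neighbour votes for its weak labels with weight equal to its
    similarity; the best-supported names are returned."""
    votes = defaultdict(float)
    for similarity, weak_labels in neighbors:
        for name in weak_labels:
            votes[name] += similarity
    ranked = sorted(votes, key=votes.get, reverse=True)
    return ranked[:top_k]
```

Because the weak labels are noisy and incomplete, this naive vote is exactly where sparse coding of features and graph-based label regularization, as in WLRLCC, can add robustness.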