WorldWideScience

Sample records for image markup project

  1. The caBIG Annotation and Image Markup project.

    Science.gov (United States)

    Channin, David S; Mongkolwat, Pattanasak; Kleper, Vladimir; Sepukar, Kastubh; Rubin, Daniel L

    2010-04-01

Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of meta-data about whom, where, and how the image was acquired, DICOM says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with both of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.
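The annotation/markup distinction the abstract draws can be made concrete with a small serialization sketch. This is illustrative Python only; every element name below is invented, not the actual AIM schema:

```python
import xml.etree.ElementTree as ET

def build_annotation(finding, points):
    """Serialize one simplified annotation: the textual finding plus the
    graphical markup (polyline vertices) that depicts it on the image.
    Element names are invented for illustration, not the AIM schema."""
    root = ET.Element("ImageAnnotation")
    ET.SubElement(root, "Finding").text = finding
    markup = ET.SubElement(root, "Markup", {"type": "polyline"})
    for x, y in points:
        ET.SubElement(markup, "Point", {"x": str(x), "y": str(y)})
    return ET.tostring(root, encoding="unicode")

xml_doc = build_annotation("spiculated mass", [(10, 12), (14, 9), (17, 15)])
```

Because both halves live in one machine-readable document, the markup can be redrawn and the annotation queried without parsing free text — the interoperability the AIM project is after.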

  2. Managing and Querying Image Annotation and Markup in XML.

    Science.gov (United States)

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standards-based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of the AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through a native extension of the XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.
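The kind of content-based query the paper supports with native XQuery can be sketched, well short of an XML database, with plain Python filtering of annotation documents. The document structure here is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A toy collection of annotation documents; the structure is invented.
docs = [
    "<Annotation><Finding>nodule</Finding><Diameter>8</Diameter></Annotation>",
    "<Annotation><Finding>nodule</Finding><Diameter>14</Diameter></Annotation>",
    "<Annotation><Finding>cyst</Finding><Diameter>5</Diameter></Annotation>",
]

def find_large_nodules(xml_docs, min_mm):
    """Select diameters of 'nodule' annotations at least min_mm wide,
    the way an XQuery where-clause would filter a native XML store."""
    hits = []
    for doc in xml_docs:
        root = ET.fromstring(doc)
        if root.findtext("Finding") == "nodule":
            diameter = float(root.findtext("Diameter"))
            if diameter >= min_mm:
                hits.append(diameter)
    return hits

large = find_large_nodules(docs, 10)  # → [14.0]
```

A real deployment pushes this predicate into the database engine instead of iterating in application code, which is the performance point the abstract raises.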

  3. Application of whole slide image markup and annotation for pathologist knowledge capture.

    Science.gov (United States)

    Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H

    2013-01-01

    The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μ to less than 4 μ in the x-axis and from 17 μ to 6 μ in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
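Trilateration here means fixing a markup's position by its distances to known reference points, so the same annotation can be re-placed on a new scan of the same slide. A minimal 2D sketch with invented coordinates:

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Recover 2D coordinates from distances to three fixed reference
    points by subtracting the circle equations to get a linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Reference points as located in a new scan of the same glass slide.
refs = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
target = (30.0, 40.0)            # where the stored markup should land
dists = [math.dist(target, p) for p in refs]
recovered = trilaterate(*refs, *dists)  # ≈ (30.0, 40.0)
```

Storing distances to slide-fixed landmarks rather than raw pixel coordinates is what makes the markup survive rescanning, since each new WSI has its own pixel grid.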

  4. Informatics in radiology: An open-source and open-access cancer biomedical informatics grid annotation and image markup template builder.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Channin, David S; Kleper, Vladimir; Rubin, Daniel L

    2012-01-01

    In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and image markup (AIM), a project supported by the National Cancer Institute's cancer biomedical informatics grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.

  5. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel

    2014-12-01

    Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institute of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.

  6. Some Initial Reflections on XML Markup for an Image-Based Electronic Edition of the Brooklyn Museum Aramaic Papyri

    Directory of Open Access Journals (Sweden)

    F.W. Dobbs-Allsopp

    2016-04-01

A collaborative project of the Brooklyn Museum and a number of allied institutions, including Princeton Theological Seminary and West Semitic Research, the Digital Brooklyn Museum Aramaic Papyri (DBMAP) is to be both an image-based electronic facsimile edition of the important collection of Aramaic papyri from Elephantine housed at the Brooklyn Museum and an archival resource to support ongoing research on these papyri and the public dissemination of knowledge about them. In the process of building out a (partial) prototype of the edition, to serve as a proof of concept, we have discovered little field-specific discussion that might guide our markup decisions. Consequently, here our chief ambition is to initiate such a conversation. After a brief overview of DBMAP, we offer some initial reflection on and assessment of XML markup schemes specifically for Semitic texts from the ancient Near East that comply with TEI, CSE, and MEP guidelines. We take as our example BMAP 3 (=TAD B3.4) and we focus on markup as pertains to the editorial transcription of this documentary text and to the linguistic analysis of the text’s language.

  7. iPad: Semantic annotation and markup of radiological images.

    Science.gov (United States)

    Rubin, Daniel L; Rodriguez, Cesar; Shah, Priyanka; Beaulieu, Chris

    2008-11-06

Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.

  8. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    Science.gov (United States)

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
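The offset linkage between the XML index and the binary spectra file can be sketched in a few lines. This is a toy in the spirit of imzML, not the real schema (actual imzML uses mzML controlled-vocabulary terms and separate m/z and intensity arrays):

```python
import io
import struct
import uuid
import xml.etree.ElementTree as ET

# Write two toy "spectra" into one binary blob, recording byte offsets.
spectra = [[100.1, 200.2], [150.5, 250.7, 300.9]]
blob = io.BytesIO()
offsets = []
for s in spectra:
    offsets.append((blob.tell(), len(s)))
    blob.write(struct.pack(f"<{len(s)}d", *s))

# Build a minimal XML index; the element names are simplified.
uid = str(uuid.uuid4())  # both files would carry this identifier
root = ET.Element("imagingIndex", {"uuid": uid})
for off, n in offsets:
    ET.SubElement(root, "spectrum", {"offset": str(off), "length": str(n)})

def read_spectrum(binary, index_root, i):
    """Seek into the binary file using the offset stored in the XML index."""
    node = index_root.findall("spectrum")[i]
    off, n = int(node.get("offset")), int(node.get("length"))
    binary.seek(off)
    return list(struct.unpack(f"<{n}d", binary.read(8 * n)))

second = read_spectrum(blob, root, 1)  # → [150.5, 250.7, 300.9]
```

Keeping the bulk data binary while the metadata stays in XML is what lets tools read one spectrum at a time from very large acquisitions instead of parsing the whole dataset.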

  9. Markups and Exporting Behavior

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic Michel Patrick

    2012-01-01

…estimates of plant-level markups without specifying how firms compete in the product market. We rely on our method to explore the relationship between markups and export behavior. We find that markups are estimated significantly higher when controlling for unobserved productivity; that exporters charge…

  10. A Leaner, Meaner Markup Language.

    Science.gov (United States)

    Online & CD-ROM Review, 1997

    1997-01-01

In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of Standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…

  11. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    Science.gov (United States)

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.
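What an SBML file encodes, stripped of all XML, is a set of species and rate rules like the toy reversible reaction below; a simple Euler integration shows the dynamics such a model implies. The species names, rate constants, and amounts are all invented:

```python
# A reversible reaction S <-> P of the kind an SBML model can encode as
# two mass-action reactions: S -> P at rate k1*S and P -> S at rate k2*P.
def simulate(k1, k2, s0, p0, dt, steps):
    """Forward-Euler integration of dS/dt = k2*P - k1*S (P mirrors S)."""
    s, p = s0, p0
    for _ in range(steps):
        v_fwd, v_rev = k1 * s, k2 * p
        s += dt * (v_rev - v_fwd)
        p += dt * (v_fwd - v_rev)
    return s, p

# Over 20 time units the pool relaxes toward the k1*S = k2*P equilibrium
# while total mass S + P stays conserved.
s, p = simulate(k1=0.5, k2=0.25, s0=10.0, p0=0.0, dt=0.01, steps=2000)
```

The point of a standard exchange format is exactly that this model, once written as SBML, can be simulated by any compliant tool rather than by one lab's hand-rolled integrator.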

  12. The Geometry Description Markup Language

    Institute of Scientific and Technical Information of China (English)

    Radovan Chytracek

    2001-01-01

Currently, a lot of effort is being put on designing complex detectors. A number of simulation and reconstruction frameworks and applications have been developed with the aim to make this job easier. A very important role in this activity is played by the geometry description of the detector apparatus layout and its working environment. However, no real common approach to represent geometry data is available and such data can be found in various forms starting from custom semi-structured text files, source code (C/C++/FORTRAN), to XML and database solutions. The XML (Extensible Markup Language) has proven to provide an interesting approach for describing detector geometries, with several different but incompatible XML-based solutions existing. Therefore, interoperability and geometry data exchange among different frameworks is not possible at present. This article introduces a markup language for geometry descriptions. Its aim is to define a common approach for sharing and exchanging of geometry description data. Its requirements and design have been driven by experience and user feedback from existing projects which have their geometry description in XML.

  13. Geography Markup Language

    OpenAIRE

    Burggraf, David S

    2006-01-01

    Geography Markup Language (GML) is an XML application that provides a standard way to represent geographic information. GML is developed and maintained by the Open Geospatial Consortium (OGC), which is an international consortium consisting of more than 250 members from industry, government, and university departments. Many of the conceptual models described in the ISO 19100 series of geomatics standards have been implemented in GML, and it is itself en route to becoming an ISO Standard (TC/2...

  14. GOATS Image Projection Component

    Science.gov (United States)

    Haber, Benjamin M.; Green, Joseph J.

    2011-01-01

    When doing mission analysis and design of an imaging system in orbit around the Earth, answering the fundamental question of imaging performance requires an understanding of the image products that will be produced by the imaging system. GOATS software represents a series of MATLAB functions to provide for geometric image projections. Unique features of the software include function modularity, a standard MATLAB interface, easy-to-understand first-principles-based analysis, and the ability to perform geometric image projections of framing type imaging systems. The software modules are created for maximum analysis utility, and can all be used independently for many varied analysis tasks, or used in conjunction with other orbit analysis tools.
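A geometric image projection of the framing-camera type reduces, at its simplest, to a pinhole model; the sketch below is only a stand-in for what such a module computes, and the focal length and pixel pitch are invented:

```python
# Project a 3D point (camera frame, z = distance along the boresight)
# into pixel coordinates with a pinhole model. This is a simplified
# stand-in for a geometric image projection; the parameters are invented.
def project(point, focal_mm=50.0, pixel_um=10.0):
    x, y, z = point
    u = focal_mm * x / z        # image-plane position, mm
    v = focal_mm * y / z
    scale = 1000.0 / pixel_um   # mm -> pixels
    return (u * scale, v * scale)

# A point offset 100 m cross-track and 50 m along-track at 500 km range.
px = project((100.0, 50.0, 500000.0))
```

Chaining such single-purpose functions — orbit geometry in, pixel coordinates out — is the modular, first-principles style of analysis the abstract describes.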

  15. Markup in Engineering Design: A Discourse

    Directory of Open Access Journals (Sweden)

    Shaofeng Liu

    2010-03-01

Today’s engineering companies are facing unprecedented competition in a global market place. There is now a knowledge-intensive shift towards whole product lifecycle support and collaborative environments. It has become particularly important to capture information, knowledge and experiences about previous designs and the subsequent stages of the product lifecycle, so as to retrieve and reuse such information in new and follow-on design activities. Recently, with the rapid development and adoption of digital technologies, annotation and markup are becoming important tools for information communication, retrieval and management. Such techniques are being increasingly applied to an array of applications and different digital items, such as text documents, 2D images and 3D models. This paper presents a state-of-the-art review of recent research in markup for engineering design, including a number of core markup languages and main markup strategies. Their applications and future utilization in engineering design, including multi-viewpoint of product models, capture of information and rationale across the whole product lifecycle, integration of engineering design processes, and engineering document management, are comprehensively discussed.

  16. Geography Markup Language

    Directory of Open Access Journals (Sweden)

    David S Burggraf

    2006-11-01

Geography Markup Language (GML) is an XML application that provides a standard way to represent geographic information. GML is developed and maintained by the Open Geospatial Consortium (OGC), which is an international consortium consisting of more than 250 members from industry, government, and university departments. Many of the conceptual models described in the ISO 19100 series of geomatics standards have been implemented in GML, and it is itself en route to becoming an ISO Standard (TC/211 CD 19136). An overview of GML together with its implications for the geospatial web is given in this paper.

  17. Electron Emission Projection Imager

    CERN Document Server

    Baturin, Stanislav S

    2016-01-01

A new projection type imaging system is presented. The system can directly image the field emission site distribution on a cathode surface by making use of anode screens in the standard parallel plate configuration. The lateral spatial resolution of the projector is on the order of 1 μm. The imaging sensitivity to the field emission current can be better than the current sensitivity of a typical electrometer, i.e. less than 1 nA.

  18. What Digital Imaging and Communication in Medicine (DICOM) could look like in common object request broker (CORBA) and extensible markup language (XML).

    Science.gov (United States)

    Van Nguyen, A; Avrin, D E; Tellis, W M; Andriole, K P; Arenson, R L

    2001-06-01

Common object request broker architecture (CORBA) is a method for invoking distributed objects across a network. There has been some activity in applying this software technology to Digital Imaging and Communications in Medicine (DICOM), but no documented demonstration of how this would actually work. We report a CORBA demonstration that is functionally equivalent and in some ways superior to the DICOM communication protocol. In addition, in and outside of medicine, there is great interest in the use of extensible markup language (XML) to provide interoperation between databases. An example implementation of the DICOM data structure in XML will also be demonstrated. Using Visibroker ORB from Inprise (Scotts Valley, CA), a test bed was developed to simulate the principal DICOM operations: store, query, and retrieve (SQR). SQR is the most common interaction between a modality device application entity (AE) such as a computed tomography (CT) scanner, and a storage component, as well as between a storage component and a workstation. The storage of a CT study by invoking one of several storage objects residing on a network was simulated and demonstrated. In addition, XML database descriptors were used to facilitate the transfer of DICOM header information between independent databases. CORBA is demonstrated to have great potential for the next version of DICOM. It can provide redundant protection against single points of failure. XML appears to be an excellent method of providing interaction between separate databases managing the DICOM information object model, and may therefore eliminate the common use of proprietary client-server databases in commercial implementations of picture archiving and communication systems (PACS).
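The XML side of the demonstration — carrying DICOM header attributes between independent databases — can be sketched as follows. The element and keyword names are illustrative, not a standardized DICOM-XML encoding:

```python
import xml.etree.ElementTree as ET

def header_to_xml(header):
    """Render DICOM-style header attributes as a standalone XML document
    so that independent databases can exchange them. Element and keyword
    names here are invented for illustration."""
    root = ET.Element("DicomHeader")
    for keyword, value in header.items():
        attr = ET.SubElement(root, "Attribute", {"keyword": keyword})
        attr.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_hdr = header_to_xml({"Modality": "CT", "SliceThickness": 1.25})
```

Because the receiving database only needs an XML parser, no knowledge of the sender's internal schema is required — the interoperability argument the paper makes against proprietary client-server databases.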

  19. Endogenous Markups, Firm Productivity and International Trade:

    DEFF Research Database (Denmark)

    Bellone, Flora; Musso, Patrick; Nesta, Lionel

In this paper, we test key micro-level theoretical predictions of Melitz and Ottaviano (MO) (2008), a model of international trade with heterogeneous firms and endogenous mark-ups. At the firm-level, the MO model predicts that: 1) firm markups are negatively related to domestic market size; 2) markups are positively related to firm productivity; 3) markups are negatively related to import penetration; 4) markups are positively related to firm export intensity and markups are higher on the export market than on the domestic ones in the presence of trade barriers and/or if competitors…

  20. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

We derive an estimating equation to estimate markups using the insight of Hall (1986) and the control function approach of Olley and Pakes (1996). We rely on our method to explore the relationship between markups and export behavior using plant-level data. We find significantly higher markups when we control for unobserved productivity shocks. Furthermore, we find significantly higher markups for exporting firms and present new evidence on markup-export status dynamics. More specifically, we find that firms' markups significantly increase (decrease) after entering (exiting) export markets. We…
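The markup being estimated is the ratio of price to marginal cost, which De Loecker and Warzynski recover as a variable input's output elasticity divided by that input's share of revenue. A toy calculation with invented numbers:

```python
# Markup = output elasticity of a variable input / that input's revenue
# share. The elasticity would come from an estimated production function;
# all numbers below are invented for illustration.
def markup(output_elasticity, input_expenditure, revenue):
    revenue_share = input_expenditure / revenue
    return output_elasticity / revenue_share

mu = markup(output_elasticity=0.6, input_expenditure=40.0, revenue=100.0)
# mu is about 1.5: price roughly 50% above marginal cost
```

Under perfect competition the elasticity and the revenue share coincide and the ratio is 1; a ratio above 1 is the market power the paper measures, and the elasticity must be estimated while controlling for unobserved productivity, which is why the control function approach matters.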

  1. TEI Standoff Markup - A work in progress

    NARCIS (Netherlands)

    Spadini, E.; Turska, Magdalena; Broughton, Misha

    2015-01-01

Markup is said to be standoff, or external, when the markup data is placed outside of the text it is meant to tag. One of the most widely recognized limitations of inline XML markup is its inability to cope with element overlap; standoff has been considered as a possible solution to…
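A minimal illustration of why standoff markup sidesteps the overlap problem: annotations live outside the text as offset tuples, so two spans may cross freely — something paired inline XML tags cannot express. The text and tag names are invented:

```python
# Standoff markup keeps annotations outside the text as (start, end, tag)
# offsets, so overlapping spans are unproblematic.
text = "standoff markup stores tags externally"

annotations = [
    (0, 15, "topic"),   # "standoff markup"
    (9, 22, "phrase"),  # "markup stores" - overlaps the span above
]

def spans(doc, standoff):
    """Resolve each standoff annotation to its tagged substring."""
    return [(tag, doc[a:b]) for a, b, tag in standoff]

resolved = spans(text, annotations)
```

Encoding the same two spans inline would require `<topic>` and `<phrase>` elements that cross, which is ill-formed XML; the offsets carry the same information without touching the text.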

  3. LOG2MARKUP: State module to transform a Stata text log into a markup document

    DEFF Research Database (Denmark)

    2016-01-01

log2markup extracts parts of the text version of the Stata log command and transforms the logfile into a markup-based document with the same name, but with extension markup (or otherwise specified in option extension) instead of log. The author usually uses markdown for writing documents. However, other users may decide on all sorts of markup languages, e.g. HTML or LaTeX. The key is that markup of Stata code and Stata output can be set by the options.
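log2markup itself is a Stata module with its own options; purely to illustrate the transformation it performs, here is a hypothetical Python rendition that fences Stata log lines as code and passes comment lines through as prose:

```python
# Fenced-markdown rendition of a Stata log: command/output lines go inside
# a code fence, "* " comment lines become prose. A hypothetical sketch,
# not log2markup's actual Stata implementation or option handling.
FENCE = "`" * 3  # a literal markdown code fence

def log_to_markdown(lines):
    out, in_code = [], False
    for line in lines:
        is_prose = line.startswith("* ")
        if is_prose and in_code:
            out.append(FENCE)            # close the open code block
            in_code = False
        elif not is_prose and not in_code:
            out.append(FENCE + "stata")  # open a Stata-highlighted block
            in_code = True
        out.append(line[2:] if is_prose else line)
    if in_code:
        out.append(FENCE)
    return "\n".join(out)

md = log_to_markdown(["* Summary statistics", ". summarize price", "(output)"])
```

The same state machine, pointed at HTML `<pre>` tags or LaTeX `verbatim` environments instead of fences, yields the other target markups the abstract mentions.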

  4. Changes in latent fingerprint examiners' markup between analysis and comparison.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2015-02-01

    After the initial analysis of a latent print, an examiner will sometimes revise the assessment during comparison with an exemplar. Changes between analysis and comparison may indicate that the initial analysis of the latent was inadequate, or that confirmation bias may have affected the comparison. 170 volunteer latent print examiners, each randomly assigned 22 pairs of prints from a pool of 320 total pairs, provided detailed markup documenting their interpretations of the prints and the bases for their comparison conclusions. We describe changes in value assessments and markup of features and clarity. When examiners individualized, they almost always added or deleted minutiae (90.3% of individualizations); every examiner revised at least some markups. For inconclusive and exclusion determinations, changes were less common, and features were added more frequently when the image pair was mated (same source). Even when individualizations were based on eight or fewer corresponding minutiae, in most cases some of those minutiae had been added during comparison. One erroneous individualization was observed: the markup changes were notably extreme, and almost all of the corresponding minutiae had been added during comparison. Latents assessed to be of value for exclusion only (VEO) during analysis were often individualized when compared to a mated exemplar (26%); in our previous work, where examiners were not required to provide markup of features, VEO individualizations were much less common (1.8%).

  5. Projecting Images on a Sphere

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A system for projecting images on an object with a reflective surface. A plurality of image projectors are spaced around the object and synchronized such that each...

  6. Astronomical Instrumentation System Markup Language

    Science.gov (United States)

    Goldbaum, Jesse M.

    2016-05-01

The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed, followed by the reasons why XML was chosen as the format. Next, it is shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments as well as one for a sample AIS are provided. The files demonstrate how AISML can be utilized for various tasks, from web page generation and programming interfaces to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.

  7. Wine Price Markup in California Restaurants

    OpenAIRE

    Amspacher, William

    2011-01-01

    The study quantifies the relationship between retail wine price and restaurant mark-up. Ordinary Least Squares regressions were run to estimate how restaurant mark-up responded to retail price. Separate regressions were run for white wine, red wine, and both red and white combined. Both slope and intercept coefficients for each of these regressions were highly significant and indicated the expected inverse relationship between retail price and mark-up.

  8. Answer Markup Algorithms for Southeast Asian Languages.

    Science.gov (United States)

    Henry, George M.

    1991-01-01

    Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…

  9. Data Display Markup Language (DDML) Handbook

    Science.gov (United States)

    2017-01-31

Moreover, the tendency of T&E is towards a plug-and-play-like data acquisition system that requires standard languages and modules for data displays. (Telemetry Group, Document 127-17: Data Display Markup Language (DDML) Handbook, January 2017. Distribution A: approved for…)

  10. Electric Field Imaging Project

    Science.gov (United States)

    Wilcutt, Terrence; Hughitt, Brian; Burke, Eric; Generazio, Edward

    2016-01-01

    NDE historically has focused technology development in propagating wave phenomena with little attention to the field of electrostatics and emanating electric fields. This work is intended to bring electrostatic imaging to the forefront of new inspection technologies, and new technologies in general. The specific goals are to specify the electric potential and electric field including the electric field spatial components emanating from, to, and throughout volumes containing objects or in free space.

  11. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

The empirical methods that were developed in empirical industrial organization often rely on the availability of very detailed market-level data with information on prices, quantities sold, characteristics of products and more recently supplemented with consumer-level attributes. Often, both researchers and government agencies cannot rely on such detailed data, but still need an assessment of whether changes in the operating environment of firms had an impact on markups and therefore on consumer surplus. In this paper, we derive an estimating equation to estimate markups using standard production plant-level data based on the insight of Hall (1986) and the control function approach of Olley and Pakes (1996). We rely on our method to explore the relationship between markups and export behavior using plant-level data. We find that i) markups are estimated significantly higher when controlling for unobserved productivity, ii) exporters charge on average higher markups and iii) firms' markups increase (decrease) upon export entry (exit). We see these findings as a first step…

  12. The Accelerator Markup Language and the Universal Accelerator Parser

    Energy Technology Data Exchange (ETDEWEB)

    Sagan, D.; Forster, M.; /Cornell U., LNS; Bates, D.A.; /LBL, Berkeley; Wolski, A.; /Liverpool U. /Cockcroft Inst. Accel. Sci. Tech.; Schmidt, F.; /CERN; Walker, N.J.; /DESY; Larrieu, T.; Roblin, Y.; /Jefferson Lab; Pelaia, T.; /Oak Ridge; Tenenbaum, P.; Woodley, M.; /SLAC; Reiche, S.; /UCLA

    2006-10-06

    A major obstacle to collaboration on accelerator projects has been the sharing of lattice description files between modeling codes. To address this problem, a lattice description format called Accelerator Markup Language (AML) has been created. AML is based upon the standard eXtensible Markup Language (XML) format; this provides the flexibility for AML to be easily extended to satisfy changing requirements. In conjunction with AML, a software library, called the Universal Accelerator Parser (UAP), is being developed to speed the integration of AML into any program. The UAP is structured to make it relatively straightforward (by giving appropriate specifications) to read and write lattice files in any format. This will allow programs that use the UAP code to read a variety of different file formats. Additionally, this will greatly simplify conversion of files from one format to another. Currently, besides AML, the UAP supports the MAD lattice format.


  14. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

    Estimating markups has a long tradition in industrial organization and international trade. Economists and policy makers are interested in measuring the effect of various competition and trade policies on market power, typically measured by markups. The empirical methods that were developed in empirical industrial organization often rely on the availability of very detailed market-level data with information on prices, quantities sold, characteristics of products and, more recently, supplemented with consumer-level attributes. Often, both researchers and government agencies cannot rely on such detailed data … We see these findings as a first step in opening up the productivity-export black box, and provide a potential explanation for the big measured productivity premia for firms entering export markets.
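    The core insight behind this estimation approach (Hall 1986; De Loecker and Warzynski) is that, under cost minimization, the markup equals a variable input's output elasticity divided by that input's share of revenue. A toy numeric sketch, with invented numbers:

```python
# Markup from the production approach: mu = theta / alpha, where theta
# is the output elasticity of a variable input (here labor) and alpha
# is that input's expenditure share in revenue. Numbers are invented.
theta_labor = 0.6          # assumed output elasticity of labor
wage_bill = 300.0          # w * L, total labor expenditure
revenue = 1000.0           # P * Q, total revenue
labor_share = wage_bill / revenue
markup = theta_labor / labor_share
print(round(markup, 2))    # 2.0, i.e. price is twice marginal cost
```

    In practice theta_labor must itself be estimated from plant-level production data while controlling for unobserved productivity, which is where the bulk of the econometric work in this literature lies.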

  15. Instrument Remote Control via the Astronomical Instrument Markup Language

    Science.gov (United States)

    Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard

    1998-01-01

    The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture that combines the platform-independent processing capabilities of Java with the power of Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent, as well as human-readable manner, has been developed. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (API) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control apply to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.
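    The instrument-description idea can be sketched as follows; the XML tags are hypothetical, not drawn from the actual AIML/IML schemas, but they show how a generic client could discover an instrument's commands from markup alone:

```python
import xml.etree.ElementTree as ET

# Hypothetical IML-style instrument description. Tag names are
# illustrative only; the point is that a generic client can learn
# the command set from the markup rather than from hard-coded logic.
desc = """
<instrument name="demo_spectrometer">
  <command name="setExposure">
    <argument name="seconds" type="float" min="0.1" max="600"/>
  </command>
  <command name="readFrame"/>
</instrument>
"""

root = ET.fromstring(desc)
# Map each command to the list of argument names it accepts.
commands = {c.get("name"): [a.get("name") for a in c.findall("argument")]
            for c in root.findall("command")}
print(commands)  # {'setExposure': ['seconds'], 'readFrame': []}
```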

  16. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF). The MAF projection exploits the fact that interesting phenomena in images typically exhibit spatial autocorrelation. The analysis is based on near-infrared hyperspectral images of maize grains, demonstrating the superiority of the kernel-based MAF method.
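    A minimal sketch of a kernel-based subspace projection (kernel PCA with an RBF kernel) using NumPy; the MAF variant described above additionally exploits spatial shifts between neighboring pixels, which is omitted here for brevity:

```python
import numpy as np

# Kernel PCA sketch: map samples into a kernel feature space, center
# the kernel matrix, and project onto the leading eigenvectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                   # 50 pixels, 3 bands

sq = np.sum(X**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
K = np.exp(-D2 / (2 * 1.0**2))                 # RBF kernel, sigma = 1

n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                 # double-center the kernel

vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
order = np.argsort(vals)[::-1]                 # sort descending
scores = Kc @ vecs[:, order[:2]]               # first two kernel components
print(scores.shape)  # (50, 2)
```

    Replacing the linear covariance eigenproblem with this kernel eigenproblem is what lets the projection capture nonlinear structure in the spectra.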

  17. Threat image projection in CCTV

    Science.gov (United States)

    Neil, David; Thomas, Nicola; Baker, Bob

    2007-10-01

    Operators are key components in a Closed Circuit Television (CCTV) system, being the link between the system technology and its effective use. Operators' performance will largely determine the level of service provided by the system. There have been few studies testing operator performance, while much work has been done to test the performance of the technology. Previous work on CCTV operator performance carried out by the Home Office Scientific Development Branch (HOSDB) has used filmed video and subjects who knew they were undergoing testing, meaning subjects are likely to be concentrating harder on performing well. HOSDB believes that there is a need for a test that would be able to be routinely used in a CCTV control room throughout the course of a normal shift to provide management with operational performance data. Threat Image Projection (TIP) is routinely used in X-Ray baggage scanners to keep operators alert to possible threats. At random intervals, a threat target image is superimposed over the image of the baggage being screened. The operator then responds to this threat. A similar system could be used for CCTV operators. A threat image would be randomly superimposed over the live CCTV feed and the operator would be expected to respond to this.
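    A TIP scheduler of the kind described could be sketched as below, drawing random threat-insertion times across a shift; the parameters are invented for illustration:

```python
import random

# Sketch of TIP-style scheduling: pick random moments during a shift
# at which a threat image is superimposed over the live CCTV feed,
# so operator responses can be logged. Parameters are invented.
random.seed(42)
shift_minutes = 8 * 60      # one 8-hour shift
mean_gap = 45               # average minutes between projected threats

t, events = 0.0, []
while True:
    t += random.expovariate(1 / mean_gap)  # exponential inter-arrival gaps
    if t >= shift_minutes:
        break
    events.append(round(t, 1))
print(len(events), events[:3])
```

    Exponential gaps keep the insertions unpredictable, which matters here: the test is only valid if operators cannot anticipate when a threat will appear.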


  19. TumorML: Concept and requirements of an in silico cancer modelling markup language.

    Science.gov (United States)

    Johnson, David; Cooper, Jonathan; McKeever, Steve

    2011-01-01

    This paper describes the initial groundwork carried out as part of the European Commission funded Transatlantic Tumor Model Repositories project, to develop a new markup language for computational cancer modelling, TumorML. In this paper we describe the motivations for such a language, arguing that current state-of-the-art biomodelling languages are not suited to the cancer modelling domain. We go on to describe the work that needs to be done to develop TumorML, its conceptual design, and which existing markup languages will be used to compose the language specification.

  20. Extensible Markup Language Data Mining System Model

    Institute of Scientific and Technical Information of China (English)

    李炜; 宋瀚涛

    2003-01-01

    The existing data mining methods are mostly focused on relational databases and structured data, not on complex structured data such as that in Extensible Markup Language (XML). By converting the XML document type definition into relational semantics that record the XML data relationships, and by using an XML data mining language, the XML data mining system presents a strategy for mining information from XML.

  1. Hospital markup and operation outcomes in the United States.

    Science.gov (United States)

    Gani, Faiz; Ejaz, Aslam; Makary, Martin A; Pawlik, Timothy M

    2016-07-01

    Although the price hospitals charge for operations has broad financial implications, hospital pricing is not subject to regulation. We sought to characterize national variation in hospital price markup for major cardiothoracic and gastrointestinal operations and to evaluate perioperative outcomes of hospitals relative to hospital price markup. All hospitals in which a patient underwent a cardiothoracic or gastrointestinal procedure were identified using the Nationwide Inpatient Sample for 2012. Markup ratios (ratio of charges to costs) for the total cost of hospitalization were compared across hospitals. Risk-adjusted morbidity, failure-to-rescue, and mortality were calculated using multivariable, hierarchical logistic regression. Among the 3,498 hospitals identified, markup ratios ranged from 0.5-12.2, with a median markup ratio of 2.8 (interquartile range 2.7-3.9). For the 888 hospitals with extreme markup (greatest markup ratio quartile: markup ratio >3.9), the median markup ratio was 4.9 (interquartile range 4.3-6.0), with 10% of these hospitals billing more than 7 times the Medicare-allowable costs (markup ratio ≥7.25). Extreme markup hospitals were more often large (46.3% vs 33.8%, P < …) …, compared with 19.3% (n = 452) and 6.8% (n = 35) of nonprofit and government hospitals, respectively, having an extreme markup ratio. Perioperative morbidity (32.7% vs 26.4%, P < …) was higher at extreme markup hospitals. There is wide variation in hospital markup for cardiothoracic and gastrointestinal procedures, with approximately a quarter of hospital charges being 4 times greater than the actual cost of hospitalization. Hospitals with an extreme markup had greater perioperative morbidity. Copyright © 2016 Elsevier Inc. All rights reserved.
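    The markup-ratio metric used in the study is simply charges divided by costs, with "extreme" defined as the top quartile. A sketch with invented hospital figures:

```python
import statistics

# Markup ratio = total charges / total costs, per hospital.
# The four hospitals below are invented for illustration.
charges = [12000.0, 25000.0, 9000.0, 40000.0]
costs = [4000.0, 9000.0, 4500.0, 5000.0]
ratios = [c / k for c, k in zip(charges, costs)]

median_ratio = statistics.median(ratios)
# "Extreme markup" = above the 75th-percentile cut point.
extreme_cut = statistics.quantiles(ratios, n=4)[2]
extreme = [r for r in ratios if r > extreme_cut]
print([round(r, 2) for r in ratios], round(median_ratio, 2))
```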

  2. Matched Spectral Filter Imager Project

    Data.gov (United States)

    National Aeronautics and Space Administration — OPTRA proposes the development of an imaging spectrometer for greenhouse gas and volcanic gas imaging based on matched spectral filtering and compressive imaging....

  3. Multispectral Panoramic Imaging System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — International Electronic Machines Corporation, a leader in the design of precision imaging systems, will develop an innovative multispectral, panoramic imaging...

  4. Definition of an XML markup language for clinical laboratory procedures and comparison with generic XML markup.

    Science.gov (United States)

    Saadawi, Gilan M; Harrison, James H

    2006-10-01

    Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.
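    The benefit claimed for laboratory-specific markup — documents that are simpler to traverse for searching and retrieval — can be illustrated with a hypothetical CLP-ML-style fragment. The tag names below are invented for illustration, not the actual 124-tag CLP-ML vocabulary:

```python
import xml.etree.ElementTree as ET

# Hypothetical CLP-ML-style procedure fragment; tag names are
# illustrative only. Structured sections make retrieval a tree query
# instead of a free-text search over a word-processor file.
doc = """
<procedure>
  <title>Serum Glucose</title>
  <section name="specimen">
    <step>Collect venous blood in a gray-top tube.</step>
  </section>
  <section name="analysis">
    <step>Centrifuge and load onto the analyzer.</step>
    <step>Review flags before releasing results.</step>
  </section>
</procedure>
"""

root = ET.fromstring(doc)
# Pull only the analysis steps, ignoring the rest of the document.
analysis_steps = [s.text for s in
                  root.findall("./section[@name='analysis']/step")]
print(root.findtext("title"), len(analysis_steps))
```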

  5. Medical image libraries: ICoS project

    Science.gov (United States)

    Honniball, John; Thomas, Peter

    1999-08-01

    The use of digital techniques for the production, manipulation and storage of images has resulted in the creation of digital image libraries. These libraries often store many thousands of images. While provision of storage media for such large amounts of data has been straightforward, provision of effective searching and retrieval tools has not. Medicine relies heavily on images as a diagnostic tool. The most obvious example is the x-ray, but many other image forms are in everyday use. Advances in technology are affecting the ways medical images are generated, stored and retrieved. The paper describes the work of the Image Coding and Segmentation to Support Variable Rate Transmission Channels and Variable Resolution Platforms (ICoS) research project currently under way in Bristol, UK. ICoS is a project of the University of the West of England and Hewlett-Packard Research Laboratories Europe. Funding is provided by the Engineering and Physical Sciences Research Council. The aim of the ICoS project is to demonstrate the practical application of computer networking to medical image libraries. Work at the University of the West of England concentrates on user interface and indexing issues. Metadata is used to organize the images, coded using the WWW Consortium standard Resource Description Framework. We are investigating the application of such standards to medical images, one outcome being to implement a metadata-based image library. This paper describes the ICoS project in detail and discusses both the metadata system and user interfaces in the context of medical applications.

  6. Improved hyperspectral imaging technologies Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Improved hyperspectral imaging technologies could enable lower-cost analysis for planetary science including atmospheric studies, mineralogical investigations, and...

  7. Complex light modulation for lensless image projection

    Institute of Scientific and Technical Information of China (English)

    M. Makowski; A. Kolodziejczyk; A. Siemion; I. Ducin; K. Kakarenko; M. Sypek; A. M. Siemion; J. Suszek; D. Wojnowski; Z. Jaroszewicz

    2011-01-01

    We present a lensless projection of color images based on computer-generated Fourier holograms. Amplitude and phase modulation of three primary-colored laser beams is performed by a matched pair of spatial light modulators. The main advantage of the complex light modulation is the lack of iterative phase retrieval techniques; a further advantage is the lack of speckles in the projected images. Experimental results are given and compared with the outcome of classical phase-only modulation.

  8. Chemical Markup, XML and the World-Wide Web. 8. Polymer Markup Language.

    Science.gov (United States)

    Adams, Nico; Winter, Jerry; Murray-Rust, Peter; Rzepa, Henry S

    2008-11-01

    Polymers are among the most important classes of materials but are only inadequately supported by modern informatics. The paper discusses the reasons why polymer informatics is considerably more challenging than small molecule informatics and develops a vision for the computer-aided design of polymers, based on modern semantic web technologies. The paper then discusses the development of Polymer Markup Language (PML). PML is an extensible language, designed to support the (structural) representation of polymers and polymer-related information. PML closely interoperates with Chemical Markup Language (CML) and overcomes a number of the previously identified challenges.

  9. Intended and unintended consequences of China's zero markup drug policy.

    Science.gov (United States)

    Yi, Hongmei; Miller, Grant; Zhang, Linxiu; Li, Shaoping; Rozelle, Scott

    2015-08-01

    Since economic liberalization in the late 1970s, China's health care providers have grown heavily reliant on revenue from drugs, which they both prescribe and sell. To curb abuse and to promote the availability, safety, and appropriate use of essential drugs, China introduced its national essential drug list in 2009 and implemented a zero markup policy designed to decouple provider compensation from drug prescription and sales. We collected and analyzed representative data from China's township health centers and their catchment-area populations both before and after the reform. We found large reductions in drug revenue, as intended by policy makers. However, we also found a doubling of inpatient care that appeared to be driven by supply, instead of demand. Thus, the reform had an important unintended consequence: China's health care providers have sought new, potentially inappropriate, forms of revenue. Project HOPE—The People-to-People Health Foundation, Inc.

  10. Semantic Markup for Literary Scholars: How Descriptive Markup Affects the Study and Teaching of Literature.

    Science.gov (United States)

    Campbell, D. Grant

    2002-01-01

    Describes a qualitative study which investigated the attitudes of literary scholars towards the features of semantic markup for primary texts in XML format. Suggests that layout is a vital part of the reading process which implies that the standardization of DTDs (Document Type Definitions) should extend to styling as well. (Author/LRW)

  11. PENDEKATAN MODEL MATEMATIS UNTUK MENENTUKAN PERSENTASE MARKUP HARGA JUAL PRODUK

    Directory of Open Access Journals (Sweden)

    Oviliani Yenty Yuliana

    2002-01-01

    Full Text Available The purpose of this research is to design a Mathematical model that can determine the selling volume, as an alternative way to set the markup percentage on product selling prices. The Mathematical model was designed with multiple-regression statistics. Selling volume is a function of the markup, market-condition, and substitute-condition variables. The designed Mathematical model passed the tests of: error assumptions, model accuracy, model validation, and multicollinearity. The Mathematical model was applied in an application program, with the expectation that the program can give: (1) an alternative for the user in deciding what markup percentage to set, (2) an illustration of the estimated gross profit that would be achieved for a selected markup percentage, (3) an illustration of the estimated percentage of units sold for a selected markup percentage, and (4) an illustration of the total net income before tax obtainable for a specific period. Keywords: Mathematical model, application program, selling volume, markup, gross profit.
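    The paper's multiple-regression setup — selling volume as a function of markup, market condition, and substitute condition — can be sketched with synthetic data:

```python
import numpy as np

# Fit volume = b0 + b1*markup + b2*market + b3*substitute by least
# squares. The data are synthetic, generated only to show the fit;
# the true coefficients used to generate them are [100, -80, 5, -3].
rng = np.random.default_rng(1)
n = 200
markup = rng.uniform(0.1, 0.6, n)
market = rng.normal(size=n)
substitute = rng.normal(size=n)
volume = 100 - 80 * markup + 5 * market - 3 * substitute \
         + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), markup, market, substitute])
coef, *_ = np.linalg.lstsq(X, volume, rcond=None)
print(np.round(coef, 1))   # should be close to [100, -80, 5, -3]
```

    With the fitted coefficients, a decision maker can tabulate predicted volume and gross profit over a grid of candidate markups, which is essentially what the described application program does.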

  12. Visualizing Scientific Data Using Keyhole Markup Language (KML)

    Science.gov (United States)

    Valcic, L.; Bailey, J. E.; Dehn, J.

    2006-12-01

    Over the last five years there has been a proliferation in the development of virtual globe programs. Programs such as Google Earth, NASA World Wind, SkylineGlobe, Geofusion and ArcGIS Explorer each have their own strengths and weaknesses, and whether a market will remain for all of these tools will be determined by user application. This market is currently led by Google Earth, the release of which on 28 June 2005 helped spark a revolution in virtual globe technology by bringing it into the public view and imagination. Many would argue that such a revolution was due, but it was certainly aided by the worldwide name recognition of Google and the creation of a user-friendly interface. Google Earth is an updated version of a program originally called Earth Viewer, which was developed by Keyhole Inc.; it was renamed after Google purchased Keyhole and its technology in 2004. In order to manage the geospatial data within these viewers, the developers created a new XML-based (Extensible Markup Language) grammar called Keyhole Markup Language (KML). Through manipulation of KML, scientists are finding increasingly creative and more visually appealing methods to display and manipulate their data. A measure of the success of Google Earth and KML is that other virtual globes now include various levels of KML compatibility. This presentation will display examples of how KML has been applied to scientific data. It will offer a forum for questions pertaining to how KML can be applied to a user's dataset. Interested parties are encouraged to bring examples of projects under development or being planned.
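    A minimal well-formed KML document looks like the following; a Placemark containing a Point is the basic unit for displaying a single data point in Google Earth or another KML-aware globe (the coordinates are illustrative):

```python
import xml.etree.ElementTree as ET

# A hand-built, minimal KML 2.2 document with one Placemark.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Sample station</name>
    <description>Illustrative data point</description>
    <Point>
      <coordinates>-152.25,61.30,0</coordinates>
    </Point>
  </Placemark>
</kml>
"""

# Parse it back to confirm it is well-formed XML in the KML namespace.
root = ET.fromstring(kml)
ns = {"k": "http://www.opengis.net/kml/2.2"}
print(root.findtext("k:Placemark/k:name", namespaces=ns))  # Sample station
```

    Scripts that emit KML in this way are the usual bridge between a scientific dataset and a virtual globe display.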

  13. Medical imaging projects meet at CERN

    CERN Multimedia

    CERN Bulletin

    2013-01-01

    ENTERVISION, the Research Training Network in 3D Digital Imaging for Cancer Radiation Therapy, successfully passed its mid-term review held at CERN on 11 January. This multidisciplinary project aims at qualifying experts in medical imaging techniques for improved hadron therapy.   ENTERVISION provides training in physics, medicine, electronics, informatics, radiobiology and engineering, as well as a wide range of soft skills, to 16 researchers of different backgrounds and nationalities. The network is funded by the European Commission within the Marie Curie Initial Training Network, and relies on the EU-funded research project ENVISION to provide a training platform for the Marie Curie researchers. The two projects hold their annual meetings jointly, allowing the young researchers to meet senior scientists and to have a full picture of the latest developments in the field beyond their individual research project. ENVISION and ENTERVISION are both co-ordinated by CERN, and the Laboratory hosts t...

  14. Balancing medicine prices and business sustainability: analyses of pharmacy costs, revenues and profit shed light on retail medicine mark-ups in rural Kyrgyzstan

    Directory of Open Access Journals (Sweden)

    Maddix Jason

    2010-07-01

    Full Text Available Abstract Background Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. Methods We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007. Results Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Conclusion Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals

  15. Balancing medicine prices and business sustainability: analyses of pharmacy costs, revenues and profit shed light on retail medicine mark-ups in rural Kyrgyzstan.

    Science.gov (United States)

    Waning, Brenda; Maddix, Jason; Soucy, Lyne

    2010-07-13

    Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals. Health systems researchers must document the positive and negative …

  16. Projection x-space magnetic particle imaging.

    Science.gov (United States)

    Goodwill, Patrick W; Konkle, Justin J; Zheng, Bo; Saritas, Emine U; Conolly, Steven M

    2012-05-01

    Projection magnetic particle imaging (MPI) can improve imaging speed by over 100-fold over traditional 3-D MPI. In this work, we derive the 2-D x-space signal equation and 2-D image equation, and introduce the concepts of signal fading and resolution loss for a projection MPI imager. We then describe the design and construction of an x-space projection MPI scanner with a field gradient of 2.35 T/m across a 10 cm magnet free bore. The system has an expected resolution of 3.5 × 8.0 mm using Resovist tracer, and an experimental resolution of 3.8 × 8.4 mm. The system images 2.5 cm × 5.0 cm partial fields of view (FOVs) at 10 frames/s, and acquires a full field of view of 10 cm × 5.0 cm in 4 s. We conclude by imaging a resolution phantom, a complex "Cal" phantom, and mice injected with Resovist tracer, and experimentally confirm the theoretically predicted x-space spatial resolution.

  17. Genomic Sequence Variation Markup Language (GSVML).

    Science.gov (United States)

    Nakaya, Jun; Kimura, Michio; Hiroi, Kaei; Ido, Keisuke; Yang, Woosung; Tanaka, Hiroshi

    2010-02-01

    With the aim of making good use of internationally accumulated genomic sequence variation data, which is increasing rapidly due to the explosive amount of genomic research at present, the development of an interoperable data exchange format and its international standardization are necessary. Genomic Sequence Variation Markup Language (GSVML) will focus on genomic sequence variation data and human health applications, such as gene-based medicine or pharmacogenomics. We developed GSVML through eight steps, based on case analysis and domain investigations. By limiting the design scope to human health applications and genomic sequence variation, we attempted to eliminate ambiguity and to ensure practicability. We intended to satisfy the requirements derived from the use case analysis of human-based clinical genomic applications. Based on database investigations, we attempted to minimize the redundancy of the data format while maximizing the range of data covered. We also attempted to ensure communication and interface ability with other markup languages, for exchange of omics data among various omics researchers or facilities. The interface ability with developing clinical standards, such as the Health Level Seven Genotype Information model, was analyzed. We developed the human health-oriented GSVML comprising variation data, direct annotation, and indirect annotation categories; the variation data category is required, while the direct and indirect annotation categories are optional. The annotation categories contain omics and clinical information, and have internal relationships. For the design, we examined six cases against three criteria for human health applications, and 15 data elements against three criteria for data formats for genomic sequence variation data exchange. The data formats of five international SNP databases and six markup languages, and the interface ability to the Health Level Seven Genotype Model in terms of 317 items, were investigated. GSVML was developed as …
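    The three-category structure described — required variation data plus optional direct and indirect annotation — might look like the following; the element names are illustrative assumptions, not the actual GSVML schema, and the coordinates are invented:

```python
import xml.etree.ElementTree as ET

# Hypothetical GSVML-style record reflecting the categories in the
# abstract: a required variationData block and optional annotation
# blocks. Element names and values are illustrative only.
record = """
<gsvml>
  <variationData>
    <chromosome>7</chromosome>
    <position>117559590</position>
    <reference>A</reference>
    <observed>G</observed>
  </variationData>
  <directAnnotation>
    <gene>CFTR</gene>
  </directAnnotation>
</gsvml>
"""

root = ET.fromstring(record)
print(root.findtext("variationData/chromosome"),
      root.findtext("directAnnotation/gene"))
```

    Because the annotation blocks are optional, a receiving system can validate the required variation data alone and treat the rest as enrichment, which matches the interoperability goal the abstract describes.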

  18. Fast contrast enhanced imaging with projection reconstruction

    Science.gov (United States)

    Peters, Dana Ceceilia

    The use of contrast agents has led to great advances in magnetic resonance angiography (MRA). Here we present the first application of projection reconstruction to contrast-enhanced MRA. In this research the limited-angle projection reconstruction (PR) trajectory is implemented to acquire higher-resolution images per unit time than with conventional Fourier transform (FT) imaging. It is well known that as the FOV is reduced in conventional spin-warp imaging, higher resolution per unit time can be obtained, but aliasing may appear as a replication of outside material within the FOV. The limited-angle PR acquisition also produces aliasing artifacts. This method produced artifacts which were unacceptable in X-ray CT but which appear to be tolerable in MR angiography. Resolution throughout the FOV is determined by the projection readout resolution and not by the number of projections. As the number of projections is reduced, the resolution is unchanged, but low-intensity artifacts appear. We present results of using limited-angle PR in phantoms and in contrast-enhanced angiograms of humans.

  19. HGML: a hypertext guideline markup language.

    Science.gov (United States)

    Hagerty, C G; Pickens, D; Kulikowski, C; Sonnenberg, F

    2000-01-01

    Existing text-based clinical practice guidelines can be difficult to put into practice. While a growing number of such documents have gained acceptance in the medical community and contain a wealth of valuable information, the time required to digest them is substantial. Yet the expressive power, subtlety and flexibility of natural language pose challenges when designing computer tools that will help in their application. At the same time, formal computer languages typically lack such expressiveness and the effort required to translate existing documents into these languages may be costly. We propose a method based on the mark-up concept for converting text-based clinical guidelines into a machine-operable form. This allows existing guidelines to be manipulated by machine, and viewed in different formats at various levels of detail according to the needs of the practitioner, while preserving their originally published form.

  20. Descriptive markup languages and the development of digital humanities

    Directory of Open Access Journals (Sweden)

    Boris Bosančić

    2012-11-01

    Full Text Available The paper discusses the role of descriptive markup languages in the development of digital humanities, a new research discipline within the social sciences and humanities that focuses on the use of computers in research. A chronological review of the development of digital humanities, and then of descriptive markup languages, is presented through several developmental stages. It is shown that the development of digital humanities has been inseparable from that of markup languages since the mid-1980s and the appearance of SGML, the markup language that was the foundation of TEI, a key standard for the encoding and exchange of humanities texts in the digital environment. Special attention is devoted to the development of the Text Encoding Initiative (TEI), the key organization behind this standard, from both organizational and markup perspectives. To date, the TEI standard has been published in five versions, and during the 2000s SGML was replaced by the XML markup language. Key words: markup languages, digital humanities, text encoding, TEI, SGML, XML
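
    The TEI encoding that this record describes can be illustrated with a minimal document skeleton. The content below is invented for illustration, but the element names and the required header structure (fileDesc with titleStmt, publicationStmt, sourceDesc) follow TEI P5:

```python
import xml.etree.ElementTree as ET

# Minimal TEI P5 skeleton: a header with the required fileDesc children,
# followed by the encoded text itself.
TEI_DOC = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>Sample encoded text</title></titleStmt>
      <publicationStmt><p>Unpublished demonstration</p></publicationStmt>
      <sourceDesc><p>Born digital</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text><body><p>Hello, digital humanities.</p></body></text>
</TEI>"""

# TEI lives in its own XML namespace, so queries must map a prefix to it.
NS = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(TEI_DOC)
title = root.find(".//tei:title", NS).text
```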

  1. Extremely simple holographic projection of color images

    Science.gov (United States)

    Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej

    2012-03-01

    A very simple scheme of holographic projection is presented, with experimental results showing good-quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection, compared to plane-wave illumination. Optical fibers are used for light guidance in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at an optimized position against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible, and the speckle field is effectively suppressed with phase optimization and time-averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; the lack of a lens; a single LCoS (Liquid Crystal on Silicon) modulator; strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, fingerprints, etc.; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (Graphics Processing Unit).

  2. Patient information exchange guideline MERIT-9 using medical markup language MML.

    Science.gov (United States)

    Kimura, M; Ohe, K; Yoshihara, H; Ando, Y; Kawamata, F; Hishiki, T; Ohashi, K; Sakusabe, T; Tani, S; Akiyama, M

    1998-01-01

    To realize clinical data exchange between healthcare providers, there must be standards in many layers. Terms and codes should be standardized, the syntax wrapping the data must be mutually parsable, and the transfer protocol or exchange media must be agreed upon. Among the many standards for syntax, HL7 and DICOM are the most successful. However, not everything can be handled by HL7 alone. DICOM is good for radiology images, but other clinical images are already handled by "lighter" data formats like JPEG and TIFF. So it is not realistic to use only one standard for every area of clinical information. For the description of medical records, especially narrative information, we created an SGML DTD for medical information called MML (Medical Markup Language). It is already implemented at more than 10 healthcare providers in Japan. As it is a hierarchical description of information, it is easily used as a basis for object request brokering. It is, again, not realistic to use MML alone for clinical information at various levels of detail. Therefore, we proposed a guideline for the use of available medical standards to facilitate clinical information exchange between healthcare providers, called MERIT-9 (MEdical Records, Images, Texts - Information eXchange). A typical use is HL7 files and DICOM files referenced as external entities from an MML file in a patient record. Both MML and MERIT-9 are research projects of the Japanese Ministry of Health and Welfare, and their purpose is to facilitate clinical data exchange. They are beginning to be used in technical specifications for new hospital information systems in Japan.

  3. Web Based Distributed Coastal Image Analysis System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops Web based distributed image analysis system processing the Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  4. Information and image integration: project spectrum

    Science.gov (United States)

    Blaine, G. James; Jost, R. Gilbert; Martin, Lori; Weiss, David A.; Lehmann, Ron; Fritz, Kevin

    1998-07-01

    The BJC Health System (BJC) and the Washington University School of Medicine (WUSM) formed a technology alliance with industry collaborators to develop and implement an integrated, advanced clinical information system. The industry collaborators include IBM, Kodak, SBC and Motorola. The activity, called Project Spectrum, provides an integrated clinical repository for the multiple hospital facilities of the BJC. The BJC System consists of 12 acute care hospitals serving over one million patients in Missouri and Illinois. An interface engine manages transactions from each of the hospital information systems, lab systems and radiology information systems. Data is normalized to provide a consistent view for the primary care physician. Access to the clinical repository is supported by web-based server/browser technology which delivers patient data to the physician's desktop. An HL7 based messaging system coordinates the acquisition and management of radiological image data and sends image keys to the clinical data repository. Access to the clinical chart browser currently provides radiology reports, laboratory data, vital signs and transcribed medical reports. A chart metaphor provides tabs for the selection of the clinical record for review. Activation of the radiology tab facilitates a standardized view of radiology reports and provides an icon used to initiate retrieval of available radiology images. The selection of the image icon spawns an image browser plug-in and utilizes the image key from the clinical repository to access the image server for the requested image data. The Spectrum system is collecting clinical data from five hospital systems and imaging data from two hospitals. Domain specific radiology imaging systems support the acquisition and primary interpretation of radiology exams. The spectrum clinical workstations are deployed to over 200 sites utilizing local area networks and ISDN connectivity.

  5. Invisibility cloak with image projection capability

    Science.gov (United States)

    Banerjee, Debasish; Ji, Chengang; Iizuka, Hideo

    2016-12-01

    Investigations of invisibility cloaks have been led by rigorous theories and such cloak structures, in general, require extreme material parameters. Consequently, it is challenging to realize them, particularly in the full visible region. Due to the insensitivity of human eyes to the polarization and phase of light, cloaking a large object in the full visible region has been recently realized by a simplified theory. Here, we experimentally demonstrate a device concept where a large object can be concealed in a cloak structure and at the same time any images can be projected through it by utilizing a distinctively different approach; the cloaking via one polarization and the image projection via the other orthogonal polarization. Our device structure consists of commercially available optical components such as polarizers and mirrors, and therefore, provides a significant further step towards practical application scenarios such as transparent devices and see-through displays.

  6. CytometryML: a markup language for analytical cytology

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.; Leif, Suzanne B.

    2003-06-01

    Cytometry Markup Language (CytometryML) is a proposed new analytical cytology data standard. CytometryML is a set of XML schemas for encoding both flow cytometry and digital microscopy text-based data types. The CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. These schemas provide representations for the keywords in FCS 3.0 and will soon include DICOM microscopic image data. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. A preliminary version of a list-mode binary data type, which does not presently exist in DICOM, has been designed. This binary type is required to enhance the storage and transmission of flow cytometry and digital microscopy data. Index files based on Waveform indices will be used to rapidly locate the cells present in individual subsets. DICOM has the advantage of employing standard file types, TIFF and JPEG, for digital microscopy. Using an XML-schema-based representation means that standard commercial software packages such as Excel and MathCad can be used to analyze, display, and store analytical cytometry data. Furthermore, by providing one standard for both DICOM data and analytical cytology data, it eliminates the need to create and maintain special-purpose interfaces for analytical cytology data, thereby integrating the data into the larger DICOM and other clinical communities. A draft version of CytometryML is available at www.newportinstruments.com.

  7. A quality assessment tool for markup-based clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a tool for the quality assessment of procedural and declarative knowledge, developed for evaluating the specification of markup-based clinical guidelines (GLs). Using this graphical tool, an expert physician and a knowledge engineer collaborate to score each of the knowledge roles of the mark-ups on a pre-defined scoring scale, comparing it to a gold standard. The tool enables scoring the mark-ups simultaneously by different users at different locations.
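
    Role-by-role scoring against a gold standard, as described above, can be sketched as follows. The scoring scale, role names, and values here are hypothetical illustrations, not the tool's actual scheme:

```python
SCALE = {"exact": 2, "partial": 1, "missing": 0}  # illustrative scoring scale

def score_role(marked, gold):
    """Score one knowledge role of a mark-up against the gold standard."""
    if marked is None:
        return SCALE["missing"]
    if marked == gold:
        return SCALE["exact"]
    return SCALE["partial"]  # present but differs from the gold standard

def assess_markup(markup, gold_standard):
    """Per-role scores and an overall percentage for one mark-up."""
    scores = {role: score_role(markup.get(role), gold)
              for role, gold in gold_standard.items()}
    total = sum(scores.values())
    return scores, 100.0 * total / (SCALE["exact"] * len(gold_standard))
```

    For example, a mark-up that captures one role exactly and one only partially scores 75 percent under this scale.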

  8. STMML. A markup language for scientific, technical and medical publishing

    Directory of Open Access Journals (Sweden)

    Peter Murray-Rust

    2006-01-01

    Full Text Available STMML is an XML-based markup language covering many generic aspects of scientific information. It has been developed as a re-usable core for more specific markup languages. It supports data structures, data types, metadata, scientific units and some basic components of scientific narrative. The central means of adding semantic information is through dictionaries. The specification is through an XML Schema which can be used to validate STMML documents or fragments. Many examples of the language are given.
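
    As a rough illustration of the dictionary-based semantics described above, the fragment below shows an STMML-style scalar whose meaning and units are supplied by dictionary references. The attribute values and prefixes are invented for illustration, not taken from the official schema:

```python
import xml.etree.ElementTree as ET

# An STMML-style scalar: dictRef points into a dictionary that defines the
# concept, and the units are likewise resolved by reference.
DOC = """<scalar dictRef="phys:meltingPoint" dataType="xsd:float" units="units:celsius">
  1064.18
</scalar>"""

elem = ET.fromstring(DOC)
value = float(elem.text.strip())  # the typed datum carried by the element
units = elem.get("units")
```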

  9. Fast Image Correspondence with Global Structure Projection

    Institute of Scientific and Technical Information of China (English)

    Qing-Liang Lin; Bin Sheng; Yang Shen; Zhi-Feng Xie; Zhi-Hua Chen; Li-Zhuang Ma

    2012-01-01

    This paper presents a method for recognizing images with flat objects based on global keypoint structure correspondence. The technique works in two steps: reference keypoint selection and structure projection. The use of a global keypoint structure is an extension of the orderless bag-of-features image representation, which the proposed matching technique exploits for computational efficiency. Our method excels on datasets of images containing "flat objects" such as CD covers, books, and newspapers. The efficiency and accuracy of the proposed method have been tested on a database of nature pictures containing flat objects as well as other kinds of objects. The results show that our method works well in both cases.

  10. PIML: the Pathogen Information Markup Language.

    Science.gov (United States)

    He, Yongqun; Vines, Richard R; Wattam, Alice R; Abramochkin, Georgiy V; Dickerman, Allan W; Eckart, J Dana; Sobral, Bruno W S

    2005-01-01

    A vast amount of information about human, animal and plant pathogens has been acquired, stored and displayed in varied formats through different resources, both electronically and otherwise. However, there is no community standard format for organizing this information or agreement on machine-readable format(s) for data exchange, thereby hampering interoperation efforts across information systems harboring such infectious disease data. The Pathogen Information Markup Language (PIML) is a free, open, XML-based format for representing pathogen information. XSLT-based visual presentations of valid PIML documents were developed and can be accessed through the PathInfo website or as part of the interoperable web services federation known as ToolBus/PathPort. Currently, detailed PIML documents are available for 21 pathogens deemed of high priority with regard to public health and national biological defense. A dynamic query system allows simple queries as well as comparisons among these pathogens. Continuing efforts are being made to bring other groups to support PIML and to develop more PIML documents. All PIML-related information is accessible from http://www.vbi.vt.edu/pathport/pathinfo/

  11. AllerML: markup language for allergens.

    Science.gov (United States)

    Ivanciuc, Ovidiu; Gendel, Steven M; Power, Trevor D; Schein, Catherine H; Braun, Werner

    2011-06-01

    Many concerns have been raised about the potential allergenicity of novel recombinant proteins introduced into food crops. Guidelines proposed by WHO/FAO and EFSA include the use of bioinformatics screening to assess the risk of potential allergenicity or cross-reactivity of all proteins introduced, for example, to improve nutritional value or promote crop resistance. However, there are no universally accepted standards for encoding data on the biology of allergens to facilitate using data from multiple databases in this screening. Therefore, we developed AllerML, a markup language for allergens, to assist in the automated exchange of information between databases and in the integration of the bioinformatics tools that are used to investigate allergenicity and cross-reactivity. As proof of concept, AllerML was implemented using the Structural Database of Allergenic Proteins (SDAP; http://fermi.utmb.edu/SDAP/). General implementation of AllerML will promote the automatic flow of validated data that will aid in allergy research and regulatory analysis. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. On the Power of Fuzzy Markup Language

    CERN Document Server

    Loia, Vincenzo; Lee, Chang-Shing; Wang, Mei-Hui

    2013-01-01

    One of the most successful methodologies that arose from the worldwide diffusion of Fuzzy Logic is Fuzzy Control. After the first attempts dating to the seventies, this methodology has been widely exploited for controlling many industrial components and systems. At the same time, and quite independently of Fuzzy Logic or Fuzzy Control, the birth of the Web has impacted almost all aspects of the computing discipline. The evolution of the Web, Web 2.0 and Web 3.0 has made scenarios of ubiquitous computing much more feasible; consequently, information technology has been thoroughly integrated into everyday objects and activities. What happens when Fuzzy Logic meets Web technology? Interesting results might come out, as you will discover in this book. Fuzzy Markup Language is an offspring of this synergistic view, in which some technological issues of the Web are re-interpreted taking into account the transparent notion of Fuzzy Control, as discussed here. The concept of a Fuzzy Control that is conceived and modeled in terms...

  13. Image quality assessment metrics by using directional projection

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Objective image quality measurement, a fundamental and challenging problem in image processing, evaluates image quality automatically and consistently with human perception. On the assumption that any image distortion can be modeled as the difference between the directional projection-based maps of the reference and distorted images, we propose a new objective quality assessment method based on directional projection for the full-reference model. Experimental results show that the proposed metrics are well consistent with subjective quality scores.
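
    The core idea, comparing directional projections of the reference and distorted images, can be sketched in a few lines. This is a minimal illustration using row and column sums; the paper's actual metric may differ:

```python
def directional_projections(img):
    """Horizontal and vertical projections (row sums and column sums)
    of a 2-D grayscale image given as a list of rows."""
    rows = [sum(r) for r in img]
    cols = [sum(c) for c in zip(*img)]
    return rows, cols

def projection_distance(ref, dist):
    """Mean absolute difference between the directional projections of a
    reference image and a distorted image; 0 means identical projections."""
    r1, c1 = directional_projections(ref)
    r2, c2 = directional_projections(dist)
    diffs = [abs(a - b) for a, b in zip(r1 + c1, r2 + c2)]
    return sum(diffs) / len(diffs)
```

    Identical images yield a distance of 0; any distortion that changes a row or column sum yields a positive score.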

  14. Interexaminer variation of minutia markup on latent fingerprints.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2016-07-01

    Latent print examiners often differ in the number of minutiae they mark during analysis of a latent, and also during comparison of a latent with an exemplar. Differences in minutia counts understate interexaminer variability: examiners' markups may have similar minutia counts but differ greatly in which specific minutiae were marked. We assessed variability in minutia markup among 170 volunteer latent print examiners. Each provided detailed markup documenting their examinations of 22 latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. An average of 12 examiners marked each latent. The primary factors associated with minutia reproducibility were clarity, which regions of the prints examiners chose to mark, and agreement on value or comparison determinations. In clear areas (where the examiner was "certain of the location, presence, and absence of all minutiae"), median reproducibility was 82%; in unclear areas, median reproducibility was 46%. Differing interpretations regarding which regions should be marked (e.g., when there is ambiguity in the continuity of a print) contributed to variability in minutia markup: especially in unclear areas, marked minutiae were often far from the nearest minutia marked by a majority of examiners. Low reproducibility was also associated with differences in value or comparison determinations. Lack of standardization in minutia markup and unfamiliarity with test procedures presumably contribute to the variability we observed. We have identified factors accounting for interexaminer variability; implementing standards for detailed markup as part of documentation and focusing future training efforts on these factors may help to facilitate transparency and reduce subjectivity in the examination process. Published by Elsevier Ireland Ltd.

  15. Tiny Devices Project Sharp, Colorful Images

    Science.gov (United States)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  16. Estimation of service sector mark-ups determined by structural reform indicators

    OpenAIRE

    Anna Thum-Thysen; Erik Canton

    2015-01-01

    This paper analyses the impact of regulation on product sector mark-ups across the EU and confirms that less strict regulation tends to foster competition and reduce mark-up rates. The results also show that mark-ups in most EU countries and sectors have been declining over the last 15 years as a result of competition-friendly reforms. The paper also casts light on which areas of regulation are most important for mark-ups in individual sectors.

  17. Image processing methods to obtain symmetrical distribution from projection image.

    Science.gov (United States)

    Asano, H; Takenaka, N; Fujii, T; Nakamatsu, E; Tagami, Y; Takeshima, K

    2004-10-01

    Flow visualization and measurement of the cross-sectional liquid distribution is very effective for clarifying the effects of obstacles in a conduit on the heat transfer and flow characteristics of gas-liquid two-phase flow. In this study, two methods for obtaining the cross-sectional distribution of void fraction are applied to vertical upward air-water two-phase flow. These methods need a projection image from only one direction. Radial distributions of void fraction in a circular tube and in a circular-tube annulus with a spacer were calculated by the Abel transform, based on the assumption of axial symmetry. On the other hand, cross-sectional distributions of void fraction in a circular tube with a wire coil, whose conduit configuration rotates periodically about the tube's central axis, were measured by a CT method based on the assumption that the relative distribution of the liquid phase against the wire is maintained along the flow direction.
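
    Abel-type reconstruction of an axisymmetric distribution from a single projection, as used above, can be sketched with a discrete onion-peeling scheme, a standard discretization of the inverse Abel transform (not necessarily the authors' implementation):

```python
from math import sqrt

def chord_lengths(n):
    """L[i][j]: path length of the chord at offset i + 0.5 through annular
    ring j (spanning radius j..j+1) of an axisymmetric object with n rings."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        y = i + 0.5
        for j in range(n):
            r_in, r_out = j, j + 1
            if y < r_out:
                outer = sqrt(r_out ** 2 - y ** 2)
                inner = sqrt(r_in ** 2 - y ** 2) if y < r_in else 0.0
                L[i][j] = 2.0 * (outer - inner)
    return L

def onion_peel(projection):
    """Recover per-ring values f[j] from chord-integrated projection data by
    solving the triangular system from the outermost ring inward."""
    n = len(projection)
    L = chord_lengths(n)
    f = [0.0] * n
    for i in range(n - 1, -1, -1):
        known = sum(L[i][j] * f[j] for j in range(i + 1, n))
        f[i] = (projection[i] - known) / L[i][i]
    return f
```

    Forward-projecting a known radial profile and peeling it back recovers the profile to machine precision, which is a convenient self-check of the discretization.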

  18. SGML-Based Markup for Literary Texts: Two Problems and Some Solutions.

    Science.gov (United States)

    Barnard, David; And Others

    1988-01-01

    Identifies the Standard Generalized Markup Language (SGML) as the best basis for a markup standard for encoding literary texts. Outlines solutions to problems using SGML and discusses the problem of maintaining multiple views of a document. Examines several ways of reducing the burden of markups. (GEA)

  19. A New Extended Projection-Based Image Registration Algorithm

    Institute of Scientific and Technical Information of China (English)

    CHEN Huafu; YAO Dezhong

    2005-01-01

    In the presence of fixed-pattern noise, the projection-based image registration technique is effective, but its implementation is confined to translation registration. Presented in this paper is an extended projection-based image registration technique in which, by rearranging the projections of images, registration is implemented in two steps, rotation and translation, to accomplish two-dimensional (2-D) image registration. This approach transforms the general 2-D optimization procedure into a 1-D projection optimization, thus considerably reducing the amount of computation. The validity of the new method is verified by simulation experiments.
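
    The translation step of projection-based registration can be sketched as a 1-D search over shifts of the row and column projections. This is a simplified illustration with circular shifts; the paper's extension additionally handles rotation:

```python
def projections(img):
    """Row-sum and column-sum projections of a 2-D image (list of rows)."""
    rows = [sum(r) for r in img]
    cols = [sum(c) for c in zip(*img)]
    return rows, cols

def best_shift(p, q):
    """Integer circular shift s minimizing the squared difference between
    p[i] and q[i + s]; a 1-D search instead of a 2-D one."""
    n = len(p)
    def sse(s):
        return sum((p[i] - q[(i + s) % n]) ** 2 for i in range(n))
    return min(range(-(n // 2), n // 2 + 1), key=sse)

def register_translation(ref, moved):
    """Estimate the (row, col) shift by which `moved` is displaced
    relative to `ref`, using only the 1-D projections."""
    r1, c1 = projections(ref)
    r2, c2 = projections(moved)
    return best_shift(r1, r2), best_shift(c1, c2)
```

    Because each axis is handled by a separate 1-D correlation, the cost grows with the image side length rather than its area, which is the computational saving the abstract describes.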

  20. Systems biology markup language: Level 2 and beyond.

    Science.gov (United States)

    Finney, A; Hucka, M

    2003-12-01

    The SBML (systems biology markup language) is a standard exchange format for computational models of biochemical networks. We continue developing SBML collaboratively with the modelling community to meet their evolving needs. The recently introduced SBML Level 2 includes several enhancements to the original Level 1, and features under development for SBML Level 3 include model composition, multistate chemical species and diagrams.
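
    A minimal SBML Level 2 document has the following shape. The fragment is trimmed for illustration; real models also declare compartments, reactions, and units:

```python
import xml.etree.ElementTree as ET

# A bare-bones SBML Level 2 skeleton: a model with two species.
SBML_DOC = """<sbml xmlns="http://www.sbml.org/sbml/level2" level="2" version="1">
  <model id="simple_decay">
    <listOfSpecies>
      <species id="S1" initialConcentration="10"/>
      <species id="S2" initialConcentration="0"/>
    </listOfSpecies>
  </model>
</sbml>"""

# SBML elements live in a level-specific XML namespace.
NS = {"sbml": "http://www.sbml.org/sbml/level2"}
root = ET.fromstring(SBML_DOC)
species = [s.get("id") for s in root.findall(".//sbml:species", NS)]
```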

  1. The WANDAML Markup Language for Digital Document Annotation

    NARCIS (Netherlands)

    Franke, K.; Guyon, I.; Schomaker, L.; Vuurpijl, L.

    2004-01-01

    WANDAML is an XML-based markup language for the annotation and filter journaling of digital documents. It addresses in particular the needs of forensic handwriting data examination, by allowing experts to enter information about writer, material (pen, paper), script and content, and to record chains

  2. The Next Step towards a Function Markup Language

    NARCIS (Netherlands)

    Heylen, Dirk K.J.; Kopp, S.; Marsella, S.C.; Pelachaud, C.; Vilhjálmson, H.; Prendinger, H.; Lester, J.; Ishizuka, M.

    2008-01-01

    In order to enable collaboration and exchange of modules for generating multimodal communicative behaviours of robots and virtual agents, the SAIBA initiative envisions the definition of two representation languages. One of these is the Function Markup Language (FML). This language specifies the

  4. Improving Interoperability by Incorporating UnitsML Into Markup Languages.

    Science.gov (United States)

    Celebi, Ismet; Dragoset, Robert A; Olsen, Karen J; Schaefer, Reinhold; Kramer, Gary W

    2010-01-01

    Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this "scientific meta-data" and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language or AnIML-a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-useable tags. Recording the units associated with the analytical data and metadata is an essential issue for any data representation scheme that must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units that we call UnitsML. This presentation will describe how UnitsML is used and how it is being incorporated into AnIML.

  5. Gunturk-Altunbasak-Mersereau Alternating Projections Image Demosaicking

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    Full Text Available The problem of image demosaicking (or demosaicing) is that an image has been captured through a color filter array (CFA), and the goal is to estimate complete color information at every pixel. This IPOL article describes the image demosaicking method proposed by Gunturk, Altunbasak, and Mersereau in "Color Plane Interpolation Using Alternating Projections." Given an initial demosaicking, the method improves the result by alternately applying two different projections. One projection copies the green channel's wavelet detail coefficients to the red and blue channels, while the other constrains the solution to agree with the observed data.

  6. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation; moreover, because normalization is applied, it still matches correctly when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while maintaining matching accuracy.
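
    The normalization property mentioned above, correct matching under proportional brightness changes, follows from using normalized correlation of the 1-D projections, as in this sketch (an illustration of the general idea, not the paper's exact algorithm):

```python
def projection(img):
    """Column-sum projection: collapse a 2-D grayscale image to 1-D."""
    return [sum(c) for c in zip(*img)]

def ncc(a, b):
    """Normalized cross-correlation of two equal-length 1-D signals,
    in [-1, 1]; invariant to proportional amplitude scaling."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0

def match_score(template, candidate):
    """Similarity of two images via correlation of their 1-D projections."""
    return ncc(projection(template), projection(candidate))
```

    Doubling every pixel of the candidate scales its projection proportionally, so the normalized score stays at 1.0, while a genuinely different image scores low or negative.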

  7. Precise and Efficient Retrieval of Captioned Images: The MARIE Project.

    Science.gov (United States)

    Rowe, Neil C.

    1999-01-01

    The MARIE project explores knowledge-based information retrieval of captioned images of the kind found in picture libraries and on the Internet. MARIE's five-part approach exploits the idea that images are easier to understand with context, especially descriptive text near them, but it also does image analysis. Experiments show MARIE prototypes…

  8. Extreme Markup: The Fifty US Hospitals With The Highest Charge-To-Cost Ratios.

    Science.gov (United States)

    Bai, Ge; Anderson, Gerard F

    2015-06-01

    Using Medicare cost reports, we examined the fifty US hospitals with the highest charge-to-cost ratios in 2012. These hospitals have markups (ratios of charges over Medicare-allowable costs) of approximately ten, meaning they charge roughly ten times their Medicare-allowable costs, compared to a national average of 3.4 and a mode of 2.4. Analysis of the fifty hospitals showed that forty-nine are for profit (98 percent), forty-six are owned by for-profit hospital systems (92 percent), and twenty (40 percent) operate in Florida. One for-profit hospital system owns half of these fifty hospitals. While most public and private health insurers do not use hospital charges to set their payment rates, uninsured patients are commonly asked to pay the full charges, and out-of-network patients and casualty and workers' compensation insurers are often expected to pay a large portion of the full charges. Because it is difficult for patients to compare prices, market forces fail to constrain hospital charges. Federal and state governments may want to consider limitations on the charge-to-cost ratio, some form of all-payer rate setting, or mandated price disclosure to regulate hospital markups. Project HOPE—The People-to-People Health Foundation, Inc.
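
    The charge-to-cost ratio itself is simple arithmetic; a small sketch (with invented figures) shows how hospitals exceeding a markup threshold could be flagged:

```python
def charge_to_cost_ratio(charges, medicare_allowable_cost):
    """Hospital markup: total charges divided by Medicare-allowable costs."""
    return charges / medicare_allowable_cost

def extreme_markup_hospitals(hospitals, threshold=10.0):
    """Names of hospitals whose charge-to-cost ratio meets the threshold.
    `hospitals` maps name -> (charges, Medicare-allowable cost)."""
    return [name for name, (charges, cost) in hospitals.items()
            if charge_to_cost_ratio(charges, cost) >= threshold]
```

    For example, a hospital reporting $34M in charges against $10M in allowable costs sits at the study's national average ratio of 3.4, while one charging $100M against $10M would be flagged at the ten-times threshold.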

  9. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.
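    As a rough illustration of the kind of XML encoding the specification defines, the fragment below builds a minimal SBML Level 2 Version 5 skeleton with Python's standard library. The model content (compartment `c`, species `S1`) is invented for the example; the full set of required attributes and validation rules comes from the specification itself.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: a minimal SBML Level 2 Version 5 skeleton.
# Element names follow the SBML specification; the model content here
# (species "S1", compartment "c") is a made-up example.
NS = "http://www.sbml.org/sbml/level2/version5"
ET.register_namespace("", NS)

sbml = ET.Element(f"{{{NS}}}sbml", {"level": "2", "version": "5"})
model = ET.SubElement(sbml, f"{{{NS}}}model", {"id": "example_model"})
comps = ET.SubElement(model, f"{{{NS}}}listOfCompartments")
ET.SubElement(comps, f"{{{NS}}}compartment", {"id": "c", "size": "1"})
species = ET.SubElement(model, f"{{{NS}}}listOfSpecies")
ET.SubElement(species, f"{{{NS}}}species",
              {"id": "S1", "compartment": "c", "initialAmount": "10"})

xml_text = ET.tostring(sbml, encoding="unicode")

# Round-trip: any SBML-aware tool should at least see a well-formed document.
root = ET.fromstring(xml_text)
print(root.tag)                      # namespaced 'sbml' root element
print(root.attrib["level"], root.attrib["version"])
```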

  10. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.

  11. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T.; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M.; Le Novère, Nicolas; Myers, Chris J.; Olivier, Brett G.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Waltemath, Dagmar; Wilkinson, Darren J.

    2017-01-01

    Summary Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528569

  12. Semantic markup of nouns and adjectives for the Electronic corpus of texts in Tuvan language

    Directory of Open Access Journals (Sweden)

    Bajlak Ch. Oorzhak

    2016-12-01

    The article reports progress on the semantic markup of the Electronic corpus of texts in Tuvan language (ECTTL), the next stage of adding Tuvan texts to the database and marking up the corpus. ECTTL is a collaborative project by researchers from Tuvan State University (Research and Education Center of Turkic Studies and Department of Information Technologies). The semantic markup of Tuvan lexis will serve as a search-and-reference engine that helps users find text snippets in ECTTL containing words with desired meanings. The first stage of this process is setting up databases of basic lexemes of the Tuvan language. All meaningful lexemes were classified into the following semantic groups: humans, animals, objects, natural objects and phenomena, and abstract concepts. All Tuvan object nouns, as well as both descriptive and relative adjectives, were assigned to one of these lexico-semantic classes. Each class, sub-class, and descriptor is tagged in Tuvan, Russian, and English; these tags, in turn, will help automate searching. The databases of meaningful lexemes of the Tuvan language will also outline their lexical combinations. The automated system will contain information on semantic combinations of adjectives with nouns, adverbs with verbs, and nouns with verbs, as well as on combinations that are semantically incompatible.
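    The lexeme databases described above might look roughly like the following sketch: each lexeme carries a semantic class tagged in several languages, and search filters on the tag. The class labels, example lemmas, and trilingual tags are illustrative assumptions, not ECTTL data.

```python
# Hypothetical sketch of the kind of lexeme database the abstract describes:
# each lexeme carries a semantic class tagged in Tuvan, Russian and English.
# The class names and example words are illustrative assumptions, not ECTTL data.
LEXEMES = [
    {"lemma": "kizhi", "pos": "noun",
     "cls": {"en": "humans", "ru": "люди", "tyv": "кижилер"}},
    {"lemma": "adyg", "pos": "noun",
     "cls": {"en": "animals", "ru": "животные", "tyv": "дириг амытаннар"}},
    {"lemma": "khem", "pos": "noun",
     "cls": {"en": "natural objects", "ru": "природные объекты", "tyv": "бойдус"}},
]

def find_by_class(lexemes, cls_tag, lang="en"):
    """Return lemmas whose semantic class matches the tag in the given language."""
    return [lx["lemma"] for lx in lexemes if lx["cls"].get(lang) == cls_tag]

print(find_by_class(LEXEMES, "animals"))          # ['adyg']
print(find_by_class(LEXEMES, "люди", lang="ru"))  # ['kizhi']
```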

  13. Ghost Imaging of Space Objects Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Ghost imaging is an optical imaging technique that utilizes the correlations between optical fields in two channels. One of the channels contains the object,...

  14. Space-Ready Advanced Imaging System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this Phase II effort Toyon will increase the state-of-the-art for video/image systems. This will include digital image compression algorithms as well as system...

  15. Ghost Imaging of Space Objects Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The NIAC research effort entitled “The Ghost Imaging of Space Objects” has been inspired by the original 1995 Ghost Imaging and Ghost Diffraction...

  16. [Managing digital medical imaging projects in healthcare services: lessons learned].

    Science.gov (United States)

    Rojas de la Escalera, D

    2013-01-01

    Medical imaging is one of the most important diagnostic instruments in clinical practice. The technological development of digital medical imaging has enabled healthcare services to undertake large scale projects that require the participation and collaboration of many professionals of varied backgrounds and interests as well as substantial investments in infrastructures. Rather than focusing on systems for dealing with digital medical images, this article deals with the management of projects for implementing these systems, reviewing various organizational, technological, and human factors that are critical to ensure the success of these projects and to guarantee the compatibility and integration of digital medical imaging systems with other health information systems. To this end, the author relates several lessons learned from a review of the literature and the author's own experience in the technical coordination of digital medical imaging projects. Copyright © 2012 SERAM. Published by Elsevier Espana. All rights reserved.

  17. Biological Dynamics Markup Language (BDML): an open format for representing quantitative biological dynamics data.

    Science.gov (United States)

    Kyoda, Koji; Tohsato, Yukako; Ho, Kenneth H L; Onami, Shuichi

    2015-04-01

    Recent progress in live-cell imaging and modeling techniques has resulted in generation of a large amount of quantitative data (from experimental measurements and computer simulations) on spatiotemporal dynamics of biological objects such as molecules, cells and organisms. Although many research groups have independently dedicated their efforts to developing software tools for visualizing and analyzing these data, these tools are often not compatible with each other because of different data formats. We developed an open unified format, Biological Dynamics Markup Language (BDML; current version: 0.2), which provides a basic framework for representing quantitative biological dynamics data for objects ranging from molecules to cells to organisms. BDML is based on Extensible Markup Language (XML). Its advantages are machine and human readability and extensibility. BDML will improve the efficiency of development and evaluation of software tools for data visualization and analysis. A specification and a schema file for BDML are freely available online at http://ssbd.qbic.riken.jp/bdml/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
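    A toy illustration of the core idea of an XML format for spatiotemporal dynamics data: object positions over time, machine- and human-readable. The element and attribute names below are assumptions for the sketch and are not taken from the published BDML 0.2 schema.

```python
import xml.etree.ElementTree as ET

# Illustrative only: BDML is XML-based, but the element names below
# ("object", "point", t/x/y/z attributes) are assumptions for this
# sketch, not the published BDML 0.2 schema.
doc = """<bdml version="0.2">
  <object id="cell1">
    <point t="0" x="1.0" y="2.0" z="0.0"/>
    <point t="1" x="1.5" y="2.2" z="0.1"/>
  </object>
</bdml>"""

root = ET.fromstring(doc)
trajectory = [(float(p.get("t")), float(p.get("x")), float(p.get("y")))
              for p in root.find("object").findall("point")]
print(trajectory)   # [(0.0, 1.0, 2.0), (1.0, 1.5, 2.2)]
```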

  18. Modeling Hydrates and the Gas Hydrate Markup Language

    Directory of Open Access Journals (Sweden)

    Weihua Wang

    2007-06-01

    Natural gas hydrates, as important potential fuels, flow assurance hazards, and possible factors initiating submarine geo-hazards and global climate change, have attracted the interest of scientists all over the world. After two centuries of hydrate research, a great amount of scientific data on gas hydrates has been accumulated, so the means to manage, share, and exchange these data have become an urgent task. At present, markup-language-based metadata is recognized as one of the most efficient ways to facilitate data management, storage, integration, exchange, discovery, and retrieval. The CODATA Gas Hydrate Data Task Group has therefore proposed and specified the Gas Hydrate Markup Language (GHML) as an extensible conceptual metadata model to characterize the features of data on gas hydrates. This article introduces the details of the modeling portion of GHML.

  19. Using descriptive mark-up to formalize translation quality assessment

    CERN Document Server

    Kutuzov, Andrey

    2008-01-01

    The paper deals with using descriptive mark-up to emphasize translation mistakes. The author postulates the necessity to develop a standard and formal XML-based way of describing translation mistakes. It is considered to be important for achieving impersonal translation quality assessment. Marked-up translations can be used in corpus translation studies; moreover, automatic translation assessment based on marked-up mistakes is possible. The paper concludes with setting up guidelines for further activity within the described field.

  20. Field Data and the Gas Hydrate Markup Language

    Directory of Open Access Journals (Sweden)

    Ralf Löwner

    2007-06-01

    Data and information exchange are crucial for any kind of scientific research activity and are becoming more and more important. Comparison between different data sets and different disciplines creates new data, adds value, and finally accumulates knowledge. The distribution and accessibility of research results is also an important factor for international work. The gas hydrate research community is dispersed across the globe, and therefore a common technical communication language or format is strongly demanded. The CODATA Gas Hydrate Data Task Group is creating the Gas Hydrate Markup Language (GHML), a standard based on the Extensible Markup Language (XML), to enable the transport, modeling, and storage of all manner of objects related to gas hydrate research. GHML initially offers easily deducible content because of its text-based encoding of information, which does not use binary data. The result of these investigations is a custom-designed application schema, which describes the features, elements, and their properties, defining all aspects of gas hydrates. One of the components of GHML is the "Field Data" module, which is used for all data and information coming from the field. It considers international standards, particularly those defined by the W3C (World Wide Web Consortium) and the OGC (Open Geospatial Consortium). Various related standards were analyzed and compared with our requirements (in particular the Geography Markup Language (GML, ISO 19136) and the whole ISO 19000 series). However, the requirements demanded a quick solution and an XML application schema readable by any scientist without a background in information technology. Therefore, ideas, concepts, and definitions from these markup languages were used to build up the modules of GHML without importing any of them. This enables a comprehensive schema and simple use.

  1. Ultrahigh Resolution 3-Dimensional Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Southwest Sciences proposes to develop innovative instrumentation for the rapid, 3-dimensional imaging of biological tissues with cellular resolution. Our approach...

  2. Adaptive Computed Tomography Imaging Spectrometer Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The present proposal describes the development of an adaptive Computed Tomography Imaging Spectrometer (CTIS), or "Snapshot" spectrometer which can "instantaneously"...

  3. Highly Stable, Large Format EUV Imager Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Higher detection efficiency and better radiation tolerance imagers are needed for the next generation of EUV instruments. Previously, CCD technology has demonstrated...

  4. Synthetic Imaging Maneuver Optimization (SIMO) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-based interferometry missions have the potential to revolutionize imaging and astrometry, providing observations of unprecedented accuracy. Realizing the full...

  5. Measuring Brand Image Effects of Flagship Projects for Place Brands

    DEFF Research Database (Denmark)

    Zenker, Sebastian; Beckmann, Suzanne C.

    2013-01-01

    …(Elbphilharmonie), €400 million to develop the 'International Architectural Fair', and the city is also considering candidature again for the 'Olympic Games' in 2024/2028. As assessing the image effects of such projects is rather difficult, this article introduces an improved version of the Brand Concept Map approach, which was originally developed for product brands. An experimental design was used to first measure the Hamburg brand as such and then the changes in the brand perceptions after priming the participants (N=209) for one of the three different flagship projects. The findings reveal several important structural differences in the brand image dimensions of the city contingent upon the type of flagship project. Hence, this study shows (i) that different flagship projects have different image effects for the city brand and (ii) provides a novel method for measuring perceived place brand image effects…

  6. Airborne Wide Area Imager for Wildfire Mapping and Detection Project

    Data.gov (United States)

    National Aeronautics and Space Administration — An advanced airborne imaging system for fire detection/mapping is proposed. The goal of the project is to improve control and management of wildfires in order to...

  7. Tomographic image reconstruction from continuous projections

    NARCIS (Netherlands)

    Cant, J.; Palenstijn, W.J.; Behiels, G.; Sijbers, J.

    2014-01-01

    An important design aspect in tomographic image reconstruction is the choice between a step-and-shoot protocol versus continuous X-ray tube movement for image acquisition. A step-and-shoot protocol implies a perfectly still tube during X-ray exposure, and hence involves moving the tube to its next p

  8. The Assessment of Distortion in Neurosurgical Image Overlay Projection.

    Science.gov (United States)

    Vakharia, Nilesh N; Paraskevopoulos, Dimitris; Lang, Jozsef; Vakharia, Vejay N

    2016-02-01

    Numerous studies have demonstrated the superiority of neuronavigation during neurosurgical procedures compared to non-neuronavigation-based procedures. Limitations to neuronavigation systems include the need for the surgeons to avert their gaze from the surgical field and the cost of the systems, especially for hospitals in developing countries. Overlay projection of imaging directly onto the patient allows localization of intracranial structures. A previous study using overlay projection demonstrated the accuracy of image coregistration for a lesion in the temporal region but did not assess image distortion when projecting onto other anatomical locations. Our aim is to quantify this distortion and establish which regions of the skull would be most suitable for overlay projection. Using the difference in size of a square grid when projected onto an anatomically accurate model skull and a flat surface, from the same distance, we were able to calculate the degree of image distortion when projecting onto the skull from the anterior, posterior, superior, and lateral aspects. Measuring the size of a square when projected onto a flat surface from different distances allowed us to model change in lesion size when projecting a deep structure onto the skull surface. Using 2 mm as the upper limit for distortion, our results show that images can be accurately projected onto the majority (81.4%) of the surface of the skull. Our results support the use of image overlay projection in regions with ≤2 mm distortion to assist with localization of intracranial lesions at a fraction of the cost of existing methods. © The Author(s) 2015.
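    A hedged back-of-envelope model, not taken from the paper: if a projected length stretches by roughly 1/cos(θ) on a surface tilted by θ relative to the projection axis, one can estimate the largest tilt at which a 10 mm square stays within the paper's 2 mm acceptance limit.

```python
import math

# Back-of-envelope model (an assumption, not taken from the paper):
# projecting onto a patch tilted by theta relative to the projection
# axis stretches lengths by about 1/cos(theta). Solve for the largest
# tilt at which a 10 mm square stays within the 2 mm distortion limit:
#   L/cos(theta) - L <= limit  =>  cos(theta) >= L / (L + limit)
L, limit = 10.0, 2.0                        # mm
theta = math.degrees(math.acos(L / (L + limit)))
print(round(theta, 1))                      # ~33.6 degrees of tilt
```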

  9. Image reconstruction technique using projection data from neutron tomography system

    Directory of Open Access Journals (Sweden)

    Waleed Abd el Bar

    2015-12-01

    Neutron tomography is a very powerful technique for nondestructive evaluation of heavy industrial components as well as of soft hydrogenous materials enclosed in heavy metals, which are usually difficult to image using X-rays. Due to the properties of the image acquisition system, the projection images are distorted by several artifacts that reduce the quality of the reconstruction, so the projection images should be corrected before reconstruction. This paper describes a filtered back projection (FBP) technique used to reconstruct projection data obtained from transmission measurements with a neutron tomography system. We demonstrate the use of the spatial Discrete Fourier Transform (DFT) and the 2D inverse DFT in the formulation of the method, and outline the theory of reconstructing a 2D neutron image from a sequence of 1D projections taken at different angles between 0 and π in the MATLAB environment. Projections are generated by applying the Radon transform to the original image at different angles.
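    The pipeline the abstract outlines (Radon-transform projections over 0 to π, ramp filtering, back projection) can be sketched in Python rather than MATLAB. The phantom, angle count, and the use of `scipy.ndimage.rotate` as a stand-in projector are assumptions of this illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import rotate

# Minimal filtered back projection (FBP) sketch: forward-project a
# phantom at angles in [0, 180), ramp-filter each 1-D projection in
# the Fourier domain, and back-smear the filtered projections.
n = 64
phantom = np.zeros((n, n))
phantom[24:40, 24:40] = 1.0            # simple square test object

angles = np.linspace(0.0, 180.0, 90, endpoint=False)

# Forward projection (Radon transform): rotate, then sum along one axis.
sinogram = np.stack([rotate(phantom, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# Ramp filter applied to each 1-D projection via the FFT.
freqs = np.fft.fftfreq(n)
filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1)
                               * np.abs(freqs), axis=1))

# Back projection: smear each filtered projection across the image
# and rotate it back to its acquisition angle.
recon = np.zeros((n, n))
for proj, a in zip(filtered, angles):
    recon += rotate(np.tile(proj, (n, 1)), -a, reshape=False, order=1)
recon *= np.pi / (2 * len(angles))

# The reconstruction should correlate strongly with the phantom.
corr = np.corrcoef(recon.ravel(), phantom.ravel())[0, 1]
print(round(corr, 2))
```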

  10. Are the determinants of markup size industry-specific? The case of Slovenian manufacturing firms

    Directory of Open Access Journals (Sweden)

    Ponikvar Nina

    2011-01-01

    The aim of this paper is to identify factors that affect the pricing policy of Slovenian manufacturing firms in terms of markup size and, above all, to explicitly account for possible differences in pricing procedures among manufacturing industries. Accordingly, the dynamic panel analysis is carried out on an industry-by-industry basis, allowing the coefficients on the markup determinants to vary across industries. We find that the oligopoly theory of markup determination for the most part holds for the manufacturing sector as a whole, although the markup determinants vary considerably across industries within Slovenian manufacturing. Our main conclusion is that each industry should be investigated separately and in detail in order to assess the precise role of these factors in the markup-determination process.

  11. Total Variation and Tomographic Imaging from Projections

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jørgensen, Jakob Heide

    2011-01-01

    Total Variation (TV) regularization is a powerful technique for image reconstruction tasks such as denoising, in-painting, and deblurring, because of its ability to produce sharp edges in the images. In this talk we discuss the use of TV regularization for tomographic imaging, where we compute a 2D… incorporates our prior information about the solution and thus compensates for the loss of accuracy in the data. A consequence is that smaller data acquisition times can be used, thus reducing a patient's exposure to X-rays in medical scanning and speeding up non-destructive measurements in materials science.
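    The edge-preserving behavior the abstract attributes to TV can be demonstrated on a toy denoising problem. This is a sketch only: the smoothed TV penalty, the step size, the regularization weight, and the synthetic step image are assumptions of this illustration, not the authors' method.

```python
import numpy as np

# Minimal sketch of TV-regularized denoising: gradient descent on
#   0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps),
# using a smoothed TV term so the gradient is well defined.
rng = np.random.default_rng(0)
n = 32
clean = np.zeros((n, n))
clean[:, n // 2:] = 1.0                      # sharp vertical edge
noisy = clean + 0.2 * rng.standard_normal((n, n))

def tv_grad(u, eps=1e-3):
    """Gradient of the smoothed total variation of u."""
    gx = np.diff(u, axis=0, append=u[-1:, :])    # forward differences,
    gy = np.diff(u, axis=1, append=u[:, -1:])    # replicated boundary
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div = (np.diff(gx / mag, axis=0, prepend=0) +
           np.diff(gy / mag, axis=1, prepend=0))
    return -div

u = noisy.copy()
lam, step = 0.15, 0.2
for _ in range(200):
    u -= step * ((u - noisy) + lam * tv_grad(u))

# The TV-denoised result should be closer to the clean image than the input.
err_noisy = np.mean((noisy - clean) ** 2)
err_tv = np.mean((u - clean) ** 2)
print(err_tv < err_noisy)
```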

  12. The Lp-Curvature Images of Convex Bodies and Lp-Projection Bodies

    Indian Academy of Sciences (India)

    Songjun Lv; Gangsong Leng

    2008-08-01

    Associated with the Lp-curvature image defined by Lutwak, some inequalities for extended mixed Lp-affine surface areas of convex bodies and the support functions of Lp-projection bodies are established. As a natural extension of a result due to Lutwak, an Lp-type affine isoperimetric inequality, whose special cases are the Lp-Busemann–Petty centroid inequality and the Lp-affine projection inequality, respectively, is established. Some Lp-mixed volume inequalities involving Lp-projection bodies are also established.

  13. Towards a Semantics for XML Markup

    Institute of Scientific and Technical Information of China (English)

    Allen Renear; David Dubin; C. M. Sperberg-McQueen; Claus Huitfeldt; Wang Xiaoguang (trans.); Wang Junfang (trans.)

    2016-01-01

    Although XML Document Type Definitions provide a mechanism for specifying, in machine-readable form, the syntax of an XML markup language, there is no comparable mechanism for specifying the semantics of an XML vocabulary. That is, there is no way to characterize the meaning of XML markup so that the facts and relationships represented by the occurrence of XML constructs can be explicitly, comprehensively, and mechanically identified. This has serious practical and theoretical consequences. On the positive side, XML constructs can be assigned arbitrary semantics and used in application areas not foreseen by the original designers. On the less positive side, both content developers and application engineers must rely upon prose documentation or, worse, conjectures about the intention of the markup language designer, a process that is time-consuming, error-prone, incomplete, and unverifiable, even when the language designer properly documents the language. In addition, the lack of a substantial body of research in markup semantics means that digital document processing is undertheorized as an engineering application area. Although there are some related projects underway (XML Schema, RDF, the Semantic Web) that provide relevant results, none of them directly and comprehensively addresses the core problems of XML markup semantics. This paper (i) summarizes the history of the concept of markup meaning, (ii) characterizes the specific problems that motivate the need for a formal semantics for XML, and (iii) describes an ongoing research project, the BECHAMEL Markup Semantics Project, that is attempting to develop such a semantics.

  14. High Temperature Fiberoptic Thermal Imaging System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed Phase 1 program will fabricate and demonstrate a small diameter single fiber endoscope that can perform high temperature thermal imaging in a jet engine...

  15. High-Speed FPGA Image Decoder Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA space imagery is gathered and transmitted back to earth in many formats. One of the newer formats is the lossy/lossless image format CCSDS (CCSDS 122.0-B-1),...

  16. Portable Remote Imaging Spectrometer (PRISM) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop an UV-NIR (350nm to 1050 nm) portable remote imaging spectrometer (PRISM) for flight on a variety of airborne platforms with high SNR and response...

  17. Synthetic Imaging Maneuver Optimization (SIMO) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Aurora Flight Sciences (AFS), in collaboration with the MIT Space Systems Laboratory (MIT-SSL), proposed the Synthetic Imaging Maneuver Optimization (SIMO) program...

  18. Hyperspectral Single Pixel Image Sensor Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This is a drastic enhancement to the current prototype which only allows us to collect visible light and reconstruct a single wavelength image. This approach is a...

  19. Ge Quantum Dot Infrared Imaging Camera Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Luna Innovations Incorporated proposes to develop a high performance Ge quantum dots-based infrared (IR) imaging camera on Si substrate. The high sensitivity, large...

  20. Miniaturized Airborne Imaging Central Server System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is a miniaturized airborne imaging central server system (MAICSS). MAICSS is designed as a high-performance-computer-based electronic backend that...

  1. Miniaturized Airborne Imaging Central Server System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is a miniaturized airborne imaging central server system (MAICSS). MAICSS is designed as a high-performance computer-based electronic backend that...

  2. High resolution image reconstruction from projection of low resolution images differing in subpixel shifts

    Science.gov (United States)

    Mareboyana, Manohar; Le Moigne, Jacqueline; Bennett, Jerome

    2016-05-01

    In this paper, we demonstrate simple algorithms that project low resolution (LR) images differing by subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithms are very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques, using nearest neighbor, inverse-distance weighted averages, Radial Basis Functions (RBF), etc., are used in the projection. Reconstructing an SR image at a factor of two with the best accuracy requires four LR images differing by four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy is compared using the mean squared error between the original and reconstructed high resolution images. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP) and Maximum Likelihood (ML) algorithms. The algorithms are robust and not overly sensitive to registration inaccuracies.
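    For the special case where the four subpixel shifts are exact half-pixel offsets, the projection step reduces to interleaving the LR samples on the HR grid. The sketch below (random stand-in image, known shifts, nearest-neighbor placement) illustrates only step (ii); real data would need the registration step first.

```python
import numpy as np

# Sketch of the projection step for a factor-of-two super resolution.
# When the four LR images are shifted by exactly (0,0), (0,1), (1,0)
# and (1,1) HR pixels, their samples tile the HR grid, so
# nearest-neighbor projection reduces to interleaving.
rng = np.random.default_rng(1)
hr = rng.random((32, 32))                             # stand-in HR image

shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
lr_images = [hr[dy::2, dx::2] for dy, dx in shifts]   # simulate shifted LR

# Project each LR image onto the HR grid at its known subpixel shift.
sr = np.zeros_like(hr)
for (dy, dx), lr in zip(shifts, lr_images):
    sr[dy::2, dx::2] = lr

mse = np.mean((sr - hr) ** 2)
print(mse)   # 0.0 here: the four shifted LR grids exactly cover the HR grid
```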

  3. High Resolution Image Reconstruction from Projection of Low Resolution Images DIffering in Subpixel Shifts

    Science.gov (United States)

    Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome

    2016-01-01

    In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing by subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques, using nearest neighbor, inverse-distance weighted averages, Radial Basis Functions (RBF), etc., used in the projection yield comparable results. Reconstructing an SR image at a factor of two with the best accuracy requires four LR images differing by four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy is compared using the mean squared error between the original and reconstructed high resolution images. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a Posteriori (MAP) algorithms. The algorithm is robust and not overly sensitive to registration inaccuracies.

  4. High Resolution Digital Imaging of Paintings: The Vasari Project.

    Science.gov (United States)

    Martinez, Kirk

    1991-01-01

    Describes VASARI (the Visual Art System for Archiving and Retrieval of Images), a project funded by the European Community to show the feasibility of high resolution colormetric imaging directly from paintings. The hardware and software used in the system are explained, storage on optical disks is described, and initial results are reported. (five…

  5. Image Analysis of Fabric Pilling Based on Light Projection

    Institute of Scientific and Technical Information of China (English)

    陈霞; 黄秀宝

    2003-01-01

    Objective assessment of fabric pilling based on light projection and image analysis has been explored recently. The device for capturing cross-sectional images of pilled fabrics with light projection is elaborated. Detection of the profile line and integration of the sequential cross-sectional pilled images are discussed. A threshold based on a Gaussian model is recommended for pill segmentation. The results show that the installed system is capable of eliminating interference with pill information from the fabric color and pattern.

  6. Reconstruction of CT images by the Bayes- back projection method

    CERN Document Server

    Haruyama, M; Takase, M; Tobita, H

    2002-01-01

    In the course of research on quantitative assay for non-destructive measurement of radioactive waste, we have developed a unique program based on Bayesian theory for the reconstruction of transmission computed tomography (TCT) images. Reconstruction of cross-section images in CT usually employs the Filtered Back Projection method. The new image reconstruction program reported here is based on the Bayesian Back Projection method, and it iteratively improves the image at every step of measurement. Namely, this method can promptly display a cross-section image corresponding to each angled projection acquired, so it is possible to observe an improved cross-section view reflecting each projection in almost real time. From the basic theory of the Bayesian Back Projection method, it can be applied not only to CT of the 1st, 2nd, and 3rd generations. This report deals with a reconstruction program for cross-section images in the CT of ...

  7. Aligning Projection Images from Binary Volumes

    NARCIS (Netherlands)

    Bleichrodt, F.; Beenhouwer, J. de; Sijbers, J.; Batenburg, K.J.

    2014-01-01

    In tomography, slight differences between the geometry of the scanner hardware and the geometric model used in the reconstruction lead to alignment artifacts. To exploit high-resolution detectors used in many applications of tomography, alignment of the projection data is essential. Markerless align

  8. Scanned Image Projection System Employing Intermediate Image Plane

    Science.gov (United States)

    DeJong, Christian Dean (Inventor); Hudman, Joshua M. (Inventor)

    2014-01-01

    In an imaging system, a spatial light modulator is configured to produce images by scanning a plurality of light beams. A first optical element is configured to cause the plurality of light beams to converge along an optical path defined between the first optical element and the spatial light modulator. A second optical element is disposed between the spatial light modulator and a waveguide. The first optical element and the spatial light modulator are arranged such that an image plane is created between the spatial light modulator and the second optical element. The second optical element is configured to collect the diverging light from the image plane and collimate it. The second optical element then delivers the collimated light to a pupil at an input of the waveguide.

  9. Compressed Sensing Inspired Image Reconstruction from Overlapped Projections

    Directory of Open Access Journals (Sweden)

    Lin Yang

    2010-01-01

    Full Text Available The key idea discussed in this paper is to reconstruct an image from overlapped projections so that the data acquisition process can be shortened while the image quality remains essentially uncompromised. To perform image reconstruction from overlapped projections, conventional reconstruction approaches (e.g., filtered backprojection (FBP) algorithms) cannot be directly used because of two problems. First, overlapped projections represent an imaging system in terms of summed exponentials, which cannot be transformed into a linear form. Second, the overlapped measurement carries less information than the traditional line integrals. To meet these challenges, we propose a compressive sensing (CS) based iterative algorithm for reconstruction from overlapped data. This algorithm starts with a good initial guess, relies on adaptive linearization, and minimizes the total variation (TV). We then demonstrate the feasibility of this algorithm in numerical tests.
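The TV-minimization step can be sketched on a toy linear "overlapped" model (each measurement mixes two adjacent pixels). The forward matrix, step size, and weights below are assumptions for illustration; the paper's actual forward model is nonlinear and handled by adaptive linearization.

```python
import numpy as np

def tv(x, eps=1e-4):
    """Smoothed total variation of a 1-D signal."""
    return np.sum(np.sqrt(np.diff(x) ** 2 + eps))

def tv_grad(x, eps=1e-4):
    """Gradient of the smoothed TV term."""
    d = np.diff(x)
    w = d / np.sqrt(d ** 2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= w
    g[1:] += w
    return g

# Toy overlapped forward model: each measurement sums a pixel and
# half of its right neighbour.
n = 32
x_true = np.zeros(n); x_true[10:20] = 1.0            # piecewise-constant object
A = np.eye(n) + 0.5 * np.diag(np.ones(n - 1), 1)
y = A @ x_true                                        # overlapped measurements

# Gradient descent on 0.5*||Ax - y||^2 + lam*TV(x).
x = np.zeros(n)
step, lam = 0.05, 0.02
for _ in range(3000):
    x -= step * (A.T @ (A @ x - y) + lam * tv_grad(x))
```

The TV prior favours the piecewise-constant object, so the reconstruction recovers the plateau despite the mixing in the measurements.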

  10. The gel electrophoresis markup language (GelML) from the Proteomics Standards Initiative.

    Science.gov (United States)

    Gibson, Frank; Hoogland, Christine; Martinez-Bartolomé, Salvador; Medina-Aunon, J Alberto; Albar, Juan Pablo; Babnigg, Gyorgy; Wipat, Anil; Hermjakob, Henning; Almeida, Jonas S; Stanislaus, Romesh; Paton, Norman W; Jones, Andrew R

    2010-09-01

    The Human Proteome Organisation's Proteomics Standards Initiative has developed the GelML (gel electrophoresis markup language) data exchange format for representing gel electrophoresis experiments performed in proteomics investigations. The format closely follows the reporting guidelines for gel electrophoresis, which are part of the Minimum Information About a Proteomics Experiment (MIAPE) set of modules. GelML supports the capture of metadata (such as experimental protocols) and data (such as gel images) resulting from gel electrophoresis so that laboratories can be compliant with the MIAPE Gel Electrophoresis guidelines, while allowing such data sets to be exchanged or downloaded from public repositories. The format is sufficiently flexible to capture data from a broad range of experimental processes, and complements other PSI formats for MS data and the results of protein and peptide identifications to capture entire gel-based proteome workflows. GelML has resulted from the open standardisation process of PSI consisting of both public consultation and anonymous review of the specifications.

  11. Precision markup modeling and display in a global geo-spatial environment

    Science.gov (United States)

    Wartell, Zachary J.; Ribarsky, William; Faust, Nickolas L.

    2003-08-01

    A data organization, scalable structure, and multiresolution visualization approach is described for precision markup modeling in a global geospatial environment. The global environment supports interactive visual navigation from global overviews to details on the ground at the resolution of inches or less. This is a difference in scale of 10 orders of magnitude or more. To efficiently handle details over this range of scales while providing accurate placement of objects, a set of nested coordinate systems is used, which always refers, through a series of transformations, to the fundamental world coordinate system (with its origin at the center of the earth). This coordinate structure supports multi-resolution models of imagery, terrain, vector data, buildings, moving objects, and other geospatial data. Thus objects that are static or moving on the terrain can be displayed without inaccurate positioning or jumping due to coordinate round-off. Examples of high resolution images, 3D objects, and terrain-following annotations are shown.

  12. Field Markup Language: biological field representation in XML.

    Science.gov (United States)

    Chang, David; Lovell, Nigel H; Dokos, Socrates

    2007-01-01

    With an ever increasing number of biological models available on the internet, a standardized modeling framework is required to allow information to be accessed or visualized. Based on the Physiome Modeling Framework, the Field Markup Language (FML) is being developed to describe and exchange field information for biological models. In this paper, we describe the basic features of FML, its supporting application framework and its ability to incorporate CellML models to construct tissue-scale biological models. As a typical application example, we present a spatially-heterogeneous cardiac pacemaker model which utilizes both FML and CellML to describe and solve the underlying equations of electrical activation and propagation.

  13. Experimental Applications of Automatic Test Markup Language (ATML)

    Science.gov (United States)

    Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris

    2012-01-01

    The authors describe challenging use-cases for Automatic Test Markup Language (ATML), and evaluate solutions. The first case uses ATML Test Results to deliver active features to support test procedure development and test flow, and bridging mixed software development environments. The second case examines adding attributes to Systems Modelling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.

  14. Stereoscopic interpretation of low-dose breast tomosynthesis projection images.

    Science.gov (United States)

    Muralidhar, Gautam S; Markey, Mia K; Bovik, Alan C; Haygood, Tamara Miner; Stephens, Tanya W; Geiser, William R; Garg, Naveen; Adrada, Beatriz E; Dogan, Basak E; Carkaci, Selin; Khisty, Raunak; Whitman, Gary J

    2014-04-01

    The purpose of this study was to evaluate stereoscopic perception of low-dose breast tomosynthesis projection images. In this Institutional Review Board exempt study, craniocaudal breast tomosynthesis cases (N = 47), consisting of 23 biopsy-proven malignant mass cases and 24 normal cases, were retrospectively reviewed. A stereoscopic pair comprised of two projection images that were ±4° apart from the zero angle projection was displayed on a Planar PL2010M stereoscopic display (Planar Systems, Inc., Beaverton, OR, USA). An experienced breast imager verified the truth for each case stereoscopically. A two-phase blinded observer study was conducted. In the first phase, two experienced breast imagers rated their ability to perceive 3D information using a scale of 1-3 and described the most suspicious lesion using the BI-RADS® descriptors. In the second phase, four experienced breast imagers were asked to make a binary decision on whether they saw a mass for which they would initiate a diagnostic workup or not and also report the location of the mass and provide a confidence score in the range of 0-100. The sensitivity and the specificity of the lesion detection task were evaluated. The results from our study suggest that radiologists who can perceive stereo can reliably interpret breast tomosynthesis projection images using stereoscopic viewing.

  15. A novel augmented reality system of image projection for image-guided neurosurgery.

    Science.gov (United States)

    Mahvash, Mehran; Besharati Tabrizi, Leila

    2013-05-01

    Augmented reality systems combine virtual images with a real environment. The aim of this study was to design and develop an augmented reality system for image-guided surgery of brain tumors using image projection. A virtual image was created in two ways: (1) an MRI-based 3D model of the head matched with the segmented lesion of a patient using MRIcro software (version 1.4, freeware, Chris Rorden) and (2) a digital photograph-based model in which the tumor region was drawn using image-editing software. The real environment was simulated with a head phantom. For direct projection of the virtual image onto the head phantom, a commercially available video projector (PicoPix 1020, Philips) was used. The position and size of the virtual image were adjusted manually for registration, which was performed using anatomical landmarks and fiducial marker positions. An augmented reality system for image-guided neurosurgery using direct image projection has been designed successfully and implemented in a first evaluation with promising results. The virtual image could be projected onto the head phantom and was registered manually. Accurate registration (mean projection error: 0.3 mm) was achieved using anatomical landmarks and fiducial marker positions. The direct projection of a virtual image onto the patient's head, skull, or brain surface in real time is an augmented reality approach that can be used for image-guided neurosurgery. In this paper, the first evaluation of the system is presented. The encouraging first visualization results indicate that the presented augmented reality system might be an important enhancement of image-guided neurosurgery.

  16. Parallel hyperspectral image reconstruction using random projections

    Science.gov (United States)

    Sevilla, Jorge; Martín, Gabriel; Nascimento, José M. P.

    2016-10-01

    Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated to be an effective and very lightweight way to reduce the number of measurements in hyperspectral data, so that the data to be transmitted to the Earth station are reduced. However, the reconstruction of the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low-dimensional subspace. SpeCA has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPU) using the compute unified device architecture (CUDA). Experimental results conducted using synthetic and real hyperspectral datasets on an NVIDIA GeForce GTX 980 GPU reveal that the use of GPUs can provide real-time reconstruction. The achieved speedup is up to 22 times compared with the processing time of SpeCA running on one core of an Intel i7-4790K CPU (3.4 GHz) with 32 GB of memory.
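The core idea, recovering hyperspectral vectors from random projections by exploiting a low-dimensional spectral subspace, can be sketched as follows. Unlike SpeCA, which is blind and must estimate the subspace, this sketch assumes the subspace basis E is already known; all matrix sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, m = 100, 5, 20          # bands, subspace dimension, measurements (m << n)

E = rng.normal(size=(n, p))   # basis of the low-dimensional spectral subspace
s = rng.normal(size=p)
x = E @ s                     # a hyperspectral pixel lying in span(E)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random projection matrix
y = Phi @ x                   # compressed measurement sent to the Earth station

# Recovery: since x = E s, only the small m x p system Phi E must be solved.
s_hat, *_ = np.linalg.lstsq(Phi @ E, y, rcond=None)
x_hat = E @ s_hat
```

Because only p = 5 coefficients are unknown, m = 20 random measurements suffice to recover all 100 bands exactly, which is why the onboard side can be so lightweight while the reconstruction cost is pushed to the ground (or, as in the paper, to a GPU).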

  17. Basis Functions in Image Reconstruction From Projections: A Tutorial Introduction

    Science.gov (United States)

    Herman, Gabor T.

    2015-11-01

    The series expansion approaches to image reconstruction from projections assume that the object to be reconstructed can be represented as a linear combination of fixed basis functions, and the task of the reconstruction algorithm is to estimate the coefficients of such a linear combination based on the measured projection data. It is demonstrated that using spherically symmetric basis functions (blobs), instead of ones based on the more traditional pixels, yields superior reconstructions of medically relevant objects. The demonstration uses simulated computerized tomography projection data of head cross-sections and the series expansion method ART for the reconstruction. In addition to showing the results of one anecdotal example, the relative efficacy of using pixel and blob basis functions in image reconstruction from projections is also evaluated using a statistical hypothesis testing based task-oriented comparison methodology. The superiority of the efficacy of blob basis functions over that of pixel basis functions is found to be statistically significant.
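The series expansion method ART mentioned above estimates the basis-function coefficients by cyclically projecting the current estimate onto the hyperplane defined by each measurement. A minimal sketch on a toy system follows; the matrix and data are assumptions, and pixel basis functions are used for simplicity (with blobs, only the entries of A would change).

```python
import numpy as np

def art(A, y, n_sweeps=50, relax=1.0):
    """Kaczmarz/ART: sweep over the rays, projecting the current
    coefficient estimate onto the hyperplane of each measurement."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (y[i] - a @ x) / (a @ a) * a
    return x

# Tiny consistent system standing in for projection data:
# 5 rays through a 2x2 image with pixel basis functions.
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 0., 1.]])
x_true = np.array([1., 2., 3., 4.])
y = A @ x_true
x = art(A, y)
```

For this consistent, fully determined toy system ART converges to the true coefficient vector; the relaxation parameter `relax` is the usual knob for noisy data.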

  18. Trade reforms, mark-ups and bargaining power of workers: the case ...

    African Journals Online (AJOL)

    Trade reforms, mark-ups and bargaining power of workers: the case of ... from firms' market power, which is negatively associated with trade reforms. ... model of mark-up with labor bargaining power was estimated using random ...

  19. Advanced Imaging Catheter: Final Project Report

    Energy Technology Data Exchange (ETDEWEB)

    Krulevitch, P; Colston, B; DaSilva, L; Hilken, D; Kluiwstra, J U; Lee, A P; London, R; Miles, R; Schumann, D; Seward, K; Wang, A

    2001-07-20

    Minimally invasive surgery (MIS) is an approach whereby procedures conventionally performed with large and potentially traumatic incisions are replaced by several tiny incisions through which specialized instruments are inserted. Early MIS, often called laparoscopic surgery, used video cameras and laparoscopes to visualize and control the medical devices, which were typically cutting or stapling tools. More recently, catheter-based procedures have become a fast growing sector of all surgeries. In these procedures, small incisions are made into one of the main arteries (e.g. femoral artery in the thigh), and a long thin hollow tube is inserted and positioned near the target area. The key advantage of this technique is that recovery time can be reduced from months to a matter of days. In the United States, over 700,000 catheter procedures are performed annually representing a market of over $350 million. Further growth in this area will require significant improvements in the current catheter technology. In order to effectively navigate a catheter through the tortuous vessels of the body, two capabilities must exist: imaging and positioning. In most cases, catheter procedures rely on radiography for visualization and manual manipulation for positioning of the device. Radiography provides two-dimensional, global images of the vasculature and cannot be used continuously due to radiation exposure to both the patient and physician. Intravascular ultrasound devices are available for continuous local imaging at the catheter tip, but these devices cannot be used simultaneously with therapeutic devices. Catheters are highly compliant devices, and manipulating the catheter is similar to pushing on a string. Often, a guide wire is used to help position the catheter, but this procedure has its own set of problems. 
Three characteristics are used to describe catheter maneuverability: (1) pushability -- the amount of linear displacement of the distal end (inside body) relative to

  20. Planned growth as a determinant of the markup: the case of Slovenian manufacturing

    Directory of Open Access Journals (Sweden)

    Maks Tajnikar

    2009-11-01

    Full Text Available The paper follows the idea of heterodox economists that a cost-plus price is above all a reproductive price and growth price. The authors apply a firm-level model of markup determination which, in line with theory and empirical evidence, contains proposed firm-specific determinants of the markup, including the firm’s planned growth. The positive firm-level relationship between growth and markup that is found in data for Slovenian manufacturing firms implies that retained profits gathered via the markup are an important source of growth financing and that the investment decisions of Slovenian manufacturing firms affect their pricing policy and decisions on the markup size as proposed by Post-Keynesian theory. The authors thus conclude that at least a partial trade-off between a firm’s growth and competitive outcome exists in Slovenian manufacturing.

  1. Object-Image Correspondence for Algebraic Curves under Projections

    Directory of Open Access Journals (Sweden)

    Joseph M. Burdis

    2013-03-01

    Full Text Available We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of the number of real parameters that need to be eliminated in order to establish existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of the signature construction that has been used to solve the equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

  2. Multispectral high-resolution hologram generation using orthographic projection images

    Science.gov (United States)

    Muniraj, I.; Guo, C.; Sheridan, J. T.

    2016-08-01

    We present a new method of synthesizing a digital hologram of three-dimensional (3D) real-world objects from multiple orthographic projection images (OPIs). High-resolution multiple perspectives of the 3D objects (i.e., a two-dimensional elemental image array) are captured under incoherent white light using the synthetic aperture integral imaging (SAII) technique, and the corresponding OPIs are obtained. The reference beam is then multiplied with each OPI and integrated to form a Fourier hologram. Eventually, a modified phase retrieval algorithm (GS/HIO) is applied to reconstruct the hologram. The principle is validated experimentally and the results support the feasibility of the proposed method.

  3. Discriminating Projections for Estimating Face Age in Wild Images

    Energy Technology Data Exchange (ETDEWEB)

    Tokola, Ryan A [ORNL; Bolme, David S [ORNL; Ricanek, Karl [ORNL; Barstow, Del R [ORNL; Boehnen, Chris Bensing [ORNL

    2014-01-01

    We introduce a novel approach to estimating the age of a human from a single uncontrolled image. Current face age estimation algorithms work well in highly controlled images, and some are robust to changes in illumination, but it is usually assumed that images are close to frontal. This bias is clearly seen in the datasets that are commonly used to evaluate age estimation, which either entirely or mostly consist of frontal images. Using pose-specific projections, our algorithm maps image features into a pose-insensitive latent space that is discriminative with respect to age. Age estimation is then performed using a multi-class SVM. We show that our approach outperforms other published results on the Images of Groups dataset, which is the only age-related dataset with a non-trivial number of off-axis face images, and that we are competitive with recent age estimation algorithms on the mostly-frontal FG-NET dataset. We also experimentally demonstrate that our feature projections introduce insensitivity to pose.

  4. Test for optical systems in laser projection imaging for PCB

    Science.gov (United States)

    Qin, Ouyang; Zhou, Jinyun; Lei, Liang; Lin, Qinghua

    2010-11-01

    Projection imaging is one of the most important steps in the fabrication of printed circuit boards. In order to meet the increasing demand for higher resolution, higher speed and larger imaging area, a novel Laser Projection Imaging (LPI) system has been developed to take the place of conventional Hg lamp exposure. We set up a system with a resolution of 10 μm over a large exposure area of 460 mm × 610 mm on substrate materials. The system combines three main parts: an XeF excimer laser with a wavelength of 351 nm and a single-pulse energy of 120 mJ, an illumination system with a numerical aperture (NA) of 0.02, and a double telecentric optical projection lens with an NA of 0.025. Such a design can theoretically meet the demands of actual lithography. However, experiments have shown that the propagation loss of laser power from the light source to the substrate can be 50% or more, making it hard to achieve the expected results. In this paper, we present the results of experiments under different conditions on the laser projection imaging equipment; parameters such as gas lifetime, pulse repetition rate and exposure dose, as well as the optical loss of the quartz microlens array, are analyzed. Finally, we determined the optimum exposure parameters.

  5. Root system markup language: toward a unified root architecture description language.

    Science.gov (United States)

    Lobet, Guillaume; Pound, Michael P; Diener, Julien; Pradal, Christophe; Draye, Xavier; Godin, Christophe; Javaux, Mathieu; Leitner, Daniel; Meunier, Félicien; Nacry, Philippe; Pridmore, Tony P; Schnepf, Andrea

    2015-03-01

    The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflows.
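Consuming such a file can be sketched with Python's standard library. The snippet below is a minimal, illustrative RSML-like document (not the full official schema), and computing total root length from the polyline is an assumed example of the architectural features such files carry.

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative RSML-like document (not the full official schema).
rsml = """<?xml version='1.0'?>
<rsml>
  <scene>
    <plant id="plant1">
      <root id="root1">
        <geometry>
          <polyline>
            <point x="0.0" y="0.0"/>
            <point x="0.5" y="-1.2"/>
            <point x="0.8" y="-2.9"/>
          </polyline>
        </geometry>
      </root>
    </plant>
  </scene>
</rsml>"""

root = ET.fromstring(rsml)
points = [(float(p.get("x")), float(p.get("y")))
          for p in root.iter("point")]
# Total root length as the sum of segment lengths along the polyline.
length = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(points, points[1:]))
```

Because the geometry is stored as plain XML, any of the tools mentioned in the abstract can exchange the same polyline without a bespoke parser for each format.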

  6. Image Segmentation in Liquid Argon Time Projection Chamber Detector

    CERN Document Server

    Płoński, Piotr; Sulej, Robert; Zaremba, Krzysztof

    2015-01-01

    Liquid Argon Time Projection Chamber (LAr-TPC) detectors provide excellent imaging and particle identification ability for studying neutrinos. Efficient and automatic reconstruction procedures are required to exploit the potential of this imaging technology. Herein, a novel method for segmentation of images from LAr-TPC detectors is presented. The proposed approach computes a feature descriptor for each pixel in the image, which characterizes the amplitude distribution in the pixel and its neighbourhood. A supervised classifier is employed to distinguish between pixels representing particle tracks and noise. The classifier is trained and evaluated on a hand-labeled dataset. The proposed approach can serve as a preprocessing step for reconstruction algorithms working directly on detector images.

  7. HIGH RESOLUTION IMAGE PROJECTION IN FREQUENCY DOMAIN FOR CONTINUOUS IMAGE SEQUENCE

    Directory of Open Access Journals (Sweden)

    M. Nagaraju Naik

    2010-09-01

    Full Text Available Unlike most other information technologies, which have enjoyed exponential growth for the past several decades, display resolution has largely stagnated. Low display resolution has in turn limited the resolution of digital images. Scaling is a non-trivial process that involves a trade-off between efficiency, smoothness and sharpness. As the size of an image is increased, the pixels which comprise the image become increasingly visible, making the image appear soft. Super-scalar representation of an image sequence is limited by the image information present in the low-dimensional image sequence. To project an image frame sequence to a high-resolution static or fractional scaling value, a scaling approach is developed based on energy spectral interpolation and frequency spectral interpolation techniques. To realize the frequency spectral resolution, the cubic B-spline method is used.

  8. A standard MIGS/MIMS compliant XML Schema: toward the development of the Genomic Contextual Data Markup Language (GCDML).

    Science.gov (United States)

    Kottmann, Renzo; Gray, Tanya; Murphy, Sean; Kagan, Leonid; Kravitz, Saul; Lombardot, Thierry; Field, Dawn; Glöckner, Frank Oliver

    2008-06-01

    The Genomic Contextual Data Markup Language (GCDML) is a core project of the Genomic Standards Consortium (GSC) that implements the "Minimum Information about a Genome Sequence" (MIGS) specification and its extension, the "Minimum Information about a Metagenome Sequence" (MIMS). GCDML is an XML Schema for generating MIGS/MIMS compliant reports for data entry, exchange, and storage. When mature, this sample-centric, strongly-typed schema will provide a diverse set of descriptors for describing the exact origin and processing of a biological sample, from sampling to sequencing, and subsequent analysis. Here we describe the need for such a project, outline design principles required to support the project, and make an open call for participation in defining the future content of GCDML. GCDML is freely available, and can be downloaded, along with documentation, from the GSC Web site (http://gensc.org).

  9. Pathology data integration with eXtensible Markup Language.

    Science.gov (United States)

    Berman, Jules J

    2005-02-01

    It is impossible to overstate the importance of XML (eXtensible Markup Language) as a data organization tool. With XML, pathologists can annotate all of their data (clinical and anatomic) in a format that can transform every pathology report into a database, without compromising narrative structure. The purpose of this manuscript is to provide an overview of XML for pathologists. Examples will demonstrate how pathologists can use XML to annotate individual data elements and to structure reports in a common format that can be merged with other XML files or queried using standard XML tools. This manuscript gives pathologists a glimpse into how XML allows pathology data to be linked to other types of biomedical data and reduces our dependence on centralized proprietary databases.
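The "every report becomes a database" idea can be sketched with the standard library. The element names and codes below are hypothetical illustrations, not a standard pathology schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment: a report whose data elements are XML-annotated.
report = """<report>
  <specimen site="skin" procedure="biopsy"/>
  <diagnosis code="M-80703" term="squamous cell carcinoma"/>
  <diagnosis code="M-09450" term="margins free of tumor"/>
</report>"""

root = ET.fromstring(report)
# Once annotated, the report can be queried like a database record
# without disturbing any surrounding narrative text.
codes = [d.get("code") for d in root.findall("diagnosis")]
site = root.find("specimen").get("site")
```

The same `findall` query runs unchanged across a directory of such files, which is the merging-and-querying capability the abstract describes.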

  10. Earth Science Markup Language: Transitioning From Design to Application

    Science.gov (United States)

    Moe, Karen; Graves, Sara; Ramachandran, Rahul

    2002-01-01

    The primary objective of the proposed Earth Science Markup Language (ESML) research is to transition from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1. Refinement of the ESML schema and software libraries in cooperation with the user community. 2. Application of the ESML schema and software libraries to a variety of Earth science data sets and analysis tools. 3. Development of supporting prototype software for enhanced ease of use. 4. Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate. 5. Widespread publication of the ESML approach, schema, and software.

  11. Tomographic image reconstruction from limited projections using iterative revisions in image and transform spaces.

    Science.gov (United States)

    Sato, T; Norton, S J; Linzer, M; Ikeda, O; Hirama, M

    1981-02-01

    An iterative technique is proposed for improving the quality of reconstructions from projections when the number of projections is small or the angular range of projections is limited. The technique consists of transforming repeatedly between image and transform spaces and applying a priori object information at each iteration. The approach is a generalization of the Gerchberg-Papoulis algorithm, a technique for extrapolating in the Fourier domain by imposing a space-limiting constraint on the object in the spatial domain. A priori object data that may be applied, in addition to truncating the image beyond the known boundaries of the object, include limiting the maximum range of variation of the physical parameter being imaged. The results of computer simulations show clearly how the process of forcing the image to conform to a priori object data reduces artifacts arising from limited data available in the Fourier domain.
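The iteration described, alternating between imposing the measured Fourier data and the a priori spatial support, can be sketched as a generic Gerchberg-Papoulis-style example. The signal length, support, and measured band below are assumed for illustration and are not the paper's simulation.

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
support = np.zeros(n, dtype=bool)
support[20:40] = True                       # known extent of the object
x_true = np.zeros(n)
x_true[20:40] = rng.normal(size=20)

F_true = np.fft.fft(x_true)
known = np.zeros(n, dtype=bool)
known[:11] = True                           # only low frequencies are
known[-10:] = True                          # "measured" (conjugate band)

x = np.zeros(n)                             # initial image
for _ in range(500):
    F = np.fft.fft(x)
    F[known] = F_true[known]                # enforce measured Fourier data
    x = np.fft.ifft(F).real
    x[~support] = 0.0                       # enforce known spatial support

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Each pass cannot increase the error, so repeatedly forcing the image to conform to the a priori data extrapolates the unmeasured Fourier region, exactly as the abstract describes for limited-angle data.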

  12. A study of images of Projective Angles of pulmonary veins

    Energy Technology Data Exchange (ETDEWEB)

    Wang Jue [Beijing Anzhen Hospital, Beijing (China); Zhaoqi, Zhang [Beijing Anzhen Hospital, Beijing (China)], E-mail: zhaoqi5000@vip.sohu.com; Yu Wei; Miao Cuilian; Yan Zixu; Zhao Yike [Beijing Anzhen Hospital, Beijing (China)

    2009-09-15

    Aims: In magnetic resonance and computed tomography (CT) images there are visible angles between the pulmonary veins and the coronal, transverse or sagittal sections of the body. In this study these angles are measured and defined as Projective Angles of the pulmonary veins. Several possible influencing factors and the characteristics of the distribution are studied and analyzed for a better understanding of this imaging anatomic characteristic of the pulmonary veins. It could also form the anatomic basis for correctly adjusting the angle of the central X-ray in angiography of the pulmonary veins during catheter ablation of atrial fibrillation (AF). Method: Images of contrast-enhanced magnetic resonance angiography (CEMRA) and contrast-enhanced computed tomography (CECT) of the left atrium and pulmonary veins of 137 healthy subjects and patients with atrial fibrillation (AF) were post-processed, and the Projective Angles to the coronal and transverse sections were measured and analyzed statistically. Result: Projective Angles of the pulmonary veins are a real and stable imaging anatomic characteristic of the pulmonary veins. The statistical distribution of the variables is relatively concentrated, with the average value being fairly representative. It is possible to adjust the angle of the central X-ray according to the average value in selective angiography of the pulmonary veins during catheter ablation of AF.

  13. Data on the interexaminer variation of minutia markup on latent fingerprints.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2016-09-01

    The data in this article supports the research paper entitled "Interexaminer variation of minutia markup on latent fingerprints" [1]. The data in this article describes the variability in minutia markup during both analysis of the latents and comparison between latents and exemplars. The data was collected in the "White Box Latent Print Examiner Study," in which each of 170 volunteer latent print examiners provided detailed markup documenting their examinations of latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. Each examiner examined 22 latent-exemplar pairs; an average of 12 examiners marked each latent.

  14. Data on the interexaminer variation of minutia markup on latent fingerprints

    Directory of Open Access Journals (Sweden)

    Bradford T. Ulery

    2016-09-01

    Full Text Available The data in this article supports the research paper entitled “Interexaminer variation of minutia markup on latent fingerprints” [1]. The data in this article describes the variability in minutia markup during both analysis of the latents and comparison between latents and exemplars. The data was collected in the “White Box Latent Print Examiner Study,” in which each of 170 volunteer latent print examiners provided detailed markup documenting their examinations of latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. Each examiner examined 22 latent-exemplar pairs; an average of 12 examiners marked each latent.

  15. Relevance Preserving Projection and Ranking for Web Image Search Reranking.

    Science.gov (United States)

    Ji, Zhong; Pang, Yanwei; Li, Xuelong

    2015-11-01

    An image search reranking (ISR) technique aims at refining text-based search results by mining images' visual content. Feature extraction and ranking function design are two key steps in ISR. Inspired by the idea of the hypersphere in one-class classification, this paper proposes a feature extraction algorithm named hypersphere-based relevance preserving projection (HRPP) and a ranking function called hypersphere-based rank (H-Rank). Specifically, HRPP is a spectral embedding algorithm that transforms an original high-dimensional feature space into an intrinsically low-dimensional hypersphere space by preserving the manifold structure and the relevance relationships among the images. H-Rank is a simple but effective ranking algorithm that sorts the images by their distances to the hypersphere center. Moreover, to capture the user's intent with minimal human interaction, a reversed k-nearest neighbor (KNN) algorithm is proposed, which harvests enough pseudorelevant images by requiring the user to give only one click on the initially retrieved images. The HRPP method with reversed KNN is named one-click-based HRPP (OC-HRPP). Finally, the OC-HRPP algorithm and the H-Rank algorithm form a new ISR method, H-reranking. Extensive experimental results on three large real-world data sets show that the proposed algorithms are effective. Moreover, the fact that only one relevant image needs to be labeled gives the method strong practical significance.
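    The distance-to-center ranking idea can be sketched on synthetic features (an assumed simplification of the paper's method, without the spectral embedding): pseudorelevant images harvested from the user's single click define a hypersphere center, and all candidates are reranked by distance to it.

```python
import numpy as np

# Toy H-Rank-style reranking: rows 0-19 form the pseudorelevant cluster,
# rows 20-39 are off-topic; closer to the center means more relevant.
rng = np.random.default_rng(1)
relevant = rng.normal(loc=0.0, scale=0.3, size=(20, 16))    # pseudorelevant cluster
irrelevant = rng.normal(loc=3.0, scale=0.3, size=(20, 16))  # off-topic images
features = np.vstack([relevant, irrelevant])

center = relevant.mean(axis=0)                 # hypersphere center from the seed set
dists = np.linalg.norm(features - center, axis=1)
ranking = np.argsort(dists)                    # ascending distance = reranked order
```

    In this well-separated toy all 20 relevant rows land at the top of the reranked list.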

  16. Measuring Brand Image Effects of Flagship Projects for Place Brands

    DEFF Research Database (Denmark)

    Zenker, Sebastian; Beckmann, Suzanne C.

    2013-01-01

    Cities invest large sums of money in ‘flagship projects’, with the aim not only of developing the city as such, but also of changing perceptions of the city brand towards a desired image. The city of Hamburg, Germany, is currently investing €575 million to build a new symphony hall (Elbphilharmonie), €400 million to develop the ‘International Architectural Fair’, and it is also considering a renewed candidature for the ‘Olympic Games’ in 2024/2028. As assessing the image effects of such projects is rather difficult, this article introduces an improved version of the Brand Concept Map approach, which was originally developed for product brands. An experimental design was used to first measure the Hamburg brand as such and then the changes in brand perceptions after priming the participants (N=209) for one of the three different flagship projects. The findings reveal several important...

  17. FuGEFlow: data model and markup language for flow cytometry.

    Science.gov (United States)

    Qian, Yu; Tchuvatkina, Olga; Spidlen, Josef; Wilkinson, Peter; Gasparetto, Maura; Jones, Andrew R; Manion, Frank J; Scheuermann, Richard H; Sekaly, Rafick-Pierre; Brinkman, Ryan R

    2009-06-16

    Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts being taken to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer-reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt compliant experiment description. The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets. Additional project documentation, including

  18. FuGEFlow: data model and markup language for flow cytometry

    Directory of Open Access Journals (Sweden)

    Manion Frank J

    2009-06-01

    . Additional project documentation, including reusable design patterns and a guide for setting up a development environment, was contributed back to the FuGE project. Conclusion We have shown that an extension of FuGE can be used to transform minimum information requirements in natural language to markup language in XML. Extending FuGE required significant effort, but in our experiences the benefits outweighed the costs. The FuGEFlow is expected to play a central role in describing flow cytometry experiments and ultimately facilitating data exchange including public flow cytometry repositories currently under development.

  19. Imaging through turbid media via sparse representation: imaging quality comparison of three projection matrices

    Science.gov (United States)

    Shao, Xiaopeng; Li, Huijuan; Wu, Tengfei; Dai, Weijia; Bi, Xiangli

    2015-05-01

    The incident light is scattered by the inhomogeneity of the refractive index in many materials, which greatly reduces the imaging depth and degrades the imaging quality. Many exciting methods have been presented in recent years to solve this problem and realize imaging through a highly scattering medium, such as wavefront modulation and reconstruction techniques. An imaging method based on compressed sensing (CS) theory can decrease the computational complexity because it does not require the whole speckle pattern for reconstruction. One of the key premises of this method is that the object is sparse or can be sparsely represented. However, choosing a proper projection matrix is very important to the imaging quality. In this paper, we show that the transmission matrix (TM) of a scattering medium obeys a circular Gaussian distribution, which makes it possible to use a scattering medium as the measurement matrix in CS theory. To verify the performance of this method, a whole optical system is simulated. Various projection matrices are introduced to sparsify the object, including the fast Fourier transform (FFT) basis, the discrete cosine transform (DCT) basis and the discrete wavelet transform (DWT) basis, and their imaging performances are compared comprehensively. Simulation results show that for most targets, applying the discrete wavelet transform basis yields an image of good quality. This work can be applied to biomedical imaging and used to develop real-time imaging through highly scattering media.
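    The measurement model can be sketched under stated assumptions: a random complex circular-Gaussian matrix stands in for the medium's transmission matrix, the object is sparse in the identity basis, and a generic CS solver (orthogonal matching pursuit here, not necessarily the paper's solver) recovers the object from a few speckle measurements.

```python
import numpy as np

# y = T @ obj: T models the circular-Gaussian transmission matrix of the
# scattering medium; obj is a k-sparse object; OMP recovers obj from y.
rng = np.random.default_rng(2)
n_pix, n_meas, k = 128, 48, 4
T = (rng.normal(size=(n_meas, n_pix)) +
     1j * rng.normal(size=(n_meas, n_pix))) / np.sqrt(2 * n_meas)

obj = np.zeros(n_pix)
obj[rng.choice(n_pix, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = T @ obj

# Orthogonal matching pursuit: pick the column most correlated with the
# residual, then refit y on the selected support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(T.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(T[:, support], y, rcond=None)
    residual = y - T[:, support] @ coef

obj_hat = np.zeros(n_pix)
obj_hat[support] = coef.real
```

    With many more measurements than nonzeros, the Gaussian TM satisfies the incoherence CS needs and the sparse object is recovered exactly.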

  20. Computed tomography image reconstruction from only two projections

    CERN Document Server

    Mohammad-Djafari, Ali

    2007-01-01

    English: This paper concerns image reconstruction from a few projections in computed tomography (CT). The main objective is to show that the problem is so ill-posed that no classical method, whether analytical methods based on the inverse Radon transform or algebraic methods such as least squares (LS) or regularization theory, can give a satisfactory result. As an example, we consider in detail the case of image reconstruction from two projections, horizontal and vertical. We then show how a particular composite Markov model and the Bayesian estimation framework can propose satisfactory solutions to the problem. For demonstration and educational purposes, a set of Matlab programs is given for a live presentation of the results. ----- French (translated): This work, with a pedagogical aim, presents the inverse problem of image reconstruction in X-ray tomography when the number of projections is very limited; see the text in English and in French.
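    The ill-posedness discussed above can be seen in miniature: two distinct binary images (the classic "switching component") have identical horizontal and vertical projections, so no algorithm can tell them apart from those two projections alone and extra prior information (e.g. a Markov model) is needed.

```python
import numpy as np

# Two different 2x2 binary images with identical row and column sums.
A = np.array([[1, 0],
              [0, 1]])
B = np.array([[0, 1],
              [1, 0]])

proj_A = (A.sum(axis=0), A.sum(axis=1))   # vertical and horizontal projections
proj_B = (B.sum(axis=0), B.sum(axis=1))
```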

  1. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language

    National Research Council Canada - National Science Library

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-01-01

    .... In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments...

  2. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    Directory of Open Access Journals (Sweden)

    P. Faltin

    2010-01-01

    Full Text Available The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed by a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; difficulty of navigation is aggravated by the usage of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF-algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC-algorithm. Afterwards these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. Thus, these points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.
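    The final triangulation step can be sketched with a linear DLT solve (an assumed simplification of the pipeline above): a 3-D point is recovered from its matched projections in two views with known 3x4 projective camera matrices.

```python
import numpy as np

# Linear (DLT) triangulation: each image point contributes two rows of the
# homogeneous system A X = 0; the null vector of A is the 3-D point.
def triangulate(P1, P2, x1, x2):
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                            # homogeneous 3-D point
    return X[:3] / X[3]                   # de-homogenize

# synthetic check with two hypothetical cameras
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 5.0])
h = np.append(X_true, 1.0)
u1, u2 = P1 @ h, P2 @ h
x1, x2 = u1[:2] / u1[2], u2[:2] / u2[2]
X_rec = triangulate(P1, P2, x1, x2)
```

    With noiseless matches the DLT solution is exact; in practice the matched points from RANSAC-filtered correspondences carry noise and the SVD gives the least-squares null vector.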

  3. The GPlates Geological Information Model and Markup Language

    Directory of Open Access Journals (Sweden)

    X. Qin

    2012-07-01

    Full Text Available Understanding tectonic and geodynamic processes leading to the present-day configuration of the Earth involves studying data and models across a variety of disciplines, from geochemistry, geochronology and geophysics, to plate kinematics and mantle dynamics. All these data represent a 3-dimensional spatial and 1-dimensional temporal framework, a formalism which is not exploited by traditional spatial analysis tools. This is arguably a fundamental limit in both the rigour and sophistication in which datasets can be combined for geological "deep time" analysis, and often confines the extent of data analyses to the present-day configurations of geological objects. The GPlates Geological Information Model (GPGIM) represents a formal specification of geological and geophysical data in a time-varying plate tectonics context, used by the GPlates virtual-globe software. It provides a framework in which relevant types of geological data are attached to a common plate tectonic reference frame, allowing the data to be reconstructed in a time-dependent spatio-temporal plate reference frame. The GPlates Markup Language (GPML), being an extension of the open standard Geography Markup Language (GML), is both the modelling language for the GPGIM and an XML-based data format for the interoperable storage and exchange of data modelled by it. The GPlates software implements the GPGIM, allowing researchers to query, visualise, reconstruct and analyse a rich set of geological data including numerical raster data. The GPGIM has recently been extended to support time-dependent geo-referenced numerical raster data by wrapping GML primitives into the time-dependent framework of the GPGIM. Coupled with GPlates' ability to reconstruct numerical raster data and import/export from/to a variety of raster file formats, as well as its handling of time-dependent plate boundary topologies, interoperability with geodynamic software is established, leading to a new generation of deep...

  4. The GPlates Geological Information Model and Markup Language

    Directory of Open Access Journals (Sweden)

    X. Qin

    2012-10-01

    Full Text Available Understanding tectonic and geodynamic processes leading to the present-day configuration of the Earth involves studying data and models across a variety of disciplines, from geochemistry, geochronology and geophysics, to plate kinematics and mantle dynamics. All these data represent a 3-D spatial and 1-D temporal framework, a formalism which is not exploited by traditional spatial analysis tools. This is arguably a fundamental limit in both the rigour and sophistication in which datasets can be combined for geological deep time analysis, and often confines the extent of data analyses to the present-day configurations of geological objects. The GPlates Geological Information Model (GPGIM) represents a formal specification of geological and geophysical data in a time-varying plate tectonics context, used by the GPlates virtual-globe software. It provides a framework in which relevant types of geological data are attached to a common plate tectonic reference frame, allowing the data to be reconstructed in a time-dependent spatio-temporal plate reference frame. The GPlates Markup Language (GPML), being an extension of the open standard Geography Markup Language (GML), is both the modelling language for the GPGIM and an XML-based data format for the interoperable storage and exchange of data modelled by it. The GPlates software implements the GPGIM, allowing researchers to query, visualise, reconstruct and analyse a rich set of geological data including numerical raster data. The GPGIM has recently been extended to support time-dependent geo-referenced numerical raster data by wrapping GML primitives into the time-dependent framework of the GPGIM. Coupled with GPlates' ability to reconstruct numerical raster data and import/export from/to a variety of raster file formats, as well as its handling of time-dependent plate boundary topologies, interoperability with geodynamic software is established, leading to a new generation of deep-time spatio...

  5. Distortion-Invariant Binary Image Recognition Based on Central Projection Correlation

    Institute of Scientific and Technical Information of China (English)

    WU Yaming; XIAO Yanping; SUN Fanghong; FANG Nian

    2001-01-01

    A method of central projection correlation which is invariant to distortion of shift, scale and rotation of the binary target image is proposed. A 2-D binary image is transformed into an 1-D central projection referring to the centroid of the binary image. The distortion-invariant central projection correlation is successfully performed by computer simulations and its optical implementation is presented.
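    An assumed discretization of the central projection idea can be sketched as follows: for each angle, sample along a ray from the centroid of the binary image and count object pixels, giving a 1-D signature that is shift-invariant because it is anchored to the centroid.

```python
import numpy as np

# Central projection signature of a binary image: one value per ray angle,
# measured relative to the object centroid.
def central_projection(img, n_angles=90, n_steps=60):
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()                     # centroid of the object
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(0, max(img.shape), n_steps)
    sig = np.zeros(n_angles)
    for i, a in enumerate(angles):
        y = np.round(cy + radii * np.sin(a)).astype(int)
        x = np.round(cx + radii * np.cos(a)).astype(int)
        ok = (y >= 0) & (y < img.shape[0]) & (x >= 0) & (x < img.shape[1])
        sig[i] = img[y[ok], x[ok]].sum()              # count object pixels on the ray
    return sig

img = np.zeros((64, 64), dtype=int)
img[20:44, 20:44] = 1                                 # square test object
shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)
```

    Because the rays are anchored to the centroid, a shifted copy of the object yields the same 1-D signature; rotation and scale would appear as a circular shift and a stretch of the signature, which is what the correlation stage exploits.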

  6. QUESTION ANSWERING SYSTEM BASED ON ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE AS AN INFORMATION MEDIUM

    Directory of Open Access Journals (Sweden)

    Fajrin Azwary

    2016-04-01

    Full Text Available Artificial intelligence technology nowadays can take a variety of forms, such as a chatbot, and use various methods, one of them being the Artificial Intelligence Markup Language (AIML). AIML uses template matching, comparing the input against specific patterns in a database. The AIML template design process begins with determining the necessary information, which is then formed into questions, and these questions are adapted to the AIML pattern format. The results of the study show that a Question-Answering System in the form of a chatbot using the Artificial Intelligence Markup Language is able to communicate and deliver information. Keywords: Artificial Intelligence, Template Matching, Artificial Intelligence Markup Language, AIML
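    The template-matching mechanism can be illustrated with a toy matcher (a simplified sketch, not a real AIML interpreter): categories map a pattern with an optional "*" wildcard to a response template, and the first pattern matching the normalized input wins. The categories below are invented examples.

```python
import re

# AIML-style categories: pattern -> response template ("*" is a wildcard,
# ordered so that the catch-all pattern comes last).
categories = {
    "WHAT IS AIML": "AIML is the Artificial Intelligence Markup Language.",
    "MY NAME IS *": "Nice to meet you, {0}!",
    "*": "I do not understand yet.",
}

def respond(user_input):
    # normalize the way AIML engines do: uppercase, strip punctuation
    text = re.sub(r"[^A-Z0-9 ]", "", user_input.upper()).strip()
    for pattern, template in categories.items():
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, text)
        if match:
            return template.format(*match.groups())
    return None
```

    Wildcard captures are substituted into the template, which is how AIML echoes user-provided fragments back in its answers.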

  7. Geometry Description Markup Language for Physics Simulation And Analysis Applications.

    Energy Technology Data Exchange (ETDEWEB)

    Chytracek, R.; /CERN; McCormick, J.; /SLAC; Pokorski, W.; /CERN; Santin, G.; /European Space Agency

    2007-01-23

    The Geometry Description Markup Language (GDML) is a specialized XML-based language designed as an application-independent persistent format for describing the geometries of detectors associated with physics measurements. It serves to implement "geometry trees" which correspond to the hierarchy of volumes a detector geometry can be composed of, to identify the positions of individual solids, and to describe the materials they are made of. Being pure XML, GDML can be universally used, and in particular it can be considered as the format for interchanging geometries among different applications. In this paper we present the current status of the development of GDML. After discussing the contents of the latest GDML schema, which is the basic definition of the format, we concentrate on the GDML processors. We present the latest implementation of the GDML "writers" as well as "readers" for either Geant4 [2], [3] or ROOT [4], [10].

  8. The basics of CrossRef extensible markup language

    Directory of Open Access Journals (Sweden)

    Rachael Lammey

    2014-08-01

    Full Text Available CrossRef is an association of scholarly publishers that develops shared infrastructure to support more effective scholarly communications. Launched in 2000, CrossRef’s citation-linking network today covers over 68 million journal articles and other content items (book chapters, data, theses, and technical reports) from thousands of scholarly and professional publishers around the globe. CrossRef has over 4,000 member publishers who join in order to avail of a number of CrossRef services, reference linking via the Digital Object Identifier (DOI) being the core service. To deposit CrossRef DOIs, publishers and editors need to become familiar with the basics of extensible markup language (XML). This article will give an introduction to CrossRef XML and what publishers need to do in order to start to deposit DOIs with CrossRef and thus ensure their publications are discoverable and can be linked to consistently in an online environment.
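    The shape of such a deposit can be sketched with the standard library. Illustrative only: the element names and values below are simplified placeholders in the spirit of a CrossRef DOI deposit, not the official CrossRef schema, which members should take from CrossRef's own documentation.

```python
import xml.etree.ElementTree as ET

# Build a minimal DOI-deposit-like XML fragment: a title plus a doi_data
# block pairing the DOI with the resource URL it should resolve to.
record = ET.Element("journal_article")
ET.SubElement(record, "title").text = "The basics of CrossRef extensible markup language"
doi_data = ET.SubElement(record, "doi_data")
ET.SubElement(doi_data, "doi").text = "10.1234/example.doi"      # placeholder DOI
ET.SubElement(doi_data, "resource").text = "https://example.org/article/1"

xml_text = ET.tostring(record, encoding="unicode")
```

    The core of any real deposit is the same pairing shown here: a DOI and the URL it resolves to, wrapped in bibliographic metadata.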

  9. Extensions to the Dynamic Aerospace Vehicle Exchange Markup Language

    Science.gov (United States)

    Brian, Geoffrey J.; Jackson, E. Bruce

    2011-01-01

    The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) is a syntactical language for exchanging flight vehicle dynamic model data. It provides a framework for encoding entire flight vehicle dynamic model data packages for exchange and/or long-term archiving. Version 2.0.1 of DAVE-ML provides much of the functionality envisioned for exchanging aerospace vehicle data; however, it is limited in only supporting scalar time-independent data. Additional functionality is required to support vector and matrix data, abstracting sub-system models, detailing dynamics system models (both discrete and continuous), and defining a dynamic data format (such as time sequenced data) for validation of dynamics system models and vehicle simulation packages. Extensions to DAVE-ML have been proposed to manage data as vectors and n-dimensional matrices, and record dynamic data in a compatible form. These capabilities will improve the clarity of data being exchanged, simplify the naming of parameters, and permit static and dynamic data to be stored using a common syntax within a single file; thereby enhancing the framework provided by DAVE-ML for exchanging entire flight vehicle dynamic simulation models.

  10. Evolution of Web-Based Applications Using Domain-Specific Markup Languages

    Directory of Open Access Journals (Sweden)

    Guntram Graef

    2000-11-01

    Full Text Available The lifecycle of Web-based applications is characterized by frequent changes to content, user interface, and functionality. Updating content and improving the services provided to users drive further development of a Web-based application. The major goal for the success of a Web-based application is therefore its evolution. However, development and maintenance of Web-based applications suffer from the underlying document-based implementation model. A disciplined evolution of Web-based applications requires the application of software engineering practice for systematic further development and reuse of software artifacts. In this contribution we suggest adopting the component paradigm for the development and evolution of Web-based applications. The approach is based on a dedicated component technology and component-software architecture. It allows abstracting from many technical aspects related to the Web as an application platform by introducing domain-specific markup languages. These languages allow the description of services, which represent domain components in our Web-component-software approach. Domain experts with limited knowledge of technical details can therefore describe application functionality, and the evolution of orthogonal aspects of the application can be decoupled. The whole approach is based on XML to achieve the necessary standardization and economic efficiency for use in real-world projects.

  11. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    Science.gov (United States)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  12. An adaptive filtered back-projection for photoacoustic image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong, E-mail: jingyong.ye@utsa.edu [Department of Biomedical Engineering, University of Texas at San Antonio, San Antonio, Texas 78249 (United States)

    2015-05-15

    Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres with each sphere having different absorbances. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between the neighbored transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing
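    The filtering step at the heart of any FBP variant can be sketched generically (the paper's photoacoustic weighting function is not reproduced here): each projection is multiplied by a ramp filter in the Fourier domain, with a low-pass cutoff that suppresses the amplified high-frequency noise.

```python
import numpy as np

# Ramp-filter a 1-D projection in the Fourier domain; the cutoff plays the
# role of the low-pass filter whose selection the paper makes adaptive.
def ramp_filter(projection, cutoff=0.25):
    freqs = np.fft.fftfreq(projection.size)        # cycles/sample in [-0.5, 0.5)
    ramp = np.abs(freqs)
    ramp[np.abs(freqs) > cutoff] = 0.0             # low-pass cutoff
    return np.fft.ifft(np.fft.fft(projection) * ramp).real

flat = ramp_filter(np.ones(64))                    # constant projection
edge = ramp_filter(np.r_[np.zeros(32), np.ones(32)])
```

    The ramp zeroes the DC term, so a constant projection filters to zero while an edge produces the sharp response that back-projection accumulates into image detail.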

  13. Scene projection technology development for imaging sensor testing at AEDC

    Science.gov (United States)

    Lowry, H.; Fedde, M.; Crider, D.; Horne, H.; Bynum, K.; Steely, S.; Labello, J.

    2012-06-01

    Arnold Engineering Development Center (AEDC) is tasked with visible-to-LWIR imaging sensor calibration and characterization, as well as hardware-in-the-loop (HWIL) testing with high-fidelity complex scene projection to validate sensor mission performance. They are thus involved in the development of technologies and methodologies that are used in space simulation chambers for such testing. These activities support a variety of program needs such as space situational awareness (SSA). This paper provides an overview of pertinent technologies being investigated and implemented at AEDC.

  14. Displaying perfusion MRI images as color intensity projections

    CERN Document Server

    Hoefnagels, Friso; Sanchez, Ester; Lagerwaard, Frank J

    2007-01-01

    Dynamic susceptibility-weighted contrast-enhanced (DSC) MRI, or perfusion MRI, plays an important role in the non-invasive assessment of tumor vascularity. However, the large number of images produced by the method makes display and interpretation of the results challenging. Current practice is to display the perfusion information as relative cerebral blood volume (rCBV) maps. Color intensity projections (CIPs) provide a simple, intuitive display of the perfusion-MRI data so that regional perfusion characteristics are intrinsically integrated into the anatomical structure of the T2 images. The ease of use and quick calculation time of CIPs should allow them to be easily integrated into current analysis and interpretation pipelines.
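    A CIP-style collapse of a time series can be sketched as follows (an assumed simplification of the display): the perfusion series (t, y, x) is reduced to one image whose brightness is the temporal mean and whose color channel encodes the time of peak enhancement, so timing information overlays the anatomy in a single frame.

```python
import numpy as np

# Synthetic bolus passage: the left half of the slice enhances early (t=3),
# the right half late (t=7); each voxel follows a Gaussian enhancement curve.
t, h, w = 12, 8, 8
times = np.arange(t)[:, None, None]
peak = np.tile(np.where(np.arange(w) < w // 2, 3, 7), (h, 1))
series = np.exp(-0.5 * (times - peak) ** 2)

brightness = series.mean(axis=0)       # grayscale component of the display
time_to_peak = series.argmax(axis=0)   # would be mapped to hue in a CIP
```

    The two maps together replace dozens of time frames with a single color image.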

  15. Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm

    KAUST Repository

    Desmal, Abdulla

    2017-04-03

    An efficient electromagnetic inversion scheme for imaging sparse 3-D domains is proposed. The scheme achieves its efficiency and accuracy by integrating two concepts. First, the nonlinear optimization problem is constrained using L₀ or L₁-norm of the solution as the penalty term to alleviate the ill-posedness of the inverse problem. The resulting Tikhonov minimization problem is solved using nonlinear Landweber iterations (NLW). Second, the efficiency of the NLW is significantly increased using a steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without sacrificing the convergence of the algorithm. Numerical results demonstrate the efficiency and accuracy of the proposed imaging scheme in reconstructing sparse 3-D dielectric profiles.
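    The projected-descent structure can be sketched on a linear toy problem (the paper's problem is nonlinear EM inversion; only the iteration structure is kept): a Landweber/gradient step on the data misfit is followed by a hard threshold that keeps the k largest entries, enforcing the sparsity constraint at every iteration.

```python
import numpy as np

# Thresholded Landweber / projected steepest descent on ||A x - y||^2
# with a k-sparsity constraint (iterative hard thresholding).
rng = np.random.default_rng(3)
m, n, k = 40, 100, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [1.5, -2.0, 1.0]
y = A @ x_true

x = np.zeros(n)
step = 0.9 / np.linalg.norm(A, 2) ** 2     # step below 1/L keeps iterations stable
for _ in range(500):
    x = x + step * A.T @ (y - A @ x)       # steepest-descent (Landweber) step
    keep = np.argsort(np.abs(x))[-k:]      # projection: hard-threshold to k entries
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    x[~mask] = 0.0

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

    As the abstract notes, the threshold level and step size must be chosen together: too large a step diverges, too aggressive a threshold discards true support entries before the gradient steps can grow them.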

  16. Spacecraft design project: High temperature superconducting infrared imaging satellite

    Science.gov (United States)

    1991-01-01

    The High Temperature Superconductor Infrared Imaging Satellite (HTSCIRIS) is designed to perform the space based infrared imaging and surveillance mission. The design of the satellite follows the black box approach. The payload is a stand alone unit, with the spacecraft bus designed to meet the requirements of the payload as listed in the statement of work. Specifications influencing the design of the spacecraft bus were originated by the Naval Research Lab. A description of the following systems is included: spacecraft configuration, orbital dynamics, radio frequency communication subsystem, electrical power system, propulsion, attitude control system, thermal control, and structural design. The issues of testing and cost analysis are also addressed. This design project was part of the course Advanced Spacecraft Design taught at the Naval Postgraduate School.

  17. Final Project Report: Imaging Fault Zones Using a Novel Elastic Reverse-Time Migration Imaging Technique

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chen, Ting [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tan, Sirui [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lin, Youzuo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gao, Kai [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-10

Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field obtained using our seismic inversion and migration imaging algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results of 3D surface seismic data from the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada, Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.
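At the heart of reverse-time migration is the zero-lag cross-correlation imaging condition: the forward-propagated source wavefield is multiplied with the back-propagated receiver wavefield and summed over time, so the image lights up where the two coincide. The sketch below illustrates that condition on toy wavefield snapshots; it is a schematic stand-in, not the Lab's elastic 3D implementation.

```python
import numpy as np

def crosscorrelation_image(src_wavefield, rec_wavefield):
    """Zero-lag cross-correlation imaging condition for reverse-time
    migration: sum over time of the source wavefield times the
    back-propagated receiver wavefield. Arrays are (nt, nx, nz) snapshots."""
    return np.sum(src_wavefield * rec_wavefield, axis=0)

# Toy example: a reflector appears where both wavefields coincide in time.
nt, nx, nz = 50, 8, 8
src = np.zeros((nt, nx, nz))
rec = np.zeros((nt, nx, nz))
src[20, 4, 5] = 1.0   # forward-propagated source energy at the reflector
rec[20, 4, 5] = 1.0   # time-reversed receiver energy at the same point and time
image = crosscorrelation_image(src, rec)
```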

  18. Systematic reconstruction of TRANSPATH data into cell system markup language.

    Science.gov (United States)

    Nagasaki, Masao; Saito, Ayumu; Li, Chen; Jeong, Euna; Miyano, Satoru

    2008-06-23

Many biological repositories store information based on experimental study of the biological processes within a cell, such as protein-protein interactions, metabolic pathways, signal transduction pathways, or regulations of transcription factors and miRNA. Unfortunately, it is difficult to directly use such information when generating simulation-based models. Thus, modeling rules for encoding biological knowledge into system-dynamics-oriented standardized formats would be very useful for fully understanding cellular dynamics at the system level. We selected the TRANSPATH database, a manually curated high-quality pathway database, which provides a plentiful source of cellular events in humans, mice, and rats, collected from over 31,500 publications. In this work, we have developed 16 modeling rules based on hybrid functional Petri net with extension (HFPNe), which is suitable for graphically representing and simulating biological processes. In the modeling rules, each Petri net element is annotated with the Cell System Ontology (CSO) to enable semantic interoperability of models. As a formal ontology for biological pathway modeling with dynamics, CSO also defines biological terminology and corresponding icons. By combining HFPNe with the CSO features, it is possible to convert TRANSPATH data into simulation-based and semantically valid models. The results are encoded into a biological pathway format, Cell System Markup Language (CSML), which eases the exchange and integration of biological data and models. By using the 16 modeling rules, 97% of the reactions in TRANSPATH are converted into simulation-based models represented in CSML. This reconstruction demonstrates that it is possible to use our rules to generate quantitative models from static pathway descriptions.
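The executable representation targeted by such modeling rules is, at its core, a Petri net: places hold tokens (molecular quantities) and transitions (reactions) fire when their inputs are available. The toy sketch below shows a single discrete firing step; HFPNe itself is far richer (continuous and generic processes), so treat this purely as an illustration of the idea.

```python
# Minimal discrete Petri net step. A transition is (inputs, outputs), each a
# dict mapping place name -> token count. This is a toy illustration of the
# executable-model idea, not HFPNe itself.
def fire(marking, transition):
    """Fire a transition if every input place holds enough tokens;
    otherwise return the marking unchanged."""
    inputs, outputs = transition
    if all(marking.get(p, 0) >= n for p, n in inputs.items()):
        new = dict(marking)
        for p, n in inputs.items():
            new[p] -= n
        for p, n in outputs.items():
            new[p] = new.get(p, 0) + n
        return new
    return marking  # transition not enabled

# An A -> B conversion reaction encoded as a transition.
marking = {"A": 2, "B": 0}
reaction = ({"A": 1}, {"B": 1})
marking = fire(marking, reaction)
```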

  19. Remote Imaging Projects In Research And Astrophotography With Starpals

    Science.gov (United States)

    Fischer, Audrey; Kingan, J.

    2008-05-01

    StarPals is a nascent non-profit organization with the goal of providing opportunities for international collaboration between students of all ages within space science research. We believe that by encouraging an interest in the cosmos, the one thing that is truly Universal, from a young age, students will not only further their knowledge of and interest in science but will learn valuable teamwork and life skills. The goal is to foster respect, understanding and appreciation of cultural diversity among all StarPals participants, whether students, teachers, or mentors. StarPals aims to inspire students by providing opportunities in which, more than simply visualizing themselves as research scientists, they can actually become one. The technologies of robotic telescopes, videoconferencing, and online classrooms are expanding the possibilities like never before. In honor of IYA2009, StarPals would like to encourage 400 schools to participate on a global scale in astronomy/cosmology research on various concurrent projects. We will offer in-person or online workshops and training sessions to teach the teachers. We will be seeking publication in scientific journals for some student research. For our current project, the Double Stars Challenge, students use the robotic telescopes to take a series of four images of one of 30 double stars from a list furnished by the US Naval Observatory and then use MPO Canopus software to take distance and position angle measurements. StarPals provides students with hands-on training, telescope time, and software to complete the imaging and measuring. A paper will be drafted from our research data and submitted to the Journal of Double Star Observations. The kids who participate in this project may potentially be the youngest contributors to an article in a vetted scientific journal. Kids rapidly adapt and improve their computer skills operating these telescopes and discover for themselves that science is COOL!

  20. A methodology to annotate systems biology markup language models with the synthetic biology open language.

    Science.gov (United States)

    Roehner, Nicholas; Myers, Chris J

    2014-02-21

Recently, we have begun to witness the potential of synthetic biology, noted here in the form of bacteria and yeast that have been genetically engineered to produce biofuels, manufacture drug precursors, and even invade tumor cells. The success of these projects, however, has often failed to carry over to new projects, a problem exacerbated by a lack of engineering standards that combine descriptions of the structure and function of DNA. To address this need, this paper describes a methodology to connect the systems biology markup language (SBML) to the synthetic biology open language (SBOL), existing standards that describe biochemical models and DNA components, respectively. Our methodology involves first annotating SBML model elements such as species and reactions with SBOL DNA components. A graph is then constructed from the model, with vertices corresponding to elements within the model and edges corresponding to the cause-and-effect relationships between these elements. Lastly, the graph is traversed to assemble the annotating DNA components into a composite DNA component, which is used to annotate the model itself and can be referenced by other composite models and DNA components. In this way, our methodology can be used to build up a hierarchical library of models annotated with DNA components. Such a library is a useful input to any future genetic technology mapping algorithm that would automate the process of composing DNA components to satisfy a behavioral specification. Our methodology for SBML-to-SBOL annotation is implemented in the latest version of our genetic design automation (GDA) software tool, iBioSim.
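The graph-then-traverse step of the methodology can be sketched with a topological sort: annotated model elements become vertices, cause-and-effect relationships become edges, and traversing the graph in dependency order yields the ordered composite of DNA components. The element and component names below are illustrative placeholders, and this is not the iBioSim API.

```python
from graphlib import TopologicalSorter

# SBML model elements annotated with SBOL DNA components (element -> component).
annotations = {"pLac": "promoter1", "lacI": "cds1", "term": "terminator1"}

# Edges encode cause-and-effect order as node -> set of predecessors:
# the promoter acts before the coding sequence, which acts before the terminator.
edges = {"lacI": {"pLac"}, "term": {"lacI"}}

# Traverse the graph in dependency order and assemble the composite component.
order = list(TopologicalSorter(edges).static_order())
composite = [annotations[e] for e in order]
```

The resulting `composite` list stands in for the composite DNA component that would annotate the model itself and be referenced by other models.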

  1. Multiview Discriminative Geometry Preserving Projection for Image Classification

    Directory of Open Access Journals (Sweden)

    Ziqiang Wang

    2014-01-01

In many image classification applications, it is common to extract multiple visual features from different views to describe an image. Since different visual features have their own specific statistical properties and discriminative powers for image classification, the conventional solution for multiple-view data is to concatenate these feature vectors into a new feature vector. However, this simple concatenation strategy not only ignores the complementary nature of different views, but also suffers from the "curse of dimensionality." To address this problem, we propose a novel multiview subspace learning algorithm in this paper, named multiview discriminative geometry preserving projection (MDGPP), for feature extraction and classification. MDGPP can not only preserve the intraclass geometry and interclass discrimination information under a single view, but also exploit the complementary properties of different views to obtain a low-dimensional optimal consensus embedding by using an alternating-optimization-based iterative algorithm. Experimental results on face recognition and facial expression recognition demonstrate the effectiveness of the proposed algorithm.
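The general idea of a consensus embedding, as opposed to plain concatenation, can be illustrated with a heavily simplified sketch: project each view separately, then combine the per-view embeddings with data-driven weights. This toy version uses each view's top principal direction and weights views by captured variance; it is a stand-in for the idea only, not the MDGPP algorithm or its alternating optimization.

```python
import numpy as np

def consensus_embedding(views):
    """Combine per-view 1D embeddings into a weighted consensus.

    Each view X is an (n_samples, n_features) array. The per-view
    projection here is the top principal direction, and a view's weight
    is proportional to the variance that direction captures.
    """
    embeddings, captured = [], []
    for X in views:
        Xc = X - X.mean(axis=0)
        _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        embeddings.append(Xc @ Vt[0])   # 1D embedding of this view
        captured.append(s[0] ** 2)      # variance captured by the projection
    weights = np.array(captured) / np.sum(captured)
    consensus = sum(w * e for w, e in zip(weights, embeddings))
    return consensus, weights

rng = np.random.default_rng(1)
views = [rng.standard_normal((20, 5)), rng.standard_normal((20, 8))]
consensus, weights = consensus_embedding(views)
```

Note how the consensus stays low-dimensional regardless of how many feature views are added, which is exactly what concatenation fails to do.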

  2. Implementation of GPU-accelerated back projection for EPR imaging.

    Science.gov (United States)

    Qiao, Zhiwei; Redler, Gage; Epel, Boris; Qian, Yuhua; Halpern, Howard

    2015-01-01

Electron paramagnetic resonance imaging (EPRI) is a robust method for measuring in vivo oxygen concentration (pO2). For 3D pulse EPRI, a commonly used reconstruction algorithm is filtered backprojection (FBP), in which the backprojection step is computationally intensive and may be time consuming when implemented on a CPU. A multistage implementation of the backprojection can be used for acceleration; however, it is not flexible (it requires an equal linear angle projection distribution) and may still be time consuming. In this work, single-stage backprojection is implemented on a GPU (graphics processing unit) with 1152 cores to accelerate the process. The GPU implementation results in acceleration by over a factor of 200 overall, and by over a factor of 3500 if only the computing time is considered. Some important lessons regarding the implementation of GPU-accelerated backprojection for EPRI are summarized. The resulting accelerated image reconstruction is useful for real-time image reconstruction monitoring and other time-sensitive applications.
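The loop that GPU implementations parallelize is easy to see in a CPU reference version of pixel-driven backprojection: every output pixel accumulates independently across views, so each pixel can map to one GPU thread. The 2D parallel-beam sketch below (with nearest-neighbor detector lookup) is an illustration of the operation, not the paper's 3D CUDA code.

```python
import numpy as np

def backproject(sinogram, angles, n):
    """Pixel-driven 2D parallel-beam backprojection (CPU reference).
    sinogram has shape (n_angles, n_detectors); returns an (n, n) image.
    Every output pixel is independent across views, which is what makes
    the operation embarrassingly parallel on a GPU."""
    n_det = sinogram.shape[1]
    xs = np.arange(n) - (n - 1) / 2.0
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    image = np.zeros((n, n))
    for a, theta in enumerate(angles):
        # Detector coordinate of each pixel for this view (nearest neighbor).
        t = X * np.cos(theta) + Y * np.sin(theta) + (n_det - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        image += sinogram[a, idx]
    return image

# Uniform projections backproject to a uniform image (one unit per view).
angles = np.linspace(0, np.pi, 90, endpoint=False)
sino = np.ones((90, 64))
img = backproject(sino, angles, 64)
```

A filtered backprojection would apply a ramp filter to each projection row before this accumulation step.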

  3. ART-ML: a new markup language for modelling and representation of biological processes in cardiovascular diseases.

    Science.gov (United States)

    Karvounis, E C; Exarchos, T P; Fotiou, E; Sakellarios, A I; Iliopoulou, D; Koutsouris, D; Fotiadis, D I

    2013-01-01

With an ever increasing number of biological models available on the internet, a standardized modelling framework is required to allow information to be accessed and visualized. In this paper we propose a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of geometry, blood flow, plaque progression and stent modelling, exported by any cardiovascular disease modelling software. ART-ML has been developed and tested using ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in 3D representations. All the above described procedures integrate disparate data formats, protocols and tools. Extending ARTool, ART-ML proposes a representation that makes the individual resources interoperable, creating a standard unified model for the description of data and, consequently, a machine-independent format for their exchange and representation. More specifically, the ARTool platform incorporates efficient algorithms which are able to perform blood flow simulations and atherosclerotic plaque evolution modelling. Integration of data layers between different modules within ARTool is based upon the interchange of information included in the ART-ML model repository. ART-ML provides a markup representation that enables the representation and management of embedded models within the cardiovascular disease modelling platform, as well as the storage and interchange of well-defined information. The corresponding ART-ML model incorporates all relevant information regarding geometry, blood flow, plaque progression and stent modelling procedures. All created models are stored in a model repository database which is accessible to the research community through efficient web interfaces, enabling the interoperability of any cardiovascular disease modelling software.

  4. Development of clinical contents model markup language for electronic health records.

    Science.gov (United States)

    Yun, Ji-Hyun; Ahn, Sun-Ju; Kim, Yoon

    2012-09-01

To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Based on an analysis of the structure and characteristics of CCM in the clinical domain, we manually designed an extensible markup language (XML) based CCM markup language (CCML) schema. CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems.
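The "no dedicated parser" claim follows directly from CCML being plain XML: any standard XML tool chain can read a model. The snippet below makes that concrete with Python's standard library; the element names are illustrative placeholders, not the published CCML schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical clinical contents model expressed as plain XML.
# Element and attribute names here are invented for illustration.
ccm_doc = """
<clinicalContentModel id="ccm-bp">
  <name>Blood pressure measurement</name>
  <element code="systolic" unit="mmHg"/>
  <element code="diastolic" unit="mmHg"/>
</clinicalContentModel>
"""

# The stock XML parser is all that is needed -- no CCML-specific tooling.
root = ET.fromstring(ccm_doc)
codes = [e.get("code") for e in root.findall("element")]
```

The same document remains directly readable by a human, which is the other strength the abstract emphasizes.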

  5. National Land Imaging Requirements (NLIR) Pilot Project summary report: summary of moderate resolution imaging user requirements

    Science.gov (United States)

    Vadnais, Carolyn; Stensaas, Gregory

    2014-01-01

Under the National Land Imaging Requirements (NLIR) Project, the U.S. Geological Survey (USGS) is developing a functional capability to obtain, characterize, manage, maintain and prioritize all Earth observing (EO) land remote sensing user requirements. The goal is a better understanding of community needs that can be supported with land remote sensing resources, and a means to match needs with appropriate solutions in an effective and efficient way. The NLIR Project is composed of two components. The first component is focused on the development of the Earth Observation Requirements Evaluation System (EORES) to capture, store and analyze user requirements, whereas the second component is the mechanism and processes to elicit and document the user requirements that will populate the EORES. To develop the second component, the requirements elicitation methodology was exercised and refined through a pilot project conducted from June to September 2013. The pilot project focused specifically on applications and user requirements for moderate resolution imagery (5–120 meter resolution) as the test case for requirements development. The purpose of this summary report is to provide a high-level overview of the requirements elicitation process that was exercised through the pilot project and an early analysis of the moderate resolution imaging user requirements acquired to date to support ongoing USGS sustainable land imaging study needs. The pilot project engaged a limited set of Federal Government users from the operational and research communities, and therefore the information captured represents only a subset of all land imaging user requirements. However, based on a comparison of results, trends, and analysis, the pilot captured a strong baseline of typical application areas and user needs for moderate resolution imagery. Because these results are preliminary and represent only a sample of users and application areas, the information from this report should only

  6. Advances in aircraft design: Multiobjective optimization and a markup language

    Science.gov (United States)

    Deshpande, Shubhangi

Today's modern aerospace systems exhibit strong interdisciplinary coupling and require a multidisciplinary, collaborative approach. Analysis methods that were once considered feasible only for advanced and detailed design are now available and even practical at the conceptual design stage. This changing philosophy for conducting conceptual design poses additional challenges beyond those encountered in a low fidelity design of aircraft. This thesis takes some steps towards bridging the gaps in existing technologies and advancing the state-of-the-art in aircraft design. The first part of the thesis proposes a new Pareto front approximation method for multiobjective optimization problems. The method employs a hybrid optimization approach using two derivative free direct search techniques, and is intended for solving black-box simulation-based multiobjective optimization problems with possibly nonsmooth functions where the analytical form of the objectives is not known and/or the evaluation of the objective function(s) is very expensive (very common in multidisciplinary design optimization). A new adaptive weighting scheme is proposed to convert a multiobjective optimization problem to a single objective optimization problem. Results show that the method achieves an arbitrarily close approximation to the Pareto front with a good collection of well-distributed nondominated points. The second part deals with the interdisciplinary data communication issues involved in a collaborative multidisciplinary aircraft design environment. Efficient transfer, sharing, and manipulation of design and analysis data in a collaborative environment demands a formal structured representation of data. XML, a W3C recommendation, is one such standard concomitant with a number of powerful capabilities that alleviate interoperability issues. A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to provide a common language for data

  7. SuML: A Survey Markup Language for Generalized Survey Encoding

    Science.gov (United States)

    Barclay, MW; Lober, WB; Karras, BT

    2002-01-01

    There is a need in clinical and research settings for a sophisticated, generalized, web based survey tool that supports complex logic, separation of content and presentation, and computable guidelines. There are many commercial and open source survey packages available that provide simple logic; few provide sophistication beyond “goto” statements; none support the use of guidelines. These tools are driven by databases, static web pages, and structured documents using markup languages such as eXtensible Markup Language (XML). We propose a generalized, guideline aware language and an implementation architecture using open source standards.

  8. Free Trade Agreements and Firm-Product Markups in Chilean Manufacturing

    DEFF Research Database (Denmark)

    Lamorgese, A.R.; Linarello, A.; Warzynski, Frederic Michel Patrick

In this paper, we use detailed information about firms' product portfolios to study how trade liberalization affects prices, markups and productivity. We document these effects using firm-product level data in Chilean manufacturing following two major trade agreements with the EU and the US… at the firm-product level. On average, adjustment on the profit margin does not appear to play a role. However, for more differentiated products, we find some evidence of an increase in markups, suggesting that firms do not fully pass through increases in productivity to prices whenever they have enough…

  9. On multigrid methods for image reconstruction from projections

    Energy Technology Data Exchange (ETDEWEB)

    Henson, V.E.; Robinson, B.T. [Naval Postgraduate School, Monterey, CA (United States); Limber, M. [Simon Fraser Univ., Burnaby, British Columbia (Canada)

    1994-12-31

The sampled Radon transform of a 2D function can be represented as a continuous linear map R : L¹ → ℝᴺ. The image reconstruction problem is: given a vector b ∈ ℝᴺ, find an image (or density function) u(x, y) such that Ru = b. Since in general there are infinitely many solutions, the authors pick the solution with minimal 2-norm. Numerous proposals have been made regarding how best to discretize this problem. One can, for example, select a set of functions φⱼ that span a particular subspace Ω ⊂ L¹, and model R : Ω → ℝᴺ. The subspace Ω may be chosen as a member of a sequence of subspaces whose limit is dense in L¹. One approach to the choice of Ω gives rise to a natural pixel discretization of the image space. Two possible choices of the set φⱼ are the set of characteristic functions of finite-width "strips" representing energy transmission paths and the set of intersections of such strips. The authors have studied the eigenstructure of the matrices B resulting from these choices and the effect of applying a Gauss-Seidel iteration to the problem Bw = b. There exists a near null space into which the error vectors migrate with iteration, after which Gauss-Seidel iteration stalls. The authors attempt to accelerate convergence via a multilevel scheme, based on the principles of McCormick's Multilevel Projection Method (PML). Coarsening is achieved by thickening the rays, which results in a much smaller discretization of an optimal grid and a halving of the number of variables. This approach satisfies all the requirements of the PML scheme. They have observed that a multilevel approach based on this idea accelerates convergence at least to the point where noise in the data dominates.
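The smoother at the center of this discussion is a plain Gauss-Seidel sweep for Bw = b, sketched below on a small symmetric positive definite system (an illustrative toy, not the authors' strip-discretized Radon matrices). The near-null-space stall the abstract describes shows up in practice as error components along small-eigenvalue directions that barely decay per sweep, which is exactly what the multilevel scheme is built to handle.

```python
import numpy as np

def gauss_seidel(B, b, w0, n_iter):
    """Plain Gauss-Seidel sweeps for B w = b (B square, nonzero diagonal).
    Each sweep updates w[i] using the already-updated entries w[:i]."""
    w = w0.copy()
    n = len(b)
    for _ in range(n_iter):
        for i in range(n):
            w[i] = (b[i] - B[i, :i] @ w[:i] - B[i, i + 1:] @ w[i + 1:]) / B[i, i]
    return w

# Small SPD model problem (1D Laplacian); solution is [1, 1, 1].
B = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])
w = gauss_seidel(B, b, np.zeros(3), n_iter=50)
```

On larger, more ill-conditioned systems the same loop stalls once the error is dominated by near-null-space modes, motivating the coarse-grid correction described above.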

  10. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    Science.gov (United States)

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. 
Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research
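The four things a SED-ML document specifies (which models to use, which simulation procedures to run, tasks tying the two together, and what to output) map onto the Level 1 Version 1 skeleton sketched below with Python's standard XML tooling. The attribute values are illustrative, and the element set is abbreviated relative to the full specification.

```python
import xml.etree.ElementTree as ET

# Minimal SED-ML-style skeleton (abbreviated; values are illustrative).
sed = ET.Element("sedML", level="1", version="1")

models = ET.SubElement(sed, "listOfModels")
ET.SubElement(models, "model", id="m1", source="model.xml")

sims = ET.SubElement(sed, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", id="s1",
              initialTime="0", outputStartTime="0",
              outputEndTime="100", numberOfPoints="1000")

# A task binds one model to one simulation procedure.
tasks = ET.SubElement(sed, "listOfTasks")
ET.SubElement(tasks, "task", id="t1",
              modelReference="m1", simulationReference="s1")

outputs = ET.SubElement(sed, "listOfOutputs")
ET.SubElement(outputs, "plot2D", id="p1")

doc = ET.tostring(sed, encoding="unicode")
```

Because the description references the model by source rather than embedding it, the same document can drive any simulation tool that supports the model's own language.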

  11. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    Science.gov (United States)

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. 
Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from

  12. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    Directory of Open Access Journals (Sweden)

    Waltemath Dagmar

    2011-12-01

Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about the exact modeling language(s) used

  13. The evolution of the CUAHSI Water Markup Language (WaterML)

    Science.gov (United States)

    Zaslavsky, I.; Valentine, D.; Maidment, D.; Tarboton, D. G.; Whiteaker, T.; Hooper, R.; Kirschtel, D.; Rodriguez, M.

    2009-04-01

The CUAHSI Hydrologic Information System (HIS, his.cuahsi.org) uses web services as the core data exchange mechanism, providing a programmatic connection between many heterogeneous sources of hydrologic data and a variety of online and desktop client applications. The service message schema follows the CUAHSI Water Markup Language (WaterML) 1.x specification (see OGC Discussion Paper 07-041r1). Data sources that can be queried via WaterML-compliant water data services include national and international repositories such as USGS NWIS (National Water Information System), USEPA STORET (Storage & Retrieval), USDA SNOTEL (Snowpack Telemetry), NCDC ISH and ISD (Integrated Surface Hourly and Daily Data), MODIS (Moderate Resolution Imaging Spectroradiometer), and DAYMET (Daily Surface Weather Data and Climatological Summaries). Besides government data sources, CUAHSI HIS provides access to a growing number of academic hydrologic observation networks. These networks are registered by researchers associated with 11 hydrologic observatory testbeds around the US, and other research, government and commercial groups wishing to join the emerging CUAHSI Water Data Federation. The Hydrologic Information Server (HIS Server) software stack, deployed at NSF-supported hydrologic observatory sites and other universities around the country, supports a hydrologic data publication workflow which includes the following steps: (1) observational data are loaded from static files or streamed from sensors into a local instance of an Observations Data Model (ODM) database; (2) a generic web service template is configured for the new ODM instance to expose the data as a WaterML-compliant water data service; and (3) the new water data service is registered at the HISCentral registry (hiscentral.cuahsi.org), its metadata are harvested and semantically tagged using concepts from a hydrologic ontology. As a result, the new service is indexed in the CUAHSI central metadata catalog, and becomes
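On the client side, consuming the WaterML services published in step (2) amounts to parsing an XML time-series response. The snippet below parses a simplified WaterML 1.x-style payload; the document is a hand-written stand-in for a real service response (real responses carry namespaces, site metadata, and method/quality information omitted here).

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a WaterML 1.x GetValues response (illustrative only).
response = """
<timeSeriesResponse>
  <timeSeries>
    <variable><variableName>Discharge</variableName></variable>
    <values>
      <value dateTime="2009-01-01T00:00:00">12.4</value>
      <value dateTime="2009-01-01T01:00:00">12.9</value>
    </values>
  </timeSeries>
</timeSeriesResponse>
"""

# A client extracts (timestamp, value) pairs with any stock XML parser.
root = ET.fromstring(response)
series = [(v.get("dateTime"), float(v.text)) for v in root.iter("value")]
```

This uniformity is the point of the WaterML layer: the same parsing code works whether the service fronts NWIS, SNOTEL, or a university ODM database.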

  14. Resolving Controlled Vocabulary in DITA Markup: A Case Example in Agroforestry

    Science.gov (United States)

    Zschocke, Thomas

    2012-01-01

    Purpose: This paper aims to address the issue of matching controlled vocabulary on agroforestry from knowledge organization systems (KOS) and incorporating these terms in DITA markup. The paper has been selected for an extended version from MTSR'11. Design/methodology/approach: After a general description of the steps taken to harmonize controlled…

  15. A methodology for evaluation of a markup-based specification of clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a three-phase, nine-step methodology for specification of clinical guidelines (GLs) by expert physicians, clinical editors, and knowledge engineers, and for quantitative evaluation of the specification's quality. We applied this methodology to a particular framework for incremental GL structuring (mark-up) and to GLs in three clinical domains with encouraging results.

  16. Developing a Markup Language for Encoding Graphic Content in Plan Documents

    Science.gov (United States)

    Li, Jinghuan

    2009-01-01

    While deliberating and making decisions, participants in urban development processes need easy access to the pertinent content scattered among different plans. A Planning Markup Language (PML) has been proposed to represent the underlying structure of plans in an XML-compliant way. However, PML currently covers only textual information and lacks…

  17. Chemical Markup, XML, and the World Wide Web. 7. CMLSpect, an XML vocabulary for spectral data.

    Science.gov (United States)

    Kuhn, Stefan; Helmus, Tobias; Lancashire, Robert J; Murray-Rust, Peter; Rzepa, Henry S; Steinbeck, Christoph; Willighagen, Egon L

    2007-01-01

    CMLSpect is an extension of Chemical Markup Language (CML) for managing spectral and other analytical data. It is designed to be flexible enough to contain a wide variety of spectral data. The paper describes the CMLElements used and gives practical examples for common types of spectra. In addition it demonstrates how different views of the data can be expressed and what problems still exist.

  19. ICAAP eXtended Markup Language: Exploiting XML and Adding Value to the Journals Production Process.

    Science.gov (United States)

    Sosteric, Mike

    1999-01-01

    Discusses the technological advances attained by the International Consortium for Alternative Academic Publication (ICAAP) aimed at reforming the scholarly communication system and providing an alternative to high-priced commercial publishing. Describes the eXtended markup language, based on SGML and HTML, that provides indexing and…

  20. The Adoption of Mark-Up Tools in an Interactive e-Textbook Reader

    Science.gov (United States)

    Van Horne, Sam; Russell, Jae-eun; Schuh, Kathy L.

    2016-01-01

    Researchers have more often examined whether students prefer using an e-textbook over a paper textbook or whether e-textbooks provide a better resource for learning than paper textbooks, but students' adoption of mark-up tools has remained relatively unexamined. Drawing on the concept of Innovation Diffusion Theory, we used educational data mining…

  1. A primer on the Petri Net Markup Language and ISO/IEC 15909-2

    DEFF Research Database (Denmark)

    Hillah, L. M.; Kindler, Ekkart; Kordon, F.

    2009-01-01

    Part 2 of the ISO/IEC 15909 International Standard defines a transfer format for high-level nets. This transfer format is (or is based on) the Petri Net Markup Language (PNML), which was originally introduced as an interchange format for different kinds of Petri nets. In ISO/IEC 15909-2, however...

  2. RELISH LMF: unlocking the full power of the Lexical Markup Framework

    NARCIS (Netherlands)

    Windhouwer, Menzo

    2014-01-01

    In 2008 ISO TC 37 published the Lexical Markup Framework (LMF) standard (ISO 24613:2008; www.lexicalmarkupframework.org). This standard was based on the input of many experts in the field; a core model and a whole series of extensions were specified in the form of UML class diagrams. For a specific

  4. Trade Reforms, Mark-Ups and Bargaining Power of Workers: The ...

    African Journals Online (AJOL)

    model of mark-up with labor bargaining power was estimated using random effects and LDPDM. ... article and also Dar es Salaam University and the African Economic Research Consortium. ... up with a positive association between workers' rent-sharing parameter and ... individual contract or through collective agreements.

  5. A Gimbal-Stabilized Compact Hyperspectral Imaging System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Gimbal-stabilized Compact Hyperspectral Imaging System (GCHIS) fully integrates multi-sensor spectral imaging, stereovision, GPS and inertial measurement,...

  7. Charged particle velocity map image reconstruction with one-dimensional projections of spherical functions

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Thomas; Liu Yuzhu; Knopp, Gregor; Hemberger, Patrick; Bodi, Andras; Radi, Peter; Sych, Yaroslav [Molecular Dynamics Group, Paul Scherrer Institut, 5232 Villigen (Switzerland)

    2013-03-15

    Velocity map imaging (VMI) is used in mass spectrometry and in angle-resolved photoelectron spectroscopy to determine the lateral momentum distributions of charged particles accelerated towards a detector. VM-images are composed of projected Newton spheres with a common centre. The 2D images are usually evaluated by a decomposition into base vectors, each representing the 2D projection of a set of particles starting from a centre with a specific velocity distribution. We propose instead to evaluate 1D projections of VM-images in terms of 1D projections of spherical functions. The proposed evaluation algorithm shows that all distribution information can be retrieved from an adequately chosen set of 1D projections, considerably alleviating the numerical effort for the interpretation of VM-images. The obtained results directly produce the coefficients of the involved spherical functions, making the reconstruction of sliced Newton spheres obsolete.

  8. Collaborative Tracking of Image Features Based on Projective Invariance

    Science.gov (United States)

    Jiang, Jinwei

    -mode sensors for improving the flexibility and robustness of the system. From the experimental results during three field tests of the LASOIS system, we observed that most of the errors in the image processing algorithm are caused by incorrect feature tracking. This dissertation addresses the feature tracking problem in image sequences acquired from cameras. Despite many alternatives to the feature tracking problem, the iterative least-squares solution of the optical flow equation has been the most popular approach in the field. This dissertation attempts to leverage these former efforts to enhance feature tracking methods by introducing a view-geometric constraint to the tracking problem, which provides collaboration among features. In contrast to alternative geometry-based methods, the proposed approach provides an online solution to optical flow estimation in a collaborative fashion by exploiting Horn and Schunck flow estimation regularized by view-geometric constraints. The proposed collaborative tracker estimates the motion of a feature based on the geometry of the scene and on how the other features are moving. As an alternative to this approach, a new closed-form solution to tracking that combines the image appearance with the view geometry is also introduced. We particularly use invariants in the projective coordinates and conjecture that the traditional appearance-based solution can be significantly improved using view geometry. The geometric constraint is introduced by defining a new optical flow equation which exploits the scene geometry from a set of tracked features. At the end of each tracking loop the quality of the tracked features is judged using both appearance similarity and geometric consistency. Our experiments demonstrate robust tracking performance even when the features are occluded or undergo appearance changes due to projective deformation of the template. The proposed collaborative tracking method is also tested in the visual navigation

  9. Effect of virtual image projection distance on the accommodative response of the eye.

    Science.gov (United States)

    Chisum, G T; Morway, P E

    1977-09-01

    Virtual image displays utilize either aircraft-mounted or helmet-mounted beam splitters, or combining screens. The effect of the projection distance of the virtual image on the accommodative response was measured by photographing the first and fourth Purkinje images of a source. The results indicate a possible effect on the accommodation response. Further exploration of the problem is indicated.

  10. Standardization of seismic tomographic models and earthquake focal mechanisms data sets based on web technologies, visualization with keyhole markup language

    Science.gov (United States)

    Postpischl, Luca; Danecek, Peter; Morelli, Andrea; Pondrelli, Silvia

    2011-01-01

    We present two projects in seismology that have been ported to web technologies and provide results as Keyhole Markup Language (KML) visualization layers. These use the Google Earth geo-browser as a flexible platform that can substitute for specialized graphical tools in performing qualitative visual data analyses and comparisons. The Network of Research Infrastructures for European Seismology (NERIES) Tomographic Earth Model Repository contains data sets from over 20 models from the literature. A hierarchical structure of folders representing the sets of depths for each model is implemented in KML, and this immediately results in an intuitive interface for users to navigate freely and to compare tomographic plots. The KML layer for the European-Mediterranean Regional Centroid-Moment Tensor Catalog displays the focal mechanism solutions of moderate-magnitude earthquakes from 1997 to the present. Our aim in both projects was also to propose standard representations of scientific data sets. Here, the general semantic approach of an XML framework has an important impact that must be further explored, although we find that the KML syntax puts more emphasis on aspects of detailed visualization. We have thus used, and propose the use of, JavaScript Object Notation (JSON), another semantic notation stemming from the web-development community, which provides a compact, general-purpose data-exchange format.
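    A minimal sketch of the kind of KML layer described above, written with only the Python standard library. The event record is invented for illustration, and only a bare Placemark with a Point is emitted; the actual NERIES and RCMT layers carry styled folders, focal-mechanism icons, and descriptions.

    ```python
    # Sketch: serializing event metadata as a minimal KML layer. Element
    # names follow the KML 2.2 schema; the event record itself is made up.
    import xml.etree.ElementTree as ET

    KML_NS = "http://www.opengis.net/kml/2.2"

    def events_to_kml(events):
        ET.register_namespace("", KML_NS)           # default namespace, no prefix
        kml = ET.Element("{%s}kml" % KML_NS)
        doc = ET.SubElement(kml, "{%s}Document" % KML_NS)
        for ev in events:
            pm = ET.SubElement(doc, "{%s}Placemark" % KML_NS)
            ET.SubElement(pm, "{%s}name" % KML_NS).text = ev["name"]
            pt = ET.SubElement(pm, "{%s}Point" % KML_NS)
            # KML coordinates are lon,lat[,alt]
            ET.SubElement(pt, "{%s}coordinates" % KML_NS).text = (
                "%.3f,%.3f,0" % (ev["lon"], ev["lat"]))
        return ET.tostring(kml, encoding="unicode")

    kml_text = events_to_kml([
        {"name": "Example event, Mw 5.6", "lat": 44.2, "lon": 10.6},
    ])
    print(kml_text)
    ```

    Loading the resulting file in Google Earth places one marker per event, which is the basic mechanism both KML layers build on.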

  11. Wide-Field, Deep UV Raman Hyperspectral Imager Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ChemImage Sensor Systems (CISS), teaming with the University of South Carolina, proposes a revolutionary wide-field Raman hyperspectral imaging system capable of...

  12. Fluorescence guided lymph node biopsy in large animals using direct image projection device

    Science.gov (United States)

    Ringhausen, Elizabeth; Wang, Tylon; Pitts, Jonathan; Akers, Walter J.

    2016-03-01

    The use of fluorescence imaging for aiding oncologic surgery is a fast-growing field in biomedical imaging, revolutionizing open and minimally invasive surgery practices. We have designed, constructed, and tested a system for fluorescence image acquisition and direct display on the surgical field for fluorescence-guided surgery. The system uses a near-infrared-sensitive CMOS camera for image acquisition, a near-infrared LED light source for excitation, and a DLP digital projector for projection of fluorescence image data onto the operating field in real time. Instrument control was implemented in Matlab for image capture, processing of acquired data, and alignment of image parameters with the projected pattern. Accuracy of alignment was evaluated statistically to demonstrate sensitivity to small objects and alignment throughout the imaging field. After verification of accurate alignment, feasibility for clinical application was demonstrated in large animal models of sentinel lymph node biopsy. Indocyanine green was injected subcutaneously in Yorkshire pigs at various locations to model sentinel lymph node biopsy in gynecologic cancers, head and neck cancer, and melanoma. Fluorescence was detected by the camera system during operations and projected onto the imaging field, accurately identifying tissues containing the fluorescent tracer at up to 15 frames per second. Fluorescence information was projected as binary green regions after thresholding and denoising the raw intensity data. This initial clinical-scale prototype provided encouraging results for the feasibility of optical projection of acquired luminescence during open oncologic surgeries.

  13. CT Image Reconstruction from Sparse Projections Using Adaptive TpV Regularization

    Directory of Open Access Journals (Sweden)

    Hongliang Qi

    2015-01-01

    Radiation dose reduction without losing CT image quality has been an increasing concern. Reducing the number of X-ray projections used to reconstruct CT images, which is also called sparse-projection reconstruction, can potentially avoid excessive dose delivered to patients in CT examination. To overcome the disadvantages of the total variation (TV) minimization method, in this work we introduce a novel adaptive TpV regularization into sparse-projection image reconstruction and use the FISTA technique to accelerate iterative convergence. The numerical experiments demonstrate that the proposed method suppresses noise and artifacts more efficiently, and preserves structure information better, than other existing reconstruction methods.
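    The paper's adaptive TpV regularizer is not reproduced here, but the FISTA acceleration it relies on can be sketched on a generic l1-regularized least-squares problem (a stand-in objective, not the paper's CT model): a gradient step, a proximal (soft-thresholding) step, and a momentum step.

    ```python
    # Sketch: FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1.
    # This illustrates the acceleration scheme only; the adaptive TpV
    # regularizer of the paper would replace the soft-threshold prox.
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def fista(A, b, lam, n_iter=200):
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        y, t = x.copy(), 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ y - b)            # gradient of the smooth term
            x_new = soft_threshold(y - grad / L, lam / L)   # proximal step
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 10))
    x_true = np.zeros(10); x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true                               # noiseless synthetic data
    x_hat = fista(A, b, lam=0.01)
    print(np.round(x_hat, 2))
    ```

    The momentum step is what distinguishes FISTA from plain ISTA, improving the worst-case convergence rate from O(1/k) to O(1/k²).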

  14. Semantically supporting data discovery, markup and aggregation in the European Marine Observation and Data Network (EMODnet)

    Science.gov (United States)

    Lowry, Roy; Leadbetter, Adam

    2014-05-01

    The semantic content of the NERC Vocabulary Server (NVS) has been developed over thirty years. It has been used to mark up metadata and data in a wide range of international projects, including the European Commission (EC) Framework Programme 7 projects SeaDataNet and The Open Service Network for Marine Environmental Data (NETMAR). Within the United States, the National Science Foundation projects Rolling Deck to Repository and Biological & Chemical Data Management Office (BCO-DMO) use concepts from NVS for markup. Further, typed relationships link NVS concepts to terms served by the Marine Metadata Interoperability Ontology Registry and Repository. The largest single block of the concepts publicly served from NVS (35% of ~82,000) forms the British Oceanographic Data Centre (BODC) Parameter Usage Vocabulary (PUV). The PUV is instantiated on the NVS as a SKOS concept collection. These terms are used to describe the individual channels in data and metadata served by, for example, BODC, SeaDataNet and BCO-DMO. The PUV terms are designed to be very precise and may contain a high level of detail. Some users have reported that the PUV is difficult to navigate due to its size and complexity (a problem CSIRO have begun to address by deploying a SISSVoc interface to the NVS), and it has been difficult to aggregate data, as multiple PUV terms can - with full validity - be used to describe the same data channels. Better approaches to data aggregation are required as a use case for the PUV from the EC European Marine Observation and Data Network (EMODnet) Chemistry project. One solution, proposed and demonstrated during the course of the NETMAR project, is to build new SKOS concept collections which formalise the desired aggregations for given applications and use typed relationships to state which PUV concepts contribute to a specific aggregation. Development of these new collections requires input from a group of experts in the application domain who can decide which PUV

  15. The duality of XML Markup and Programming notation

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2003-01-01

    In web projects it is often necessary to mix XML notation and program notation in a single document or program. In mono-lingual situations, the XML notation is either subsumed in the program or the program notation is subsumed in the XML document. As an introduction we analyze XML notation and pr...

  17. Research of inverse synthetic aperture imaging lidar based on filtered back-projection tomography technique

    Science.gov (United States)

    Liu, Zhi-chao; Yang, Jin-hua

    2014-07-01

    In order to obtain a clear two-dimensional image without using heterodyne interferometry in inverse synthetic aperture lidar (ISAL), an imaging algorithm based on the filtered back-projection tomography technique was designed, and the target "A" was reconstructed by a simulation of the system in the turntable model. The working process of ISAL was analyzed, and the function of the reconstructed image was given. The physical meaning of the various parameters in the processing of the echo data, and how these parameters affect the reconstructed image, were analyzed in detail. The image in the test area was reconstructed from the one-dimensional distance information with the filtered back-projection tomography technique. When the measured target rotates, the sum of the echo light intensity at a given distance is contributed by different positions on the measured target; when the total amount of data collected is large enough, the resulting system of equations can be solved. An ideal filtered back-projection image was obtained through MATLAB simulation, and the effects of the angle interval and of the ratio of echo light intensity to loss light intensity on the reconstructed image were analyzed. Simulation results show that the smaller the sampling angle interval, and the greater the ratio of echo light to loss light, the higher the resolution of the reconstructed image of the measured target. In conclusion, after some data processing, the reconstructed image basically meets effective identification requirements.

  18. A portable image overlay projection device for computer-aided open liver surgery.

    Science.gov (United States)

    Gavaghan, Kate A; Peterhans, Matthias; Oliveira-Santos, Thiago; Weber, Stefan

    2011-06-01

    Image overlay projection is a form of augmented reality that allows surgeons to view underlying anatomical structures directly on the patient surface. It improves the intuitiveness of computer-aided surgery by removing the need for sight diversion between the patient and a display screen, and has been reported to assist in 3-D understanding of anatomical structures and the identification of target and critical structures. Challenges in the development of image overlay technologies for surgery remain in the projection setup. Calibration, patient registration, view direction, and projection obstruction remain unsolved limitations of image overlay techniques. In this paper, we propose a novel, portable, handheld-navigated image overlay device based on miniature laser projection technology that allows images of 3-D patient-specific models to be projected directly onto the organ surface intraoperatively without the need for intrusive hardware around the surgical site. The device can be integrated into a navigation system, thereby exploiting existing patient registration and model generation solutions. The position of the device is tracked by the navigation system's position sensor and used to project geometrically correct images from any position within the workspace of the navigation system. The projector was calibrated using modified camera calibration techniques, and images for projection are rendered using a virtual camera defined by the projector's extrinsic parameters. Verification of the device's projection accuracy found a mean projection error of 1.3 mm. Visibility testing of the projection performed on pig liver tissue found the device suitable for the display of anatomical structures on the organ surface. The feasibility of use within the surgical workflow was assessed during open liver surgery. We show that the device could be quickly and unobtrusively deployed within the sterile environment.

  19. ArdenML: The Arden Syntax Markup Language (or Arden Syntax: It's Not Just Text Any More!)

    Science.gov (United States)

    Sailors, R. Matthew

    2001-01-01

    It is no longer necessary to think of Arden Syntax as simply a text-based knowledge base format. The development of ArdenML (the Arden Syntax Markup Language), an XML-based markup language that allows structured access to most of the maintenance and library categories without the need to write or buy a compiler, may lead to the development of simple commercial and freeware tools for processing Arden Syntax Medical Logic Modules (MLMs).

  20. Anisotropic conductivity imaging with MREIT using equipotential projection algorithm.

    Science.gov (United States)

    Değirmenci, Evren; Eyüboğlu, B Murat

    2007-12-21

    Magnetic resonance electrical impedance tomography (MREIT) combines magnetic flux or current density measurements obtained by magnetic resonance imaging (MRI) with surface potential measurements to reconstruct images of true conductivity with high spatial resolution. Most biological tissues have anisotropic conductivity; therefore, anisotropy should be taken into account in conductivity image reconstruction. Almost all of the MREIT reconstruction algorithms proposed to date assume an isotropic conductivity distribution. In this study, a novel MREIT image reconstruction algorithm is proposed to image anisotropic conductivity. Relative anisotropic conductivity values are reconstructed iteratively, using only current density measurements without any potential measurement. In order to obtain true conductivity values, a single potential or conductivity measurement is sufficient to determine a scaling factor. The proposed technique is evaluated on simulated data for isotropic and anisotropic conductivity distributions, with and without measurement noise. Simulation results show that images of both anisotropic and isotropic conductivity distributions can be reconstructed successfully.

  1. A study on the GPU based parallel computation of a projection image

    Science.gov (United States)

    Lee, Hyunjeong; Han, Miseon; Kim, Jeongtae

    2017-05-01

    Fast computation of projection images is crucial in many applications such as medical image reconstruction and light field image processing. To achieve it, parallelization of the computation and its efficient implementation on a parallel processor such as a GPGPU (General-Purpose computing on Graphics Processing Units) device are essential. In this research, we investigate methods for the parallel computation of projection images and efficient implementations of these methods using CUDA (Compute Unified Device Architecture). We also study how to use GPU memory efficiently for this parallel processing.
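    The reason projection parallelizes well is that each detector bin (or output pixel) can be computed independently; a CUDA kernel would assign one bin per thread. The numpy sketch below shows the same data-parallel structure on the CPU with a deliberately crude nearest-neighbour projector; it is an illustration of the structure, not the paper's implementation.

    ```python
    # Sketch: data-parallel forward projection. Each output column (ray)
    # is independent, which is what a GPU kernel would map to one thread;
    # numpy's vectorized operations play that role here. Nearest-neighbour
    # sampling only; a real projector would integrate along rotated rays.
    import numpy as np

    def project_vertical(image):
        """Parallel-beam projection at 0 degrees: sum each column."""
        return image.sum(axis=0)

    def project(image, angle_deg):
        """Projection at an arbitrary angle: rotate the sampling grid
        about the image centre (nearest neighbour), then sum columns."""
        h, w = image.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        th = np.deg2rad(angle_deg)
        ys, xs = np.mgrid[0:h, 0:w]
        # rotate sample coordinates about the image centre
        xr = np.cos(th) * (xs - cx) - np.sin(th) * (ys - cy) + cx
        yr = np.sin(th) * (xs - cx) + np.cos(th) * (ys - cy) + cy
        xi = np.clip(np.rint(xr).astype(int), 0, w - 1)
        yi = np.clip(np.rint(yr).astype(int), 0, h - 1)
        inside = (np.rint(xr) >= 0) & (np.rint(xr) < w) & \
                 (np.rint(yr) >= 0) & (np.rint(yr) < h)
        samples = np.where(inside, image[yi, xi], 0.0)
        return samples.sum(axis=0)

    img = np.zeros((5, 5)); img[2, 2] = 1.0   # single bright pixel at centre
    print(project_vertical(img))
    print(project(img, 45.0))
    ```

    On a GPU the inner sampling and per-column reduction would become the kernel body, with memory-coalescing considerations (which the paper studies) deciding how the image is laid out in device memory.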

  2. Multiparameter image visualization by projection pursuit (Proceedings Only)

    Science.gov (United States)

    Harikumar, G.; Bresler, Yoram

    1992-09-01

    This paper addresses the display of multi-parameter medical image data, such as arises in MRI or multimodality image fusion. MRI or multimodality studies produce several different images of a given cross-section of the body, each providing different levels of contrast sensitivity between different tissues. The question then arises as to how to present this wealth of data to the diagnostician. While each of the different images may be misleading (as illustrated later by an example), in combination they may contain the correct information. Unfortunately, a human observer is not likely to be able to extract this information when presented with a parallel display of the distinct images. Given the sequential nature of detailed visual examination of a picture, a human observer is quite ineffective at integrating complex visual data from parallel sources. The development of a display technology that overcomes this difficulty by synthesizing a display method matched to the capabilities of the human observer is the subject of this paper. The ultimate goal of diagnostic imaging is the detection, localization, and quantification of abnormality. An intermediate goal, which is the one we address, is to present the diagnostician with an image that will maximize his chances of correctly classifying different regions in the image as belonging to different tissue types. Our premise is that the diagnostician is able to bring to bear all his knowledge and experience, which are difficult to capture in a computer program, on the final analysis process. This is often key to the detection of subtle and otherwise elusive features in the image. We therefore rule out the generation of an automatically segmented image, which not only fails to include this knowledge, but would also deprive the diagnostician of the opportunity to exercise it by presenting him with a hard-labeled segmentation. Instead we concentrate on the fusion of the multiple images of the same cross-section into a single

  3. EUV Doppler Imaging for CubeSat Platforms Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Mature the design and fabricate the Flare Initiation Doppler Imager (FIDI) instrument to demonstrate low-spacecraft-resource EUV technology (most notably,...

  4. Chemical markup, XML, and the World Wide Web. 5. Applications of chemical metadata in RSS aggregators.

    Science.gov (United States)

    Murray-Rust, Peter; Rzepa, Henry S; Williamson, Mark J; Willighagen, Egon L

    2004-01-01

    Examples of the use of the RSS 1.0 (RDF Site Summary) specification together with CML (Chemical Markup Language) to create a metadata based alerting service termed CMLRSS for molecular content are presented. CMLRSS can be viewed either using generic software or with modular opensource chemical viewers and editors enhanced with CMLRSS modules. We discuss the more automated use of CMLRSS as a component of a World Wide Molecular Matrix of semantically rich chemical information.

  5. Development of Markup Language for Medical Record Charting: A Charting Language.

    Science.gov (United States)

    Jung, Won-Mo; Chae, Younbyoung; Jang, Bo-Hyoung

    2015-01-01

    Nowadays many attempts to collect electronic medical records (EMRs) exist. However, structuring the data format for an EMR is an especially labour-intensive task for practitioners. Here we propose a new mark-up language for medical record charting (called Charting Language), which borrows useful properties from programming languages. Thus, with Charting Language, text data describing dynamic situations can easily be used to extract information.

  6. Standard generalized markup language: A guide for transmitting encoded bibliographic records

    Energy Technology Data Exchange (ETDEWEB)

    1994-09-01

    This document provides the guidance necessary to transmit to DOE's Office of Scientific and Technical Information (OSTI) an encoded bibliographic record that conforms to International Standard ISO 8879, Information Processing -- Text and office systems -- Standard Generalized Markup Language (SGML). Included in this document are element and attribute tag definitions, sample bibliographic records, the bibliographic document type definition, and instructions on how to transmit a bibliographic record electronically to OSTI.

  7. Hypertext markup language as an authoring tool for CD-ROM production.

    Science.gov (United States)

    Lynch, P J; Horton, S

    1998-01-01

    The Hypertext Markup Language (HTML) used to create Web pages is an attractive alternative to the proprietary authoring software that is now widely used to produce multimedia content for CD-ROMs. This paper describes the advantages and limitations of HTML as a non-proprietary and cross-platform CD-ROM authoring system, and the more general advantages of HTML as data standard for biocommunications content.

  8. Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.

    Science.gov (United States)

    Sautter, Guido; Böhm, Klemens; Agosti, Donat

    2007-01-01

    Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where this process has just started. Digitized biosystematics literature requires a very precise and fine grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on sentence level and below is cumbersome and time consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built in order to support the user in this process as far as possible: Its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these numbers are even higher.
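    A toy illustration of the kind of fine-grained markup being semi-automated here: a single rule that wraps Latin binomials ("Genus species") in XML elements. Both the <taxonName> element name and the naive binomial pattern are assumptions for illustration, not GoldenGATE's actual schema or its NLP plug-ins.

    ```python
    # Sketch: rule-based markup of OCR text, reduced to one toy rule.
    # The <taxonName> element and the binomial regex are illustrative
    # assumptions; real biosystematics markup is far more careful.
    import re

    # Capitalized genus followed by a lowercase epithet of 3+ letters
    BINOMIAL = re.compile(r"\b([A-Z][a-z]+ [a-z]{3,})\b")

    def markup_taxa(text):
        return BINOMIAL.sub(r"<taxonName>\1</taxonName>", text)

    ocr_line = "Specimens of Formica rufa were collected near the river."
    print(markup_taxa(ocr_line))
    ```

    An editor like GoldenGATE wraps such automatic passes in an interactive loop, so the human corrects the rule's false positives and misses instead of tagging every occurrence by hand.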

  9. Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease

    Science.gov (United States)

    Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.

    1998-01-01

    The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this mark-up process is time consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.

  10. Eccentricity on an Image Caused by Projection of a Circle and a Sphere

    Science.gov (United States)

    Matsuoka, R.; Maruyama, S.

    2016-06-01

    Circular targets on a plane are often utilized in photogrammetry, particularly in close-range photogrammetry, while spherical targets are sometimes utilized in industrial applications. Both a circle and a sphere are projected onto an image as an ellipse. There is an eccentricity on the image between the centre of the projected ellipse and the projected location of the centre of the circle or sphere. Since only the centre of the projected ellipse can be measured, correction of the eccentricity is necessary for highly accurate measurement. This paper shows a process to derive general formulae for the eccentricity of a circle and of a sphere from the size and location of the circle or sphere and from the focal length, position, and attitude of the camera. Furthermore, the paper shows methods to estimate the eccentricity from the equation of the projected ellipse of a circle or a sphere on the image.
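The eccentricity described above can be illustrated numerically: project a tilted 3D circle through an ideal pinhole camera, fit the resulting conic, and compare the conic's centre with the projection of the circle's centre. This is an illustrative sketch only (the camera geometry, tilt, and distances are arbitrary choices, not the paper's formulae):

```python
import numpy as np

def project(points, f=1.0):
    """Pinhole projection of 3D points onto the image plane z = f."""
    return f * points[:, :2] / points[:, 2:3]

# A circle of radius r, centred at c on the optical axis, tilted about x.
r, c, tilt = 1.0, np.array([0.0, 0.0, 5.0]), np.deg2rad(40)
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ring = np.stack([r * np.cos(t), r * np.sin(t), np.zeros_like(t)], axis=1)
R = np.array([[1, 0, 0],
              [0, np.cos(tilt), -np.sin(tilt)],
              [0, np.sin(tilt),  np.cos(tilt)]])
circle3d = ring @ R.T + c

img = project(circle3d)              # samples of the projected ellipse
centre_img = project(c[None, :])[0]  # projection of the circle centre

# Fit the conic a*x^2 + b*x*y + cc*y^2 + d*x + e*y + f0 = 0 (least squares).
x, y = img[:, 0], img[:, 1]
M = np.stack([x**2, x * y, y**2, x, y, np.ones_like(x)], axis=1)
a, b, cc, d, e, f0 = np.linalg.svd(M)[2][-1]
# The centre of the conic is where its gradient vanishes.
centre_ellipse = np.linalg.solve([[2 * a, b], [b, 2 * cc]], [-d, -e])

eccentricity = float(np.linalg.norm(centre_ellipse - centre_img))
```

For this configuration the two centres differ by a few hundredths of the focal length, which is the effect the correction formulae target.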

  11. Research interface for experimental ultrasound imaging - the CFU grabber project

    DEFF Research Database (Denmark)

    Pedersen, Mads Møller; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    PC acquires pre-processed data from the scanner in real time. Further post-processing is required to create the final images. In-house software (CFU Grabber tool) was developed to review and store the pre-processed data. Using MatLab image processing with a new post-processing method the final...

  12. Correction of projective distortion in long-image-sequence mosaics without prior information

    Science.gov (United States)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is
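The scale-reset idea in this abstract can be sketched in a few lines of linear algebra: take the 2x2 linear part of the affine model, read off its isotropic scale as the geometric mean of its singular values, and divide it out so the transformed frame keeps its size. This is a hedged sketch of that one step (not the full mosaicking pipeline), using an arbitrary example transform:

```python
import numpy as np

def remove_affine_scale(A):
    """Given a 2x3 affine transform, divide out its isotropic scale
    (geometric mean of the singular values of the 2x2 linear part)
    so the transformed frame keeps its original size."""
    L = A[:, :2]
    s = np.linalg.svd(L, compute_uv=False)
    scale = np.sqrt(s[0] * s[1])          # isotropic scale factor, typically near 1
    A_fixed = A.copy()
    A_fixed[:, :2] = L / scale
    return A_fixed, scale

# Example: a mild rotation combined with a 0.95 shrink, as can
# accumulate over a long mosaic.
theta = np.deg2rad(3)
Rm = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
A = np.hstack([0.95 * Rm, np.array([[2.0], [1.0]])])

A1, scale = remove_affine_scale(A)
s_fixed = np.linalg.svd(A1[:, :2], compute_uv=False)
```

After the reset, the linear part has unit isotropic scale, so repeated composition no longer shrinks the pasted frames.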

  13. Using Heliospheric Imager Data to Improve Space Weather Forecasting Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project is to test a new approach for selecting the optimal solar wind model from an ensemble of model runs. Because of the paucity of...

  14. Enhanced Reliability MEMS Deformable Mirrors for Space Imaging Applications Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of this project is to develop and demonstrate a reliable, fault-tolerant wavefront control system that will fill a critical technology gap in NASA's vision...

  15. A novel stereoscopic projection display system for CT images of fractures.

    Science.gov (United States)

    Liu, Xiujuan; Jiang, Hong; Lang, Yuedong; Wang, Hongbo; Sun, Na

    2013-06-01

    The present study proposed a novel projection display system based on a virtual reality enhancement environment. The proposed system displays stereoscopic images of fractures and enhances the computed tomography (CT) images. The diagnosis and treatment of fractures primarily depend on the post-processing of CT images. However, two-dimensional (2D) images do not show overlapping structures in fractures since they are displayed without visual depth and these structures are too small to be simultaneously observed by a group of clinicians. Stereoscopic displays may solve this problem and allow clinicians to obtain more information from CT images. Hardware with which to generate stereoscopic images was designed. This system utilized the conventional equipment found in meeting rooms. The off-axis algorithm was adopted to convert the CT images into stereo image pairs, which were used as the input for a stereo generator. The final stereoscopic images were displayed using a projection system. Several CT fracture images were imported into the system for comparison with traditional 2D CT images. The results showed that the proposed system aids clinicians in group discussions by producing large stereoscopic images. The results demonstrated that the enhanced stereoscopic CT images generated by the system appear clearer and smoother, such that the sizes, displacement and shapes of bone fragments are easier to assess. Certain fractures that were previously not visible on 2D CT images due to vision overlap became vividly evident in the stereo images. The proposed projection display system efficiently, economically and accurately displayed three-dimensional (3D) CT images. The system may help clinicians improve the diagnosis and treatment of fractures.

  16. Corrosion science general-purpose data model and interface (Ⅱ): OOD design and corrosion data markup language (CDML)

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    With object-oriented design/analysis, a general-purpose corrosion data model (GPCDM) and a corrosion data markup language (CDML) are created to meet the increasing demand for multi-source corrosion data integration and sharing. A "corrosion data island" is proposed as a comprehensive, self-contained model of corrosion data. The island has a tree-like structure containing six first-level child nodes that characterize every important aspect of the corrosion data. Each first-level node recursively holds further child nodes as data containers. The data structure inside the island is designed to flatten the learning curve and lower the acceptance barrier of GPCDM and CDML. A detailed explanation of the role and meaning of the first-level nodes is presented, with carefully chosen examples, to review the design goals and requirements proposed in the previous paper. Then, the CDML tag structure and the CDML application programming interface (API) are introduced in logical order. At the end, the roles of GPCDM, CDML and its API in multi-source corrosion data integration and information sharing are highlighted and projected.
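As an illustration of the "corrosion data island" structure, the sketch below builds a tree with six first-level child nodes using Python's standard `xml.etree.ElementTree`. The abstract states there are six first-level nodes but does not name them here, so the node names below are purely hypothetical placeholders, not CDML's actual vocabulary:

```python
import xml.etree.ElementTree as ET

# Hypothetical first-level node names; the abstract says there are six
# but this excerpt does not name them, so these are illustrative only.
FIRST_LEVEL = ["Material", "Environment", "Exposure",
               "Measurement", "Evaluation", "Reference"]

island = ET.Element("CorrosionDataIsland", id="island-001")
for name in FIRST_LEVEL:
    node = ET.SubElement(island, name)
    # Each first-level node holds further child nodes recursively
    # as data containers.
    ET.SubElement(node, "Record", source="example")

xml_text = ET.tostring(island, encoding="unicode")
```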

  17. Corrosion science general-purpose data model and interface (Ⅱ): OOD design and corrosion data markup language (CDML)

    Institute of Scientific and Technical Information of China (English)

    TANG ZiLong

    2008-01-01

    With object-oriented design/analysis, a general-purpose corrosion data model (GPCDM) and a corrosion data markup language (CDML) are created to meet the increasing demand for multi-source corrosion data integration and sharing. A "corrosion data island" is proposed as a comprehensive, self-contained model of corrosion data. The island has a tree-like structure containing six first-level child nodes that characterize every important aspect of the corrosion data. Each first-level node recursively holds further child nodes as data containers. The data structure inside the island is designed to flatten the learning curve and lower the acceptance barrier of GPCDM and CDML. A detailed explanation of the role and meaning of the first-level nodes is presented, with carefully chosen examples, to review the design goals and requirements proposed in the previous paper. Then, the CDML tag structure and the CDML application programming interface (API) are introduced in logical order. At the end, the roles of GPCDM, CDML and its API in multi-source corrosion data integration and information sharing are highlighted and projected.

  18. Satellite image eavesdropping: a multidisciplinary science education project

    Energy Technology Data Exchange (ETDEWEB)

    Friedt, Jean-Michel [Association Projet Aurore, UFR-ST La Bouloie, 16, route de Gray, 25030 Besancon Cedex (France)

    2005-11-01

    Amateur reception of satellite images brings together a wide range of concepts and technologies, which makes it attractive as an educational tool. Here we introduce the reception of images emitted by the NOAA series of low-altitude Earth-orbiting satellites. We tackle various issues including the identification and prediction of the pass times of visible satellites, the building of the radio-frequency receiver and antenna after modelling their radiation pattern, and finally the demodulation of the resulting audio signal to display an image of the Earth as seen from space.

  19. Research interface for experimental ultrasound imaging - the CFU grabber project

    DEFF Research Database (Denmark)

    Pedersen, Mads Møller; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    system RASMUS. Furthermore precise scanner settings are stored for inter- and intra-observer studies. The resulting images are used for clinical evaluation. Method and materials The ultrasound scanner's research interface is connected to a graphical grabber card in a Windows PC (Grabber PC). The grabber...... PC acquires pre-processed data from the scanner in real time. Further post-processing is required to create the final images. In-house software (CFU Grabber tool) was developed to review and store the pre-processed data. Using MatLab image processing with a new post-processing method the final...

  20. Development of the Plate Tectonics and Seismology markup languages with XML

    Science.gov (United States)

    Babaie, H.; Babaei, A.

    2003-04-01

    The Extensible Markup Language (XML) and its specifications, such as the XSD Schema, allow geologists to design discipline-specific vocabularies such as the Seismology Markup Language (SeismML) or the Plate Tectonics Markup Language (TectML). These languages make it possible to store and interchange structured geological information over the Web. Development of a geological markup language requires mapping geological concepts, such as "Earthquake" or "Plate", into a UML object model using a modeling and design environment. We have selected four inter-related geological concepts: earthquake, fault, plate, and orogeny, and developed four XML Schema Definitions (XSD) that define the relationships, cardinalities, hierarchies, and semantics of these concepts. In such a geological concept model, the UML object "Earthquake" is related to one or more "Wave" objects, each arriving at a seismic station at a specific "DateTime" and relating to a specific "Epicenter" object that lies at a unique "Location". The "Earthquake" object occurs along a "Segment" of a "Fault" object, which is related to a specific "Plate" object. The "Fault" has its own associations with such things as "Bend", "Step", and "Segment", and could be of any kind (e.g., "Thrust", "Transform"). The "Plate" is related to many other objects such as "MOR", "Subduction", and "Forearc", and is associated with an "Orogeny" object that relates to "Deformation" and "Strain" and several other objects. These UML objects were mapped into XML Metadata Interchange (XMI) formats, which were then converted into four XSD Schemas. The schemas were used to create and validate the XML instance documents, and to create a relational database hosting the plate tectonics and seismological data in the Microsoft Access format. SeismML and TectML allow seismologists and structural geologists, among others, to submit and retrieve structured geological data on the Internet. A seismologist, for example, can submit peer-reviewed and
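A minimal SeismML-style instance document following the object relationships described above (an "Earthquake" with a "Wave", an "Epicenter" at a "Location", and a "Fault" "Segment") can be sketched with the standard library. The element nesting and attribute names are illustrative assumptions, not the published schema:

```python
import xml.etree.ElementTree as ET

# Illustrative instance using the UML object names from the abstract;
# the attribute layout is an assumption, not the actual XSD.
quake = ET.Element("Earthquake", id="eq-2003-001")
wave = ET.SubElement(quake, "Wave", type="P")
ET.SubElement(wave, "DateTime").text = "2003-04-01T12:00:00Z"
epicenter = ET.SubElement(quake, "Epicenter")
loc = ET.SubElement(epicenter, "Location")
loc.set("lat", "36.1")
loc.set("lon", "-120.4")
fault = ET.SubElement(quake, "Fault", kind="Transform")
ET.SubElement(fault, "Segment", name="seg-7")

doc = ET.tostring(quake, encoding="unicode")
```

In a real deployment such an instance would be validated against the corresponding XSD before submission.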

  1. Robust CCSDS Image Data to JPEG2K Transcoding Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Images from space satellites are often compressed in a lossy manner to ease transmission requirements such as power, error-rate, and data stream size. These...

  2. Robust CCSDS Image Data to JPEG2K Transcoding Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Images from space satellites, whether deep space or planetary, are often compressed in a lossy manner to ease transmission requirements such as power, error-rate,...

  3. Apodized Occulting and Pupil Masks for Imaging Coronagraphs Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The technical challenge of imaging planets in other star systems is resolving these dim objects in the close vicinity of a bright star. This challenge requires the...

  4. A Compact Extreme Ultraviolet Imager (C-EUVI) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to evaluate the Intevac Photonics NightVista® M711 Low Light Level Camera as the baseline detector of a new Compact EUV imager (C–EUVI). ...

  5. Advanced Calibration Source for Planetary and Earth Observing Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Radiometric calibration is critical to many NASA activities. At NASA SSC, imaging cameras have been used in-situ to monitor propulsion test stand...

  6. Electro-Optic Imaging Fourier Transform Spectral Polarimeter Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Boulder Nonlinear Systems, Inc. (BNS) proposes to develop an Electro-Optic Imaging Fourier Transform Spectral Polarimeter (E-O IFTSP). The polarimetric system is...

  7. Rad Hard Imaging Array with Picosecond Timing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — For a wide range of remote sensing applications, there is a critical need to develop imaging arrays that simultaneously achieve high spatial resolution, high...

  8. Airborne Wide Area Imager for Wildfire Mapping and Detection Project

    Data.gov (United States)

    National Aeronautics and Space Administration — An autonomous airborne imaging system for earth science research, disaster response, and fire detection is proposed. The primary goal is to improve information to...

  9. Gamma-Ray Imager Polarimeter for Solar Flares Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose here to develop the Gamma-Ray Imager/Polarimeter for Solar flares (GRIPS), the next-generation instrument for high-energy solar observations. GRIPS will...

  10. Low-Mass Planar Photonic Imaging Sensor Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose a revolutionary electro-optical (EO) imaging sensor concept that provides a low-mass, low-volume alternative to the traditional bulky optical telescope...

  11. Hyperspectral Foveated Imaging Sensor for Objects Identification and Tracking Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Optical tracking and identification sensors have numerous NASA and non-NASA applications. For example, airborne or spaceborne imaging sensors are used to visualize...

  12. Three-Dimensional Backscatter X-Ray Imaging System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The NASA application requires a system that can generate 3D images of non-metallic material when access is limited to one side of the material. The objective of this...

  13. Three-Dimensional Backscatter X-Ray Imaging System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The overall objective of the proposal is to design, develop and demonstrate a potentially portable Compton x-ray scatter 3D-imaging system by using specially...

  14. Rapid Acquisition Imaging Spectrograph (RAISE) Renewal Proposal Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The optical design of RAISE is based on a new class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance...

  15. High Resolution, Range/Range-Rate Imager Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Visidyne proposes to develop a design for a small, lightweight, high resolution, in x, y, and z Doppler imager to assist in the guidance, navigation and control...

  16. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods...

  17. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods...

  18. Registering aerial video images using the projective constraint.

    Science.gov (United States)

    Jackson, Brian P; Goshtasby, A Ardeshir

    2010-03-01

    To separate object motion from camera motion in an aerial video, consecutive frames are registered at their planar background. Feature points are selected in consecutive frames and those that belong to the background are identified using the projective constraint. Corresponding background feature points are then used to register and align the frames. By aligning video frames at the background and knowing that objects move against the background, a means to detect and track moving objects is provided. Only scenes with planar background are considered in this study. Experimental results show improvement in registration accuracy when using the projective constraint to determine the registration parameters as opposed to finding the registration parameters without the projective constraint.
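The projective constraint can be sketched as follows: background points on the planar scene are related between frames by a single homography, so correspondences with a small reprojection error under that homography are classified as background, while independently moving points violate it. The sketch below fits the homography by the direct linear transform on known background points (in practice a robust estimator such as RANSAC would be used); all numbers are arbitrary test data:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography mapping src points to dst points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    H = np.linalg.svd(np.asarray(rows, float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

def reprojection_error(H, src, dst):
    p = np.column_stack([src, np.ones(len(src))]) @ H.T
    return np.linalg.norm(p[:, :2] / p[:, 2:3] - dst, axis=1)

rng = np.random.default_rng(0)
H_true = np.array([[1.01, 0.02, 3.0],
                   [-0.01, 0.99, -2.0],
                   [1e-4, -1e-4, 1.0]])
bg = rng.uniform(0, 100, size=(20, 2))               # planar background points
h = np.column_stack([bg, np.ones(len(bg))]) @ H_true.T
bg_dst = h[:, :2] / h[:, 2:3]

mover = np.array([[50.0, 50.0]])                     # point on a moving object
mover_dst = mover + np.array([[8.0, -6.0]])          # independent motion

src = np.vstack([bg, mover])
dst = np.vstack([bg_dst, mover_dst])

# Fit on the background correspondences, then test all of them:
# small error satisfies the projective constraint => background.
H = fit_homography(bg, bg_dst)
is_background = reprojection_error(H, src, dst) < 1.0
```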

  19. Image restoration by the method of convex projections: part 2 applications and numerical results.

    Science.gov (United States)

    Sezan, M I; Stark, H

    1982-01-01

    The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
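The projection-onto-convex-sets idea can be demonstrated on a toy 1D problem: alternately project an estimate onto the set of band-limited signals and onto the set of signals consistent with the observed samples. This is a generic POCS sketch (a band-limited interpolation toy, not the paper's 2D restoration experiment):

```python
import numpy as np

n = 256
t = np.arange(n) / n
truth = np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

# Convex set 1: signals band-limited to |k| <= 8.
band = np.abs(np.fft.fftfreq(n, d=1 / n)) <= 8
# Convex set 2: signals agreeing with the data on the observed samples.
rng = np.random.default_rng(1)
known = rng.random(n) < 0.5

x = np.where(known, truth, 0.0)        # initial estimate: observed data
err0 = np.linalg.norm(x - truth)

for _ in range(500):
    X = np.fft.fft(x)
    X[~band] = 0                       # project onto the band-limited set
    x = np.fft.ifft(X).real
    x[known] = truth[known]            # project onto the data-consistent set

err = np.linalg.norm(x - truth)
```

Each step is a projection onto a convex set containing the true signal, so the iteration converges to their intersection; the a priori information (band limit, known samples) enters simply as extra projection steps.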

  20. A MATLAB package for the EIDORS project to reconstruct two-dimensional EIT images.

    Science.gov (United States)

    Vauhkonen, M; Lionheart, W R; Heikkinen, L M; Vauhkonen, P J; Kaipio, J P

    2001-02-01

    The EIDORS (electrical impedance and diffuse optical reconstruction software) project aims to produce a software system for reconstructing images from electrical or diffuse optical data. MATLAB is software used in the EIDORS project for rapid prototyping, graphical user interface construction and image display. We have written a MATLAB package (http://venda.uku.fi/vauhkon/) which can be used for two-dimensional mesh generation, solving the forward problem, and reconstructing and displaying the reconstructed images (resistivity or admittivity). In this paper we briefly describe the mathematical theory on which the codes are based and also give some examples of the capabilities of the package.

  1. Quantitative estimation of brain atrophy and function with PET and MRI two-dimensional projection images

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko [Waseda Univ., Tokyo (Japan). School of Science and Engineering; Toyama, Hinako; Ishii, Kenji; Senda, Michio

    2001-05-01

    The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial images of positron emission tomography (PET) and magnetic resonance imaging (MRI) were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between 2D projection images of MRI and cerebral blood flow (CBF) or {sup 18}F-fluorodeoxyglucose (FDG) PET images, and the sulcus was extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulcus on the 2D-projection MRI, the cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding the sulcus on the 2D-projection PET image, and then the relationship between cerebral atrophy and function was evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or cerebral blood flow was investigated. The method was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young. The mean value and the variance of the cerebral blood flow for the young are greater than those of the aged. The sulci were similarly extracted using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorder of the cerebral cortex. (author)
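The Mollweide method referred to above is the standard equal-area map projection; its auxiliary angle is obtained by solving 2θ + sin 2θ = π sin φ, here with Newton's method. Below is a minimal sketch of the standard cartographic formulae (assuming the paper uses this conventional form):

```python
import numpy as np

def mollweide(lon, lat, radius=1.0, n_iter=25):
    """Equal-area Mollweide projection; lon/lat in radians."""
    lon = np.asarray(lon, float)
    lat = np.asarray(lat, float)
    theta = lat.copy()
    # Newton's method for the auxiliary angle: 2*theta + sin(2*theta) = pi*sin(lat)
    for _ in range(n_iter):
        f = 2 * theta + np.sin(2 * theta) - np.pi * np.sin(lat)
        df = 2 + 2 * np.cos(2 * theta)
        theta -= f / np.maximum(df, 1e-9)   # guard df ~ 0 near the poles
    x = radius * (2 * np.sqrt(2) / np.pi) * lon * np.cos(theta)
    y = radius * np.sqrt(2) * np.sin(theta)
    return x, y

x, y = mollweide([0.0, np.pi / 2, -np.pi], [0.0, np.pi / 4, 0.0])
```

Because the projection preserves area, surface quantities measured on the flattened map (such as the extracted sulcal area) keep their proportions.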

  2. The Multidimensional Integrated Intelligent Imaging project (MI-3)

    Energy Technology Data Exchange (ETDEWEB)

    Allinson, N.; Anaxagoras, T. [Vision and Information Engineering, University of Sheffield (United Kingdom); Aveyard, J. [Laboratory for Environmental Gene Regulation, University of Liverpool (United Kingdom); Arvanitis, C. [Radiation Physics, University College, London (United Kingdom); Bates, R.; Blue, A. [Experimental Particle Physics, University of Glasgow (United Kingdom); Bohndiek, S. [Radiation Physics, University College, London (United Kingdom); Cabello, J. [Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford (United Kingdom); Chen, L. [Electron Optics, Applied Electromagnetics and Electron Optics, University of York (United Kingdom); Chen, S. [MRC Laboratory for Molecular Biology, Cambridge (United Kingdom); Clark, A. [STFC Rutherford Appleton Laboratories (United Kingdom); Clayton, C. [Vision and Information Engineering, University of Sheffield (United Kingdom); Cook, E. [Radiation Physics, University College, London (United Kingdom); Cossins, A. [Laboratory for Environmental Gene Regulation, University of Liverpool (United Kingdom); Crooks, J. [STFC Rutherford Appleton Laboratories (United Kingdom); El-Gomati, M. [Electron Optics, Applied Electromagnetics and Electron Optics, University of York (United Kingdom); Evans, P.M. [Institute of Cancer Research, Sutton, Surrey SM2 5PT (United Kingdom)], E-mail: phil.evans@icr.ac.uk; Faruqi, W. [MRC Laboratory for Molecular Biology, Cambridge (United Kingdom); French, M. [STFC Rutherford Appleton Laboratories (United Kingdom); Gow, J. [Imaging for Space and Terrestrial Applications, Brunel University, London (United Kingdom)] (and others)

    2009-06-01

    MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC), designed for in-pixel intelligence; FPN, designed to develop novel techniques for reducing fixed pattern noise; HDR, designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS, with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS), a novel, stitched LAS; and eLeNA, which develops a range of low-noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.

  3. Semantic markup of sensor capabilities: how simple is too simple?

    Science.gov (United States)

    Rueda-Velasquez, C. A.; Janowicz, K.; Fredericks, J.

    2016-12-01

    Semantics plays a key role in the publication, retrieval, integration, and reuse of observational data across the geosciences. In most cases, one can safely assume that the providers of such data, e.g., individual scientists, understand the observation context in which their data are collected, e.g., the observation procedure used, the sampling strategy, the feature of interest being studied, and so forth. However, can we expect that the same is true for the technical details of the sensors used, and especially the nuanced changes that can impact observations in often unpredictable ways? Should the burden of annotating sensor capabilities, firmware, operation ranges, and so forth really be part of a scientist's responsibility? Ideally, semantic annotations should be provided by the parties that understand these details and have a vested interest in maintaining the data. With manufacturers providing semantically enabled metadata for their sensors and instruments, observations could more easily be annotated and thereby enriched using this information. Unfortunately, today's sensor ontologies and tool chains developed for the Semantic Web community require expertise beyond the knowledge and interest of most manufacturers. Consequently, knowledge engineers need to better understand the sweet spot between simple ontologies/vocabularies and sufficient expressivity, as well as the tools required to enable manufacturers to share data about their sensors. Here, we report on the current results of EarthCube's X-Domes project, which aims to address the questions outlined above.

  4. Optimized acquisition scheme for multi-projection correlation imaging of breast cancer

    Science.gov (United States)

    Chawla, Amarpreet S.; Samei, Ehsan; Saunders, Robert S.; Lo, Joseph Y.; Singh, Swatee

    2008-03-01

    We are reporting the optimized acquisition scheme of multi-projection breast Correlation Imaging (CI) technique, which was pioneered in our lab at Duke University. CI is similar to tomosynthesis in its image acquisition scheme. However, instead of analyzing the reconstructed images, the projection images are directly analyzed for pathology. Earlier, we presented an optimized data acquisition scheme for CI using mathematical observer model. In this article, we are presenting a Computer Aided Detection (CADe)-based optimization methodology. Towards that end, images from 106 subjects recruited for an ongoing clinical trial for tomosynthesis were employed. For each patient, 25 angular projections of each breast were acquired. Projection images were supplemented with a simulated 3 mm 3D lesion. Each projection was first processed by a traditional CADe algorithm at high sensitivity, followed by a reduction of false positives by combining geometrical correlation information available from the multiple images. Performance of the CI system was determined in terms of free-response receiver operating characteristics (FROC) curves and the area under ROC curves. For optimization, the components of acquisition such as the number of projections, and their angular span were systematically changed to investigate which one of the many possible combinations maximized the sensitivity and specificity. Results indicated that the performance of the CI system may be maximized with 7-11 projections spanning an angular arc of 44.8°, confirming our earlier findings using observer models. These results indicate that an optimized CI system may potentially be an important diagnostic tool for improved breast cancer detection.

  5. Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique.

    Science.gov (United States)

    Besharati Tabrizi, Leila; Mahvash, Mehran

    2015-07-01

    An augmented reality system has been developed for image-guided neurosurgery to project images with regions of interest onto the patient's head, skull, or brain surface in real time. The aim of this study was to evaluate system accuracy and to perform the first intraoperative application. Images of segmented brain tumors in different localizations and sizes were created in 10 cases and were projected to a head phantom using a video projector. Registration was performed using 5 fiducial markers. After each registration, the distance of the 5 fiducial markers from the visualized tumor borders was measured on the virtual image and on the phantom. The difference was considered a projection error. Moreover, the image projection technique was intraoperatively applied in 5 patients and was compared with a standard navigation system. Augmented reality visualization of the tumors succeeded in all cases. The mean time for registration was 3.8 minutes (range 2-7 minutes). The mean projection error was 0.8 ± 0.25 mm. There were no significant differences in accuracy according to the localization and size of the tumor. Clinical feasibility and reliability of the augmented reality system could be proved intraoperatively in 5 patients (projection error 1.2 ± 0.54 mm). The augmented reality system is accurate and reliable for the intraoperative projection of images to the head, skull, and brain surface. The ergonomic advantage of this technique improves the planning of neurosurgical procedures and enables the surgeon to use direct visualization for image-guided neurosurgery.

  6. Regularized Iterative Weighted Filtered Back-Projection for Few-View Data Photoacoustic Imaging

    Science.gov (United States)

    Peng, Dong

    2016-01-01

    Photoacoustic imaging is an emerging noninvasive imaging technique with great potential for a wide range of biomedical imaging applications. However, with few-view data the filtered back-projection method creates streak artifacts. In this study, the regularized iterative weighted filtered back-projection method was applied to our photoacoustic imaging of the optical absorption in a phantom from few-view data. This method is based on iterative application of a non-exact 2D FBP. By adding a regularization operation in the iterative loop, the streak artifacts are reduced to a great extent and the convergence properties of the iterative scheme are improved. Results of numerical simulations demonstrated that the proposed method was superior to the iterative FBP method in terms of both accuracy and robustness to noise. The quantitative image evaluation studies have shown that the proposed method outperforms conventional iterative methods. PMID:27594896
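The structure of such an iterative scheme, a back-projection-style update with a regularization operation inside the loop, can be sketched on a toy 1D problem. Everything below is a hedged stand-in: a random matrix plays the forward projector, its transpose plays the non-exact back-projection, and a mild neighbour-smoothing step plays the regularizer; it is not the paper's 2D FBP:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 24                          # few-view: fewer measurements than unknowns
A = rng.normal(size=(m, n)) / np.sqrt(n)   # toy forward (projection) operator
B = A.T                                # stand-in for the non-exact back-projector
x_true = np.zeros(n)
x_true[10:18] = 1.0                    # piecewise-constant "absorber"
y = A @ x_true

def regularize(x, lam=0.02):
    """Regularization step: mild neighbour smoothing inside the loop."""
    return x + lam * (np.roll(x, 1) - 2 * x + np.roll(x, -1))

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    x = x + step * (B @ (y - A @ x))   # iterative back-projection update
    x = regularize(x)                  # regularization inside the iteration

err0 = np.linalg.norm(x_true)          # error of the zero initial image
err = np.linalg.norm(x - x_true)
residual = np.linalg.norm(A @ x - y)
```

The regularization step damps the high-frequency components that, in the imaging setting, appear as streak artifacts, at the cost of a slightly non-zero data residual at the fixed point.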

  7. Image reconstruction for digital breast tomosynthesis (DBT) by using projection-angle-dependent filter functions

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Park, Chulkyu; Cho, Hyosung; Je, Uikyu; Hong, Daeki; Lee, Minsik; Cho, Heemoon; Choi, Sungil; Koo, Yangseo [Yonsei University, Wonju (Korea, Republic of)]

    2014-09-15

    Digital breast tomosynthesis (DBT) is considered in clinics as a standard three-dimensional imaging modality, allowing earlier detection of cancer. It typically acquires only 10-30 projections over a limited angle range of 15-60° with a stationary detector and typically uses a computationally efficient filtered-backprojection (FBP) algorithm for image reconstruction. However, a common FBP algorithm yields poor image quality: the ramp filter eliminates the dc component of the image, causing a loss of the average image value, and the incomplete data produce severe image artifacts. As an alternative, iterative reconstruction methods are often used in DBT to overcome these difficulties, even though they are computationally expensive. In this study, as a compromise, we considered a projection-angle-dependent filtering method in which one-dimensional geometry-adapted filter kernels are computed with the aid of a conjugate-gradient method and are incorporated into the standard FBP framework. We implemented the proposed algorithm and performed systematic simulation works to investigate the imaging characteristics. Our results indicate that the proposed method is superior to a conventional FBP method for DBT imaging at a comparable computational cost, while preserving good image homogeneity and edge sharpening with no serious image artifacts.

  8. Impact of the zero-markup drug policy on hospitalisation expenditure in western rural China: an interrupted time series analysis.

    Science.gov (United States)

    Yang, Caijun; Shen, Qian; Cai, Wenfang; Zhu, Wenwen; Li, Zongjie; Wu, Lina; Fang, Yu

    2017-02-01

    To assess the long-term effects of the introduction of China's zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditure after reimbursement. An interrupted time series was used to evaluate the impact of the zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditure after reimbursement at primary health institutions in Fufeng County of Shaanxi Province, western China. Two regression models were developed. Monthly average hospitalisation expenditure and monthly average hospitalisation expenditure after reimbursement in primary health institutions were analysed covering the period 2009 through to 2013. For the monthly average hospitalisation expenditure, the increasing trend slowed after the introduction of the zero-markup drug policy (coefficient = -16.49, P = 0.009). For the monthly average hospitalisation expenditure after reimbursement, the increasing trend slowed after the introduction of the zero-markup drug policy (coefficient = -10.84, P = 0.064), and a significant decrease in the intercept was noted after the second intervention of changes in reimbursement schemes of the new rural cooperative medical insurance (coefficient = -220.64). These results indicate that expenditure growth slowed after the introduction of the zero-markup drug policy in western China. However, hospitalisation expenditure and hospitalisation expenditure after reimbursement were still increasing. More effective policies are needed to prevent these costs from continuing to rise. © 2016 John Wiley & Sons Ltd.

  9. Versatile, Compact, Low-Cost, MEMS-Based Image Stabilization for Imaging Sensor Performance Enhancement Project

    Data.gov (United States)

    National Aeronautics and Space Administration — LW Microsystems proposes to develop a compact, low-cost image stabilization system suitable for use with a wide range of focal-plane imaging systems in remote...

  10. Robust automatic detection and removal of fiducial projections in fluoroscopy images: an integrated solution.

    Science.gov (United States)

    Zhang, Xuan; Zheng, Guoyan

    2008-01-01

    Automatic detection and removal of fiducial projections in fluoroscopy images is an essential prerequisite for fluoroscopy-based navigation and image-based 3D-2D registration. This paper presents an integrated solution to fulfill this task. A custom-designed calibration cage with a two-plane pattern of fiducials is utilized in our solution. The cage is attached to the C-arm image intensifier, and the projections of the fiducials are automatically detected and removed by an on-line algorithm consisting of the following six steps: image binarization, connected-component labeling, region classification, adaptive template matching, shape analysis, and fiducial projection removal. A similarity measure previously proposed for image-based 3D-2D registration is employed in the adaptive template matching to improve the accuracy of the detection. Shape analysis based on the geometrical constraints satisfied by the fiducials in the calibration cage is used to further improve the robustness of the detection. An image inpainting technique based on the fast marching method for level set applications is used to remove the detected fiducial projections. Our in vitro experiments show an average execution time of 4 seconds on a Pentium IV machine, a zero false-detection rate, a miss-detection rate of 1.6 ± 2.3%, and a sub-pixel localization error.

  11. Image reconstruction of simulated specimens using convolution back projection

    Directory of Open Access Journals (Sweden)

    Mohd. Farhan Manzoor

    2001-04-01

    This paper reports on the reconstruction of cross-sections of composite structures. The convolution back projection (CBP) algorithm has been used to capture the attenuation field over the specimen. Five different test cases have been taken up for evaluation. These cases represent varying degrees of complexity. In addition, the role of filters on the nature of the reconstruction errors has also been discussed. Numerical results obtained in the study reveal that the CBP algorithm is a useful tool for qualitative as well as quantitative assessment of composite regions encountered in engineering applications.
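    The CBP algorithm itself, ramp-filtering each projection and then smearing the filtered projections back across the image, can be demonstrated on a phantom with a known analytic sinogram. The centered unit-density disk below (whose parallel-beam projection is 2·sqrt(R² − t²) at every angle), the grid sizes, and the FFT-based Ram-Lak filter are illustrative choices, not the specimens or filters from the study.

```python
import numpy as np

# Analytic sinogram of a centered disk: p(t) = 2*sqrt(R^2 - t^2), any angle
n_det, n_ang, R = 128, 90, 0.4
t = np.linspace(-1, 1, n_det)
proj = np.where(np.abs(t) < R, 2 * np.sqrt(np.maximum(R**2 - t**2, 0)), 0.0)

# Ramp (Ram-Lak) filtering of the projection in the Fourier domain
freqs = np.fft.fftfreq(n_det, d=t[1] - t[0])
filtered = np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)))

# Back-project the filtered projection over all view angles
ng = 64
xs = np.linspace(-1, 1, ng)
X, Y = np.meshgrid(xs, xs)
recon = np.zeros((ng, ng))
for theta in np.linspace(0, np.pi, n_ang, endpoint=False):
    tt = X * np.cos(theta) + Y * np.sin(theta)   # detector coordinate
    recon += np.interp(tt.ravel(), t, filtered).reshape(ng, ng)
recon *= np.pi / n_ang                           # angular quadrature weight

inside = recon[X**2 + Y**2 < (0.8 * R) ** 2].mean()
outside = recon[X**2 + Y**2 > (1.5 * R) ** 2].mean()
```

The reconstructed attenuation is close to the disk's unit density inside the support and near zero outside, which is the qualitative behavior the study evaluates across its test cases.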

  12. Contrast and resolution analysis of angular domain imaging for iterative optical projection tomography reconstruction

    Science.gov (United States)

    Ng, Eldon; Vasefi, Fartash; Kaminska, Bozena; Chapman, Glenn H.; Carson, Jeffrey J. L.

    2010-02-01

    Angular domain imaging (ADI) generates a projection image of an attenuating target within a turbid medium by employing a silicon micro-tunnel array to reject photons that have deviated from the initial propagation direction. In this imaging method, image contrast and resolution are position dependent. The objective of this work was first to characterize the contrast and resolution of the ADI system at a multitude of locations within the imaging plane. The second objective was to compare the reconstructions of different targets using filtered back projection and iterative reconstruction algorithms. The ADI system consisted of a diode laser (808 nm, CW, ThorLabs) with a beam expander for illumination of the sample cuvette. At the opposite side of the cuvette, an Angular Filter Array (AFA) of 80 μm x 80 μm square-shaped tunnels 1 cm in length was used to reject the transmitted scattered light. Image-forming light exiting the AFA was detected by a linear CCD (16-bit, Mightex). Our approach was to translate two point attenuators (0.5 mm graphite rod, 0.368 mm drill bit) submerged in a 0.6% Intralipid™ dilution using a SCARA robot (Epson E2S351S) to cover 37x37 and 45x45 matrices of grid points in the imaging plane within the 1 cm path length sample cuvette. At each grid point, a one-dimensional point-spread distribution was collected and system contrast and resolution were measured. Then, the robot was used to rotate the target to collect projection images of various objects at several projection angles, which were reconstructed with a filtered back projection and an iterative reconstruction algorithm.

  13. Chemical markup, XML, and the World Wide Web. 4. CML schema.

    Science.gov (United States)

    Murray-Rust, Peter; Rzepa, Henry S

    2003-01-01

    A revision to Chemical Markup Language (CML) is presented in an XML Schema-compliant form, modularized into nonchemical and chemical components. STMML contains generic concepts for numeric data and scientific units, while CMLCore retains most of the chemical functionality of the original CML 1.0 and extends it by adding handlers for chemical substances, extended bonding models, and names. We propose extension via new namespaced components for chemical queries, reactions, spectra, and computational chemistry. The conformance with XML Schemas allows much greater control over datatyping, document validation, and structure.
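    The kind of document CMLCore describes can be illustrated with a small molecule fragment parsed from Python. The element names (molecule, atomArray, atom, bondArray, bond) and the CML namespace follow the general flavour of CML, but this snippet is an illustrative sketch and has not been validated against the actual CML Schema.

```python
import xml.etree.ElementTree as ET

# Illustrative CML-style fragment (water); not validated against the schema
cml = """<molecule id="m1" xmlns="http://www.xml-cml.org/schema">
  <atomArray>
    <atom id="a1" elementType="O"/>
    <atom id="a2" elementType="H"/>
    <atom id="a3" elementType="H"/>
  </atomArray>
  <bondArray>
    <bond atomRefs2="a1 a2" order="1"/>
    <bond atomRefs2="a1 a3" order="1"/>
  </bondArray>
</molecule>"""

root = ET.fromstring(cml)
ns = {"cml": "http://www.xml-cml.org/schema"}
atoms = root.findall(".//cml:atom", ns)       # namespace-aware query
elements = [a.get("elementType") for a in atoms]
```

Because the markup is plain XML, generic tooling (XPath queries, schema validators) applies directly, which is the control over datatyping and validation the abstract refers to.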

  14. cluML: A markup language for clustering and cluster validity assessment of microarray data.

    Science.gov (United States)

    Bolshakova, Nadia; Cunningham, Pádraig

    2005-01-01

    cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as the inability to store multiple clusterings (including biclusterings) and validation results for a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can be used equally well to represent clusterings and validation results for other biomedical and physical data.

  15. The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.

    Science.gov (United States)

    Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi

    2005-04-15

    Bioinformatics resources for glycomics are very poor compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html. Contact: kikuchi@hydra.mki.co.jp.

  16. A Converter from the Systems Biology Markup Language to the Synthetic Biology Open Language.

    Science.gov (United States)

    Nguyen, Tramy; Roehner, Nicholas; Zundel, Zach; Myers, Chris J

    2016-06-17

    Standards are important to synthetic biology because they enable exchange and reproducibility of genetic designs. This paper describes a procedure for converting between two standards: the Systems Biology Markup Language (SBML) and the Synthetic Biology Open Language (SBOL). SBML is a standard for behavioral models of biological systems at the molecular level. SBOL describes structural and basic qualitative behavioral aspects of a biological design. Converting SBML to SBOL enables a consistent connection between behavioral and structural information for a biological design. The conversion process described in this paper leverages Systems Biology Ontology (SBO) annotations to enable inference of a design's qualitative function.
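    The core of an SBO-driven conversion is a lookup from a species' SBO annotation to a structural component type. The sketch below is a minimal illustration of that idea, not the paper's converter: the dictionary entries, the `convert_species` helper, and the plain-dict representation of SBML species and SBOL components are all assumptions made for the example.

```python
# Hypothetical SBO-term -> component-type table (illustrative entries only)
SBO_TO_SBOL_TYPE = {
    "SBO:0000252": "Protein",   # assumed: polypeptide chain -> protein
    "SBO:0000251": "DNA",       # assumed: DNA segment -> DNA component
}

def convert_species(species):
    """Map an SBML-style species dict to an SBOL-style component dict,
    using the species' SBO annotation to infer its structural type."""
    sbo = species.get("sboTerm")
    return {
        "displayId": species["id"],
        "type": SBO_TO_SBOL_TYPE.get(sbo, "GenericComponent"),
    }

comp = convert_species({"id": "TetR", "sboTerm": "SBO:0000252"})
fallback = convert_species({"id": "unknownSpecies"})
```

A real converter would operate on libSBML/libSBOL objects and cover reactions and interactions as well; the point here is only that SBO annotations carry enough semantics to infer qualitative function during conversion.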

  17. Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.

    Science.gov (United States)

    Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J

    2015-08-21

    In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).

  18. Pharmacometrics Markup Language (PharmML): Opening New Perspectives for Model Exchange in Drug Development.

    Science.gov (United States)

    Swat, M J; Moodie, S; Wimalaratne, S M; Kristensen, N R; Lavielle, M; Mari, A; Magni, P; Smith, M K; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, A C; Kaye, R; Keizer, R; Kloft, C; Kok, J N; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, H B; Parra-Guillen, Z P; Plan, E; Ribba, B; Smith, G; Trocóniz, I F; Yvon, F; Milligan, P A; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N

    2015-06-01

    The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps.

  19. Computational Challenges and Collaborative Projects in the NCI Quantitative Imaging Network.

    Science.gov (United States)

    Farahani, Keyvan; Kalpathy-Cramer, Jayashree; Chenevert, Thomas L; Rubin, Daniel L; Sunderland, John J; Nordstrom, Robert J; Buatti, John; Hylton, Nola

    2016-12-01

    The Quantitative Imaging Network (QIN) of the National Cancer Institute (NCI) conducts research in the development and validation of imaging tools and methods for predicting and evaluating clinical response to cancer therapy. Members of the network examine various imaging and image assessment parameters through network-wide cooperative projects. To use the cooperative power of the network more effectively in computational challenges that benchmark tools and methods, and in collaborative projects that analytically assess imaging technologies, the QIN Challenge Task Force has developed policies and procedures that enhance the value of these activities by providing guidelines and by leveraging NCI resources for their administration and for the dissemination of results. Challenges and Collaborative Projects (CCPs) are further divided into technical and clinical CCPs. As the first NCI network to engage in CCPs, we anticipate a variety of CCPs to be conducted by QIN teams in the coming years. These will aim to benchmark advanced software tools for clinical decision support, explore new imaging biomarkers for therapeutic assessment, and establish consensus on a range of methods and protocols in support of the use of quantitative imaging to predict and assess response to cancer therapy.

  20. Normal Vector Projection Method used for Convex Optimization of Chan-Vese Model for Image Segmentation

    Science.gov (United States)

    Wei, W. B.; Tan, L.; Jia, M. Q.; Pan, Z. K.

    2017-01-01

    The variational level set method is one of the main methods of image segmentation. However, because signed distance functions used as level sets must be maintained through numerical remedies or additional techniques during the evolution, it is not very efficient. In this paper, a normal vector projection method for image segmentation using the Chan-Vese model is proposed. An equivalent formulation of the Chan-Vese model is obtained by exploiting the properties of binary level set functions and the concept of convex relaxation. A threshold method and a projection formula are applied in the implementation. This avoids the above problems and yields a globally optimal solution. Experimental results on both synthetic and real images validate the proposed normal vector projection method and show advantages over traditional algorithms in terms of computational efficiency.
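    The convex-relaxation idea can be sketched as projected gradient descent on a relaxed indicator u ∈ [0,1]: descend on the Chan-Vese region term, clip back into [0,1] (the projection), and threshold at 0.5 for the final segmentation. This is a simplified sketch, not the paper's algorithm: the region means are fixed rather than updated, and a Laplacian smoothness term stands in for the total-variation regularizer to keep the code short.

```python
import numpy as np

rng = np.random.default_rng(0)
gt = np.zeros((32, 32), dtype=bool)          # ground-truth bright square
gt[8:24, 8:24] = True
img = gt.astype(float) + 0.1 * rng.standard_normal((32, 32))

c1, c2 = 1.0, 0.0                            # assumed fixed region means
r = (img - c1) ** 2 - (img - c2) ** 2        # pointwise Chan-Vese region term

u = np.full(img.shape, 0.5)                  # relaxed indicator in [0, 1]
dt, lam = 0.2, 1.0
for _ in range(200):
    # Laplacian used as a simple smoothness surrogate for TV in this sketch
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    u = np.clip(u - dt * (r - lam * lap), 0.0, 1.0)   # gradient step + projection

seg = u > 0.5                                # final thresholding step
```

Because the relaxed energy is convex in u, the clip-and-threshold scheme does not depend on the initialization, which is the global-optimality property the abstract claims.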

  1. The Italian project for a proton imaging device

    Science.gov (United States)

    Cirrone, G. A. P.; Candiano, G.; Cuttone, G.; Lo Nigro, S.; Lo Presti, D.; Randazzo, N.; Sipala, V.; Russo, M.; Aiello, S.; Bruzzi, M.; Menichelli, D.; Scaringella, M.; Miglio, S.; Bucciolini, M.; Talamonti, C.; Pallotta, S.

    2007-06-01

    Proton Computed Tomography (pCT) is a new imaging technique based on the use of high-energy proton beams (200-250 MeV) in place of the commonly adopted X-ray CT. pCT was first proposed in the 1960s, but only now, with new proton therapy centers continuing to be established around the world, is interest in it growing. The use of protons for tomographic images can, in fact, greatly enhance the quality of a proton therapy treatment, both in patient positioning and in the accuracy of the dose calculation for the treatment planning phase. In this paper, after a brief introduction to pCT principles, the main hardware and software characteristics of a first pCT prototype under development by our group (the Italian PRIMA collaboration) are presented. The role of Monte Carlo simulation in the development, using the GEANT4 simulation toolkit, is also emphasized.

  2. The Italian project for a proton imaging device

    Energy Technology Data Exchange (ETDEWEB)

    Cirrone, G.A.P. [Laboratori Nazionali del Sud - National Institute for Nuclear Physics, Catania (Italy)]. E-mail: cirrone@lns.infn.it; Candiano, G. [Laboratori Nazionali del Sud - National Institute for Nuclear Physics, Catania (Italy); Cuttone, G. [Laboratori Nazionali del Sud - National Institute for Nuclear Physics, Catania (Italy); Lo Nigro, S. [Physics and Astronomy Department, University of Catania, Catania (Italy); Lo Presti, D. [Physics and Astronomy Department, University of Catania, Catania (Italy); Randazzo, N. [Physics and Astronomy Department, University of Catania, Catania (Italy); Sipala, V. [Physics and Astronomy Department, University of Catania, Catania (Italy); Russo, M. [Physics and Astronomy Department, University of Catania, Catania (Italy); Aiello, S. [Physics and Astronomy Department, University of Catania, Catania (Italy); Bruzzi, M. [Energetic Department, University of Florence, Florence (Italy); Menichelli, D. [Energetic Department, University of Florence, Florence (Italy); Scaringella, M. [Energetic Department, University of Florence, Florence (Italy); Miglio, S. [Energetic Department, University of Florence, Florence (Italy); Bucciolini, M. [University of Florence and INFN, Dipartimento di Fisiopatologia Clinica, Florence (Italy); Talamonti, C. [University of Florence and INFN, Dipartimento di Fisiopatologia Clinica, Florence (Italy); Pallotta, S. [University of Florence and INFN, Dipartimento di Fisiopatologia Clinica, Florence (Italy)

    2007-06-11

    Proton Computed Tomography (pCT) is a new imaging technique based on the use of high-energy proton beams (200-250 MeV) in place of the commonly adopted X-ray CT. pCT was first proposed in the 1960s, but only now, with new proton therapy centers continuing to be established around the world, is interest in it growing. The use of protons for tomographic images can, in fact, greatly enhance the quality of a proton therapy treatment, both in patient positioning and in the accuracy of the dose calculation for the treatment planning phase. In this paper, after a brief introduction to pCT principles, the main hardware and software characteristics of a first pCT prototype under development by our group (the Italian PRIMA collaboration) are presented. The role of Monte Carlo simulation in the development, using the GEANT4 simulation toolkit, is also emphasized.

  3. The Infrared Imaging Surveyor (Iris) Project: Astro-F

    Science.gov (United States)

    Shibai, H.

    IRIS (Infrared Imaging Surveyor) is the first Japanese satellite dedicated solely to infrared astronomy. The telescope has a 70-cm aperture and is cooled down to 6 K with superfluid helium assisted by two-stage Stirling-cycle coolers. Two instruments are mounted on the focal plane: the InfraRed Camera (IRC) and the Far-Infrared Surveyor (FIS). IRC is a near- and mid-infrared camera for deep imaging surveys in the wavelength region from 2 to 25 microns. FIS is a far-infrared instrument for a whole-sky survey in the wavelength region from 50 to 200 microns. Diffraction-limited spatial resolution is achieved except in the shortest waveband. The point-source sensitivity and the survey coverage are significantly improved compared to previous missions. The primary scientific objective is to investigate the birth and evolution of galaxies in the early universe through surveys of young normal galaxies and starburst galaxies. IRIS will be launched by a Japanese M-V rocket into a sun-synchronous orbit, in which the cooled telescope can avoid the huge emissions from the Sun and the Earth. The expected hold time of the superfluid helium is more than one year. After the helium is consumed, near-infrared observations can be continued with the mechanical coolers.

  4. Defense Advanced Research Projects Agency (DARPA) Agent Markup Language Computer Aided Knowledge Acquisition

    Science.gov (United States)

    2005-06-01

    ...<rdfs:subClassOf rdf:resource="#ItalianTank"/></owl:Class>
    <owl:Class rdf:ID="CentauroB1">
      <rdfs:comment>Centauro B-1</rdfs:comment>
      <rdfs:label>Centauro B-1</rdfs:label>
      <rdfs:subClassOf rdf:resource="#LightTankRecon"/>
    </owl:Class>
    <owl:Class rdf:ID="M1985Lighttank...

  5. Joint Projection Filling method for occlusion handling in Depth-Image-Based Rendering

    OpenAIRE

    Jantet, Vincent; Guillemot, Christine; Morin, Luce

    2011-01-01

    This paper addresses the disocclusion problem which may occur when using Depth-Image-Based Rendering (DIBR) techniques in 3DTV and Free-Viewpoint TV applications. A new DIBR technique is proposed, which combines three methods: a Joint Projection Filling (JPF) method to handle disocclusions in synthesized depth maps; a backward projection to synthesize virtual views; and a full-Z depth-aided inpainting to fill in disoccluded areas in textures. The JPF method performs th...

  6. New Watermarking Scheme for Security and Transmission of Medical Images for PocketNeuro Project

    OpenAIRE

    M.S. Bouhlel; J. C. Lapayre; C. Chemak

    2007-01-01

    We describe a new watermarking system for medical information security and mobile-phone terminal adaptation for the PocketNeuro project. The latter term refers to a project created for the service of neurological diseases. It consists of transmitting information about patients ("Desk of Patients") to a doctor's mobile phone when he is visiting or examining his patient. This system is capable of embedding medical information inside diagnostic images for security purposes. Our system applies JPEG Co...

  7. Results Of The IMAGES Project 1986-1989; Facts And Fallacies

    Science.gov (United States)

    Ottes, Fenno P.; de Valk, Jan P.; Lodder, Herman; Stut, W. J.; van der Horst-Bruinsma, I.; Hofland, Paul L.; van Poppel, Bas M.; Ter Haar Romeny, Bart M.; Bakker, Albert R.

    1989-05-01

    A concise overview is presented of the results of the total IMAGIS (IMAGe Information System) research as carried out by BAZIS. This paper is intended as a continuation of the IMAGIS presentation at the 'Dutch PACS session' during the SPIE Medical Imaging II Conference 1988. That session was jointly organized by the Utrecht University Hospital (AZU), BAZIS, and Philips Medical Systems, the partners within the Dutch PACS project. The HIS-PACS coupling/integration project has resulted in a HIS-PACS coupling that is used to transfer data from the BAZIS HIS to the prototype Philips PACS. The modelling and simulation project has resulted in a modelling and simulation package (MIRACLES), which has been used to suggest performance improvements of existing and future PACSs. To support the technology assessment project, a PC program has been developed that calculates the financial consequences of the introduction of a PACS (CAPACITY). The diagnostic image quality evaluation project has provided research protocols and a software package (FEASIBLE) that have been used as an aid in executing observer performance studies. A software package (FRACTALS) has been developed so that a standard computer (IBM RT PC) can be used as a simple image workstation for radiological research. Furthermore, a number of general issues concerning the development, acceptance, and introduction of PACS are discussed.

  8. Estimating High Frequency Energy Radiation of Large Earthquakes by Image Deconvolution Back-Projection

    Science.gov (United States)

    Wang, Dun; Takeuchi, Nozomu; Kawakatsu, Hitoshi; Mori, Jim

    2017-04-01

    With the recent establishment of regional dense seismic arrays (e.g., Hi-net in Japan, USArray in North America), advanced digital data processing has enabled improvement of back-projection methods, which have become popular and are widely used to track the rupture process of moderate to large earthquakes. Back-projection methods can be classified into two groups, one using time-domain analyses and the other frequency-domain analyses, with minor technical differences within each group. Here we focus on back-projection performed in the time domain using seismic waveforms recorded at teleseismic distances (30-90 degrees). For the standard back-projection (Ishii et al., 2005), teleseismic P waves recorded on the vertical components of a dense seismic array are analyzed. Since seismic arrays have limited resolution and several assumptions are made (e.g., that the observed waveforms contain only direct P waves and that every trace has an identical waveform), the final images from back-projection show the stacked amplitudes (or correlation coefficients), which are often smeared in both the time and space domains. Although it might not be difficult to reveal the overall source process for a giant seismic source such as the 2004 Mw 9.0 Sumatra earthquake, where the source extent is about 1400 km (Ishii et al., 2005; Krüger and Ohrnberger, 2005), there are more problems in imaging the detailed processes of earthquakes with smaller source dimensions, such as an M 7.5 earthquake with a source extent of 100-150 km. For smaller earthquakes, it is more difficult to resolve the spatial distribution of the radiated energy. We developed a new inversion method, Image Deconvolution Back-Projection (IDBP), to determine the sources of high-frequency energy radiation by linear inversion of observed images from a back-projection approach. The observed back-projection image for multiple sources is considered as a convolution of the image of the true radiated energy and the array response for a
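    The time-domain back-projection that IDBP builds on can be sketched with a toy 2-D geometry: shift each station's record by the travel time from a candidate source and stack, so the stack peaks where the shifts align. The station layout, wave speed, Gaussian source pulse, and grid below are all illustrative assumptions, not real array data.

```python
import numpy as np

# Toy geometry: 11 stations on a line, candidate sources on a 2-D grid
v = 3.0                                            # wave speed (km/s), assumed
stations = np.array([[x, 0.0] for x in np.linspace(-50, 50, 11)])
true_src = np.array([5.0, 30.0])

# Synthetic records: a Gaussian pulse delayed by the true travel time
dt_s = 0.05
t_axis = np.arange(0.0, 40.0, dt_s)
def pulse(t0):
    return np.exp(-((t_axis - t0) / 0.5) ** 2)
records = np.array([pulse(np.linalg.norm(true_src - s) / v) for s in stations])

# Back-projection: sample each record at the candidate travel time and stack
xs = np.linspace(-20, 20, 41)
ys = np.linspace(10, 50, 41)
stack = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        cand = np.array([x, y])
        total = 0.0
        for rec, st in zip(records, stations):
            tt = np.linalg.norm(cand - st) / v     # candidate travel time
            idx = int(round(tt / dt_s))
            if idx < len(t_axis):
                total += rec[idx]
        stack[iy, ix] = total

iy, ix = np.unravel_index(np.argmax(stack), stack.shape)
best = (xs[ix], ys[iy])                            # stack maximum location
```

The smearing the abstract mentions is visible in `stack` as elevated values around the peak; IDBP's contribution is to deconvolve that array response from such images.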

  9. An adaptive nonlocal filtering for low-dose CT in both image and projection domains

    Directory of Open Access Journals (Sweden)

    Yingmei Wang

    2015-04-01

    An important problem in low-dose CT is the image quality degradation caused by photon starvation. Many algorithms in the sinogram domain or image domain have been proposed to solve this problem. In view of the strong self-similarity of the sinusoid-like strip data in the sinogram space, we propose a novel non-local filtering whose averaging weights are related to both the image reconstructed by filtered backprojection (FBP) from restored sinogram data and the image reconstructed directly by FBP from the noisy sinogram data. In the process of sinogram restoration, we apply a non-local method whose smoothness parameters are adapted to the variance of the noisy sinogram data, which makes the method effective for noise reduction in the sinogram domain. Simulation experiments show that the proposed method, by filtering in both the image and projection domains, performs better in noise reduction and detail preservation in reconstructed images.
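    The non-local averaging at the heart of this approach weights samples by the similarity of their surrounding neighbourhoods rather than their distance. The 1-D toy below shows that principle on a noisy sine wave; the patch size, search window, and filtering parameter `h` are illustrative choices, and the sketch omits the paper's sinogram-specific adaptation.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)

def nl_means_1d(x, half=3, search=20, h=0.5):
    """Toy non-local means: each output sample is a weighted average of
    nearby samples, weighted by how similar their patches are."""
    n = len(x)
    pad = np.pad(x, half, mode="reflect")
    patches = np.array([pad[i:i + 2 * half + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        d2 = np.mean((patches[lo:hi] - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / h ** 2)          # similarity-based weights
        out[i] = np.sum(w * x[lo:hi]) / np.sum(w)
    return out

den = nl_means_1d(noisy)
```

In the paper's setting the same weighting is computed from two FBP reconstructions instead of raw neighbourhoods, exploiting the sinusoid-like self-similarity of sinogram strips.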

  10. Non-iterative phase hologram computation for low speckle holographic image projection

    OpenAIRE

    Ürey, Hakan; Ulusoy, Erdem; Mengü, Deniz

    2016-01-01

    Phase-only spatial light modulators (SLMs) are widely used in holographic display applications, including holographic image projection (HIP). Most phase computer-generated hologram (CGH) calculation algorithms have an iterative structure with a high computational load and are also prone to speckle noise, as a result of the random phase terms applied to the desired images to mitigate the encoding noise. In this paper, we present a non-iterative algorithm, where simple Discrete Fourier Transfo...

  11. Standard protocol for exchange of health-checkup data based on SGML: the Health-checkup Data Markup Language (HDML).

    Science.gov (United States)

    Sugimori, H; Yoshida, K; Hara, S; Furumi, K; Tofukuji, I; Kubodera, T; Yoda, T; Kawai, M

    2002-01-01

    To develop a health/medical data interchange model for efficient electronic exchange of data among health-checkup facilities. A Health-checkup Data Markup Language (HDML) was developed on the basis of the Standard Generalized Markup Language (SGML), and a feasibility study was carried out, involving data exchange between two health-checkup facilities. The structure of HDML is described. The transfer of numerical laboratory data, summary findings, and health-status assessments was successful. HDML is an improvement for laboratory data exchange. Further work has to address the exchange of qualitative and textual data.

  12. Photoacoustic projection imaging using a 64-channel fiber optic detector array

    Science.gov (United States)

    Bauer-Marschallinger, Johannes; Felbermayer, Karoline; Bouchal, Klaus-Dieter; Veres, Istvan A.; Grün, Hubert; Burgholzer, Peter; Berer, Thomas

    2015-03-01

    In this work we present photoacoustic projection imaging with an array of 64 integrating line detectors, which average the pressure over cylindrical surfaces. For imaging, the line detectors are arranged parallel to each other on a cylindrical surface surrounding a specimen. Thereby, the three-dimensional imaging problem is reduced to a two-dimensional problem, facilitating projection imaging. After acquisition of a dataset of pressure signals, a two-dimensional photoacoustic projection image is reconstructed. The 64-channel line detector array is realized using optical fibers that form part of interferometers. The parts of the interferometers used to detect the ultrasonic pressure waves consist of graded-index polymer optical fibers (POFs), which exhibit better sensitivity than standard glass optical fibers. Ultrasonic waves impinging on the POFs change the phase of light in the fiber core due to the strain-optic effect. These phase shifts, representing the pressure signals, are demodulated using high-bandwidth balanced photo-detectors. The 64 detectors are optically multiplexed to 16 detection channels, allowing fast imaging. Results are shown for a Rhodamine B dyed microsphere.

  13. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    Science.gov (United States)

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    Mechanism models for primary organic reactions, encoding the structural fragments undergoing substitution, addition, elimination and rearrangement, are developed. In the proposed models, every structural component of the mechanistic pathways is represented with a flexible, fragment-based markup technique in XML syntax. A significant feature of the system is the encoding of electron movements along with the other components, such as charges, partial charges, half-bonded species, lone-pair electrons, free radicals and reaction arrows, needed for a complete representation of a reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction schemes are visualized as 2D graphics in a browser by converting them into SVG documents, enabling the layouts conventionally expected by chemists. An automatic representation of the complex patterns of reaction mechanisms is achieved by reusing the knowledge in chemical ontologies and developing artificial-intelligence components in terms of axioms.

  14. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever-increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
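    The structure described above (models, pre-simulation modifications, simulations, tasks) can be sketched by generating a minimal SED-ML-like document with Python's standard library. Element and attribute names follow SED-ML conventions, but the SBML source file, parameter target and values are hypothetical and the output is not schema-validated.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of the parts a SED-ML file ties together: which model to
# use, how to modify it before simulating, and which simulation to run.
# The SBML source file and the parameter target are hypothetical.
root = ET.Element("sedML", {"level": "1", "version": "2"})

models = ET.SubElement(root, "listOfModels")
model = ET.SubElement(models, "model", {
    "id": "model1",
    "language": "urn:sedml:language:sbml",
    "source": "oscillator.xml",  # hypothetical SBML model file
})
changes = ET.SubElement(model, "listOfChanges")
ET.SubElement(changes, "changeAttribute", {  # modify the model pre-simulation
    "target": "/sbml/model/listOfParameters/parameter[@id='k1']/@value",
    "newValue": "0.5",
})

sims = ET.SubElement(root, "listOfSimulations")
ET.SubElement(sims, "uniformTimeCourse", {
    "id": "sim1", "initialTime": "0", "outputStartTime": "0",
    "outputEndTime": "100", "numberOfPoints": "1000",
})

tasks = ET.SubElement(root, "listOfTasks")
ET.SubElement(tasks, "task", {  # bind the model to the simulation
    "id": "task1", "modelReference": "model1", "simulationReference": "sim1",
})

xml_text = ET.tostring(root, encoding="unicode")
```

    A real SED-ML file would additionally list data generators and outputs describing which results to extract and how to present them.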

  15. SBRML: a markup language for associating systems biology data with models.

    Science.gov (United States)

    Dada, Joseph O; Spasić, Irena; Paton, Norman W; Mendes, Pedro

    2010-04-01

    Research in systems biology is carried out through a combination of experiments and models. Several data standards have been adopted for representing models (Systems Biology Markup Language) and various types of relevant experimental data (such as FuGE and those of the Proteomics Standards Initiative). However, until now, there has been no standard way to associate a model and its entities to the corresponding datasets, or vice versa. Such a standard would provide a means to represent computational simulation results as well as to frame experimental data in the context of a particular model. Target applications include model-driven data analysis, parameter estimation, and sharing and archiving model simulations. We propose the Systems Biology Results Markup Language (SBRML), an XML-based language that associates a model with several datasets. Each dataset is represented as a series of values associated with model variables, and their corresponding parameter values. SBRML provides a flexible way of indexing the results to model parameter values, which supports both spreadsheet-like data and multidimensional data cubes. We present and discuss several examples of SBRML usage in applications such as enzyme kinetics, microarray gene expression and various types of simulation results. The XML Schema file for SBRML is available at http://www.comp-sys-bio.org/SBRML under the Academic Free License (AFL) v3.0.

  16. Sfm_georef: Automating image measurement of ground control points for SfM-based projects

    Science.gov (United States)

    James, Mike R.

    2016-04-01

    Deriving accurate DEM and orthomosaic image products from UAV surveys generally involves the use of multiple ground control points (GCPs). Here, we demonstrate the automated collection of GCP image measurements for SfM-MVS processed projects, using sfm_georef software (James & Robson, 2012; http://www.lancaster.ac.uk/staff/jamesm/software/sfm_georef.htm). Sfm_georef was originally written to provide geo-referencing procedures for SfM-MVS projects. It has now been upgraded with a 3-D patch-based matching routine suitable for automating GCP image measurement in both aerial and ground-based (oblique) projects, with the aim of reducing the time required for accurate geo-referencing. Sfm_georef is compatible with a range of SfM-MVS software and imports the relevant files that describe the image network, including camera models and tie points. 3-D survey measurements of ground control are then provided, either for natural features or artificial targets distributed over the project area. Automated GCP image measurement is manually initiated through identifying a GCP position in an image by mouse click; the GCP is then represented by a square planar patch in 3-D, textured from the image and oriented parallel to the local topographic surface (as defined by the 3-D positions of nearby tie points). Other images are then automatically examined by projecting the patch into the images (to account for differences in viewing geometry) and carrying out a sub-pixel normalised cross-correlation search in the local area. With two or more observations of a GCP, its 3-D co-ordinates are then derived by ray intersection. With the 3-D positions of three or more GCPs identified, an initial geo-referencing transform can be derived to relate the SfM-MVS co-ordinate system to that of the GCPs. Then, if GCPs are symmetric and identical, image texture from one representative GCP can be used to search automatically for all others throughout the image set. Finally, the GCP observations can be
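    The matching step described above, a normalised cross-correlation search for a textured patch in other images, can be illustrated with a whole-pixel sketch (the actual software works with projected, perspective-corrected patches and sub-pixel refinement; the image, patch and search parameters here are synthetic).

```python
import numpy as np

def ncc(template, window):
    """Normalised cross-correlation of a template against one candidate window."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

def search_patch(image, template, centre, radius):
    """Scan a (2*radius+1)^2 neighbourhood around `centre` for the best NCC
    score -- a whole-pixel stand-in for the sub-pixel search in the text."""
    th, tw = template.shape
    best_score, best_pos = -1.0, None
    cy, cx = centre
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            win = image[y:y + th, x:x + tw]
            if win.shape != template.shape:
                continue
            s = ncc(template, win)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score

# Plant a 5x5 target patch in a noisy synthetic image and recover its location.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (40, 40))
target = rng.normal(0.0, 1.0, (5, 5))
img[20:25, 17:22] += target
pos, score = search_patch(img, target, centre=(19, 18), radius=4)
```

    With two or more such matches of the same GCP in different images, the 3-D position follows by ray intersection, as the text describes.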

  17. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    Science.gov (United States)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and the number of cells per projection grow, indicating that fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
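    The flavour of the problem, finding the image that matches given projection data while maximizing entropy, can be shown on a toy case. The sketch below uses simple multiplicative updates (iterative proportional fitting, as in MART-style tomography) on a 2x2 image whose row and column sums are the "projections"; it illustrates the reconstruction problem only, not the paper's Fenchel-duality and multigrid method.

```python
import numpy as np

# Toy maximum-entropy reconstruction from projections: a 2x2 image whose
# row sums and column sums are the projection data. The multiplicative
# rescaling steps converge to the data-consistent image of maximum
# Boltzmann-Shannon entropy.
rows = np.array([3.0, 1.0])  # row-sum projections
cols = np.array([2.0, 2.0])  # column-sum projections

x = np.ones((2, 2))          # flat initial guess (maximal entropy)
for _ in range(100):
    x *= (rows / x.sum(axis=1))[:, None]  # rescale to match row sums
    x *= (cols / x.sum(axis=0))[None, :]  # rescale to match column sums
```

    For consistent marginals the fixed point is the outer product of the row and column data divided by the total mass, the classic maximum-entropy solution.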

  18. Remote Imaging Applied to Schistosomiasis Control: The Anning River Project

    Science.gov (United States)

    Seto, Edmund Y. W.; Maszle, Don R.; Spear, Robert C.; Gong, Peng

    1997-01-01

    The use of satellite imaging to remotely detect areas of high risk for transmission of infectious disease is an appealing prospect for large-scale monitoring of these diseases. The detection of large-scale environmental determinants of disease risk, often called landscape epidemiology, has been motivated by several authors (Pavlovsky 1966; Meade et al. 1988). The basic notion is that large-scale factors such as population density, air temperature, hydrological conditions, soil type, and vegetation can determine in a coarse fashion the local conditions contributing to disease vector abundance and human contact with disease agents. These large-scale factors can often be remotely detected by sensors or cameras mounted on satellite or aircraft platforms and can thus be used in a predictive model to mark high risk areas of transmission and to target control or monitoring efforts. A review of satellite technologies for this purpose was recently presented by Washino and Wood (1994) and Hay (1997) and Hay et al. (1997).

  19. Smart Images in a Web 2.0 World: The Virtual Astronomy Multimedia Project (VAMP)

    Science.gov (United States)

    Hurt, R. L.; Christensen, L. L.; Gauthier, A.; Wyatt, R.

    2008-06-01

    High quality astronomical images, accompanied by rich caption and background information, abound on the web and yet are notoriously difficult to locate efficiently using common search engines. "Flat" searches can return dozens of hits for a single popular image but miss equally important related images from other observatories. The Virtual Astronomy Multimedia Project (VAMP) is developing the architecture for an online index of astronomical imagery and video that will simplify access and provide a service around which innovative applications can be developed (e.g. digital planetariums). Current progress includes design prototyping around existing Astronomy Visualization Metadata (AVM) standards. Growing VAMP partnerships include a cross-section of observatories, data centers, and planetariums.

  20. Twin image removal in digital in-line holography based on iterative inter-projections

    Science.gov (United States)

    Chen, Bing Kuan; Chen, Tai-Yu; Hung, Shau Gang; Huang, Sheng-Lung; Lin, Jiunn-Yuan

    2016-06-01

    A simple and efficient phase retrieval method based on the iterative inter-projections of the recorded Fourier modulus between two effective holographic planes is developed to eliminate the twin image in digital in-line holography. The proposed algorithm converges stably in phase extraction procedures without requiring any prior knowledge or sophisticated support of the object and is applicable to lensless Gabor and Fourier holography as well as holographic microscopy with imaging lenses. Numerical and experimental results suggest that the spatial resolution enhancement on the reconstructed image can be achieved with this technique due to the capability of recovering the diffraction phases of low-intensity signals.
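    A generic form of this kind of modulus-constrained inter-projection is the Gerchberg-Saxton iteration, sketched below between an object plane (known amplitude) and the Fourier plane (recorded modulus). This is a textbook sketch on synthetic data, not the paper's exact two-plane algorithm.

```python
import numpy as np

# Gerchberg-Saxton-style phase retrieval: alternately enforce the known
# object-plane amplitude and the recorded Fourier-plane modulus. The
# residual Fourier-modulus error is non-increasing over iterations.
rng = np.random.default_rng(1)
n = 32
true_phase = rng.uniform(-np.pi, np.pi, (n, n))
amp = np.ones((n, n))                                        # known object amplitude
target = np.abs(np.fft.fft2(amp * np.exp(1j * true_phase)))  # "recorded" modulus

field = amp * np.exp(1j * rng.uniform(-np.pi, np.pi, (n, n)))  # random start
err0 = np.linalg.norm(np.abs(np.fft.fft2(field)) - target) / np.linalg.norm(target)
for _ in range(200):
    F = np.fft.fft2(field)
    F = target * np.exp(1j * np.angle(F))           # impose recorded modulus
    field = np.fft.ifft2(F)
    field = amp * np.exp(1j * np.angle(field))      # impose known amplitude
err = np.linalg.norm(np.abs(np.fft.fft2(field)) - target) / np.linalg.norm(target)
```

    In the in-line holography setting the twin image appears as a violated constraint in one of the planes, which is why enforcing the recorded modulus on both planes suppresses it.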

  1. Challenges and solutions in the calibration of projection lens pupil-image metrology tools

    Science.gov (United States)

    Slonaker, Steve; Riffel, Bryan; Nishinaga, Hisashi

    2009-03-01

    As imaging requirements and limits continue to be pushed tighter and lower, it has become imperative that accurate and repeatable measurement of the projection lens (PL) pupil be readily available. These measurements are important for setup and adjustment of the illumination distribution, measurement and optimization of the lens aberrations, and verification of lens NA and transmission. Accurate testing of these items is critical during initial installation and setup of a photolithography tool, but it continues to prove useful each time any projection lens pupil-image measurement is made. The basic raw data from any such measurement is in the form of a pixelized 'image' captured by a projection lens pupil microscope. Such images have typically been referred to as pupilgrams [1], and many prior works have reported on their application and analysis [1-3]. Each of these measurements can be affected by errors in the measurement tool used. The error modes can be broadly divided into two distinct groups: uncompensated transmission loss and uncompensated distortion (or remapping) error. For instance, in illuminator measurements, the first will yield intensity error and the second will yield image shape mapping error. These errors may or may not lie exclusively in the optics of the measurement tool, but, regardless of their source, they will propagate through the analysis of the pupilgram images. For this reason, at minimum they must be measured and judged for relative impact, if only to confirm that the errors do not change the conclusions or results. In this paper, we discuss and present methods for measuring the image distortion present in a PL pupil-image microscope. These data are used to build a 'map' of errors vs. position in the lens pupil. The maps then serve as the basis for image-processing-based compensation that can be applied to all subsequent microscope images. A novel, vitally important feature of the technique presented is that it calibrates the distortion of the

  2. Lossless image compression with projection-based and adaptive reversible integer wavelet transforms.

    Science.gov (United States)

    Deever, Aaron T; Hemami, Sheila S

    2003-01-01

    Reversible integer wavelet transforms are increasingly popular in lossless image compression, as evidenced by their use in the recently developed JPEG2000 image coding standard. In this paper, a projection-based technique is presented for decreasing the first-order entropy of transform coefficients and improving the lossless compression performance of reversible integer wavelet transforms. The projection technique is developed and used to predict a wavelet transform coefficient as a linear combination of other wavelet transform coefficients. It yields optimal fixed prediction steps for lifting-based wavelet transforms and unifies many wavelet-based lossless image compression results found in the literature. Additionally, the projection technique is used in an adaptive prediction scheme that varies the final prediction step of the lifting-based transform based on a modeling context. Compared to current fixed and adaptive lifting-based transforms, the projection technique produces improved reversible integer wavelet transforms with superior lossless compression performance. It also provides a generalized framework that explains and unifies many previous results in wavelet-based lossless image compression.
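    The fixed prediction step of a lifting-based transform can be seen in the reversible 5/3 integer wavelet (the lossless transform of JPEG2000), sketched below: each odd sample is predicted as a linear combination of its even neighbours, and the floor() keeps every step exactly invertible on integers. Boundary handling here is simple replication rather than the standard's symmetric extension, and an even-length 1-D signal is assumed.

```python
import numpy as np

def lift_53(x):
    """One level of the reversible 5/3 integer lifting transform
    (even-length 1-D signal; boundaries handled by replication)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd sample minus the average of its even neighbours.
    d = odd - np.floor((even + np.append(even[1:], even[-1])) / 2).astype(np.int64)
    # Update: smooth the even samples using the neighbouring details.
    s = even + np.floor((d + np.append(d[0], d[:-1]) + 2) / 4).astype(np.int64)
    return s, d

def unlift_53(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    even = s - np.floor((d + np.append(d[0], d[:-1]) + 2) / 4).astype(np.int64)
    odd = d + np.floor((even + np.append(even[1:], even[-1])) / 2).astype(np.int64)
    x = np.empty(len(s) + len(d), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

sig = np.array([5, 7, 6, 8, 10, 9, 4, 3])
s, d = lift_53(sig)
assert np.array_equal(unlift_53(s, d), sig)  # lossless round trip
```

    The projection technique of the paper optimizes exactly this kind of prediction step, replacing the fixed neighbour average with coefficients chosen to minimize the first-order entropy of the details.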

  3. The Standardisation and Sequencing of Solar Eclipse Images for the Eclipse Megamovie Project

    CERN Document Server

    Krista, Larisza

    2015-01-01

    We present a new tool, the Solar Eclipse Image Standardisation and Sequencing (SEISS), developed to process multi-source total solar eclipse images by adjusting them to the same standard of size, resolution, and orientation. Furthermore, by analysing the eclipse images we can determine the relative time between the observations and order them to create a movie of the observed total solar eclipse sequence. We successfully processed images taken at the 14 November 2012 total solar eclipse that occurred in Queensland, Australia, and created a short eclipse proto-movie. The SEISS tool was developed for the Eclipse Megamovie Project (EMP: https://www.eclipsemegamovie.org), with the goal of processing thousands of images taken by the public during solar eclipse events. EMP is a collaboration among multiple institutes aiming to engage and advance the public interest in solar eclipses and the science of the Sun-Earth connection.

  4. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as the technique employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  5. Polarization Imaging Apparatus for Cell and Tissue Imaging and Diagnostics Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This work proposes to capitalize on our Phase I success in a novel visible-near infrared Stokes polarization imaging technology based on high performance fast...

  6. Heart Modeling, Computational Physiology and the IUPS Physiome Project

    Science.gov (United States)

    Hunter, Peter J.

    The Physiome Project of the International Union of Physiological Sciences (IUPS) is attempting to provide a comprehensive framework for modelling the human body using computational methods which can incorporate the biochemistry, biophysics and anatomy of cells, tissues and organs. A major goal of the project is to use computational modelling to analyse integrative biological function in terms of underlying structure and molecular mechanisms. To support that goal the project is developing XML markup languages (CellML & FieldML) for encoding models, and software tools for creating, visualizing and executing these models. It is also establishing web-accessible physiological databases dealing with model-related data at the cell, tissue, organ and organ system levels. Two major developments in current medicine are, on the one hand, the much publicised genomics (and soon proteomics) revolution and, on the other, the revolution in medical imaging in which the physiological function of the human body can be studied with a plethora of imaging devices such as MRI, CT, PET, ultrasound, electrical mapping, etc. The challenge for the Physiome Project is to link these two developments for an individual - to use complementary genomic and medical imaging data, together with computational modelling tailored to the anatomy, physiology and genetics of that individual, for patient-specific diagnosis and treatment.

  7. Projection-reflection ultrasound images using PE-CMOS sensor: a preliminary bone fracture study

    Science.gov (United States)

    Lo, Shih-Chung B.; Liu, Chu-Chuan; Freedman, Matthew T.; Mun, Seong-Ki; Kula, John; Lasser, Marvin E.; Lasser, Bob; Wang, Yue Joseph

    2008-03-01

    In this study, we investigated the characteristics of reflective ultrasound images obtained by a CMOS sensor array coated with piezoelectric material (PE-CMOS). The laboratory projection-reflection ultrasound prototype consists of five major components: an unfocused ultrasound transducer, an acoustic beam splitter, an acoustic compound lens, a PE-CMOS ultrasound sensing array (Model I400, Imperium Inc., Silver Spring, MD), and a readout circuit system. The prototype can image strongly reflective materials such as bone and metal. We found that this projection-reflection ultrasound prototype is able to reveal hairline bone fractures with and without intact skin and tissue. In comparison, the image generated from a conventional B-scan ultrasound of the same bone fracture is less conspicuous. When the fracture is observable with the B-scan system, the crack on the surface shows only a single spot of echo due to the scan geometry. The corresponding image produced by the projection-reflection ultrasound system shows a bright blooming strip clearly indicating the fracture on the surface of the solid material. Speckles of the bone structure are also observed in the new ultrasound prototype. A theoretical analysis is provided to link the signals as well as the speckles detected in both systems.

  8. A new compact, cost-efficient concept for underwater range-gated imaging: the UTOFIA project

    Science.gov (United States)

    Mariani, Patrizio; Quincoces, Iñaki; Galparsoro, Ibon; Bald, Juan; Gabiña, Gorka; Visser, Andy; Jónasdóttir, Sigrun; Haugholt, Karl Henrik; Thorstensen, Jostein; Risholm, Petter; Thielemann, Jens

    2017-04-01

    The Underwater Time Of Flight Image Acquisition system (UTOFIA) is a recently launched H2020 project (H2020 - 633098) to develop a compact and cost-effective underwater imaging system especially suited for observations in turbid environments. The UTOFIA project targets technology that can overcome the limitations created by scattering by introducing cost-efficient range-gated imaging for underwater applications. This technology relies on an image acquisition principle that extends the imaging range of the camera 2-3 times compared with other cameras. Moreover, the system simultaneously captures 3D information about the observed objects. Today range-gated imaging is not widely used, as it relies on specialised optical components that make systems large and costly. Recent technology developments have made possible a significant (2-3 times) reduction in the size, complexity and cost of underwater imaging systems, whilst addressing the scattering issues at the same time. By acquiring simultaneous 3D data, the system makes it possible to accurately measure the absolute size of marine life and its spatial relationship to its habitat, enhancing the precision of fish stock monitoring and ecology assessment and hence supporting proper management of marine resources. Additionally, the larger observed volume and the improved image quality make the system suitable for cost-effective underwater surveillance operations in, e.g., fish farms and underwater infrastructure. The system can be integrated into existing ocean observatories for real-time acquisition and can greatly advance present efforts in developing species recognition algorithms, given the additional features provided, the improved image quality and the independent laser-based illumination source. First applications of the most recent prototype of the imaging system are presented, including inspection of underwater infrastructure and observations of marine life under different environmental conditions.

  9. A NEW APPROACH FOR UNSUPERVISED RESTORING IMAGES BASED ON WAVELET-DOMAIN PROJECTION PURSUIT LEARNING NETWORK

    Institute of Scientific and Technical Information of China (English)

    Lin Wei; Tian Zheng; Wen Xianbin

    2003-01-01

    The Wavelet-Domain Projection Pursuit Learning Network (WDPPLN) is proposed for restoring degraded images. The new network combines the advantages of both projection pursuit and wavelet shrinkage. Restoring an image is very difficult when little a priori knowledge of the multisource degradation factors is available. WDPPLN successfully resolves this problem by separately processing wavelet coefficients and scale coefficients. Parameters in WDPPLN, which are used to simulate degradation factors, are estimated via WDPPLN training using scale coefficients. Also, WDPPLN uses the soft-threshold wavelet shrinkage technique to suppress noise in three high-frequency subbands. The new method is compared with traditional methods and the Projection Pursuit Learning Network (PPLN) method. Experimental results demonstrate that it is an effective method for unsupervised restoration of degraded images.
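    The soft-threshold shrinkage step mentioned above is the standard wavelet denoising operator, sketched here on a toy coefficient vector (the threshold value is arbitrary; the full network additionally estimates degradation parameters from the scale coefficients).

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold shrinkage: zero coefficients with magnitude below t
    (treated as noise) and shrink the rest toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])  # toy high-frequency subband
denoised = soft_threshold(c, 1.0)
```

    Applied to the three high-frequency subbands of a wavelet decomposition, this suppresses small noise-dominated coefficients while largely preserving strong edges.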

  10. Detection of pulmonary nodules at paediatric CT: maximum intensity projections and axial source images are complementary

    Energy Technology Data Exchange (ETDEWEB)

    Kilburn-Toppin, Fleur; Arthurs, Owen J.; Tasker, Angela D.; Set, Patricia A.K. [Addenbrooke's Hospital, Cambridge University Teaching Hospitals NHS Foundation Trust, Department of Radiology, Box 219, Cambridge (United Kingdom)

    2013-07-15

    Maximum intensity projection (MIP) images might be useful in helping to differentiate small pulmonary nodules from adjacent vessels on thoracic multidetector CT (MDCT). The aim was to evaluate the benefits of axial MIP images over axial source images for the paediatric chest in an interobserver variability study. We included 46 children with extra-pulmonary solid organ malignancy who had undergone thoracic MDCT. Three radiologists independently read 2-mm axial and 10-mm MIP image datasets, recording the number of nodules, size and location, overall time taken and confidence. There were 83 nodules (249 total reads among three readers) in 46 children (mean age 10.4 ± 4.98 years, range 0.3-15.9 years; 24 boys). Consensus read was used as the reference standard. Overall, three readers recorded significantly more nodules on MIP images (228 vs. 174; P < 0.05), improving sensitivity from 67% to 77.5% (P < 0.05) but with lower positive predictive value (96% vs. 85%, P < 0.005). MIP images took significantly less time to read (71.6 ± 43.7 s vs. 92.9 ± 48.7 s; P < 0.005) but did not improve confidence levels. Using 10-mm axial MIP images for nodule detection in the paediatric chest enhances diagnostic performance, improving sensitivity and reducing reading time when compared with conventional axial thin-slice images. Axial MIP and axial source images are complementary in thoracic nodule detection. (orig.)

  11. Using Extensible Markup Language (XML) for the Single Source Delivery of Educational Resources by Print and Online: A Case Study

    Science.gov (United States)

    Walsh, Lucas

    2007-01-01

    This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…

  12. Extensible Markup Language: How Might It Alter the Software Documentation Process and the Role of the Technical Communicator?

    Science.gov (United States)

    Battalio, John T.

    2002-01-01

    Describes the influence that Extensible Markup Language (XML) will have on the software documentation process and subsequently on the curricula of advanced undergraduate and master's programs in technical communication. Recommends how curricula of advanced undergraduate and master's programs in technical communication ought to change in order to…

  14. Progressive versus Random Projections for Compressive Capture of Images, Lightfields and Higher Dimensional Visual Signals

    CERN Document Server

    Pandharkar, Rohit; Raskar, Ramesh

    2011-01-01

    Computational photography involves sophisticated capture methods. A new trend is to capture projections of higher-dimensional visual signals, such as videos, multi-spectral data and lightfields, on lower-dimensional sensors. Carefully designed capture methods exploit the sparsity of the underlying signal in a transformed domain to reduce the number of measurements and use an appropriate reconstruction method. Traditional progressive methods may capture successively more detail using a sequence of simple projection bases, such as DCT or wavelets, and employ straightforward backprojection for reconstruction. Randomized projection methods do not use any specific sequence and use L0 minimization for reconstruction. In this paper, we analyze the statistical properties of natural images, videos, multi-spectral data and light-fields and compare the effectiveness of progressive and random projections. We define effectiveness by plotting reconstruction SNR against compression factor. The key idea is a procedure to measure...

  15. Visual Access to Visual Images: The UC Berkeley Image Database Project.

    Science.gov (United States)

    Besser, Howard

    1990-01-01

    Discusses the problem of access in managing image collections and describes a prototype system for the University of California Berkeley which would include the University Art Museum, Architectural Slide Library, Geography Department's Map Library and Lowie Museum of Anthropology photographs. The system combines an online public access catalog…

  17. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    Science.gov (United States)

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.

  18. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The Systems Biology Markup Language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady-state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method and the weighted average of local sensitivity analyses, and through its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes.

  19. The semantics of Chemical Markup Language (CML) for computational chemistry : CompChem.

    Science.gov (United States)

    Phadungsukanan, Weerapong; Kraft, Markus; Townsend, Joe A; Murray-Rust, Peter

    2012-08-07

    This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.

  20. The markup is the model: reasoning about systems biology models in the Semantic Web era.

    Science.gov (United States)

    Kell, Douglas B; Mendes, Pedro

    2008-06-07

    Metabolic control analysis, co-invented by Reinhart Heinrich, is a formalism for the analysis of biochemical networks, and is a highly important intellectual forerunner of modern systems biology. Exchanging ideas and exchanging models are part of the international activities of science and scientists, and the Systems Biology Markup Language (SBML) allows one to perform the latter with great facility. Encoding such models in SBML allows their distributed analysis using loosely coupled workflows, and with the advent of the Internet the various software modules that one might use to analyze biochemical models can reside on entirely different computers and even on different continents. Optimization is at the core of many scientific and biotechnological activities, and Reinhart made many major contributions in this area, stimulating our own activities in the use of the methods of evolutionary computing for optimization.

  1. Extending the Concepts of Normalization from Relational Databases to Extensible-Markup-Language Databases Model

    Directory of Open Access Journals (Sweden)

    H.J. F. El-Sofany

    2008-01-01

    Full Text Available In this study we have examined how to extend the concepts of functional dependency (FD) and normalization from relational databases to the eXtensible Markup Language (XML) model. We show that, like relational databases, XML documents may contain redundant information, and that this redundancy can cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Our goal is to find a way to convert an arbitrary XML Schema into a well-designed one that avoids these problems. We introduce new definitions of FDs and of normal forms for XML Schema (X-1NF, X-2NF, X-3NF and X-BCNF), and we show that our normal forms are necessary and sufficient to ensure that all conforming XML documents are free of redundancies.
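    A functional dependency over paths can be checked mechanically. The sketch below uses a hypothetical enrollment document (my own example, not the paper's notation): because the FD course → lecturer holds, the lecturer value is stored redundantly in every enrollment, which is exactly the redundancy the proposed normal forms are designed to remove.

```python
import xml.etree.ElementTree as ET

# Hypothetical document with redundancy: each enrollment repeats the
# lecturer of its course, so the FD  course -> lecturer  holds.
doc = """
<enrollments>
  <enrollment student="s1" course="DB101" lecturer="Smith"/>
  <enrollment student="s2" course="DB101" lecturer="Smith"/>
  <enrollment student="s3" course="ML200" lecturer="Jones"/>
</enrollments>
"""

def holds_fd(root, lhs, rhs):
    """Check that attribute `lhs` functionally determines attribute `rhs`
    across all <enrollment> elements (same lhs value => same rhs value)."""
    seen = {}
    for e in root.iter("enrollment"):
        key, val = e.get(lhs), e.get(rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

root = ET.fromstring(doc)
```

    A well-designed schema in the paper's sense would factor the lecturer out into a separate course element, so that updating a lecturer touches one place instead of many.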

  2. Treating metadata as annotations: separating the content markup from the content

    Directory of Open Access Journals (Sweden)

    Fredrik Paulsson

    2007-11-01

    Full Text Available The use of digital learning resources creates an increasing need for semantic metadata describing whole resources as well as parts of resources. Traditionally, schemas such as the Text Encoding Initiative (TEI) have been used to add semantic markup to parts of resources. This is not sufficient for use in a "metadata ecology", where metadata is distributed, conforms to different Application Profiles, and is added by different actors. A new methodology is proposed in which metadata is "pointed in" as annotations, using XPointers and RDF. A suggestion for how such an infrastructure can be implemented, using existing open standards for metadata and for the web, is presented. We argue that such a methodology and infrastructure are necessary to realize the decentralized metadata infrastructure needed for a "metadata ecology".
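    The "pointing in" idea can be sketched as emitting an RDF statement whose subject is an XPointer into the resource, leaving the resource itself untouched. The resource URI and XPointer below are hypothetical, and the Dublin Core property is only one possible choice of vocabulary.

```python
# Minimal sketch of metadata-as-annotation: the subject of the RDF triple
# addresses a part of the resource via an XPointer fragment, so the markup
# lives outside the content it describes. Emitted in N-Triples syntax.
def annotation_triple(resource, xpointer, predicate, value):
    subject = f"<{resource}#xpointer({xpointer})>"
    return f'{subject} <{predicate}> "{value}" .'

triple = annotation_triple(
    "http://example.org/lesson.xml",             # hypothetical resource
    "/lesson/section[2]",                        # the part being described
    "http://purl.org/dc/elements/1.1/subject",   # Dublin Core property
    "photosynthesis")
```

    Because the annotation is a free-standing triple, different actors can add metadata conforming to different Application Profiles without ever coordinating edits to the resource itself.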

  3. The semantics of Chemical Markup Language (CML) for computational chemistry : CompChem

    Directory of Open Access Journals (Sweden)

    Phadungsukanan Weerapong

    2012-08-01

    Full Text Available This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.

  4. Monte Carlo evaluation of the Filtered Back Projection method for image reconstruction in proton computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Cirrone, G.A.P., E-mail: cirrone@lns.infn.it [Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Bucciolini, M. [Department of 'Fisiopatologia Clinica', University of Florence, V.le Morgagni 85, I-50134 Florence (Italy); Bruzzi, M. [Energetic Department, University of Florence, Via S. Marta 3, I-50139 Florence (Italy); Candiano, G. [Laboratorio di Tecnologie Oncologiche HSR, Giglio Contrada, Pietrapollastra-Pisciotto, 90015 Cefalu, Palermo (Italy); Civinini, C. [National Institute for Nuclear Physics INFN, Section of Florence, Via G. Sansone 1, Sesto Fiorentino, I-50019 Florence (Italy); Cuttone, G. [Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Guarino, P. [Nuclear Engineering Department, University of Palermo, Via... Palermo (Italy); Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Lo Presti, D. [Physics Department, University of Catania, Via S. Sofia 64, I-95123, Catania (Italy); Mazzaglia, S.E. [Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Pallotta, S. [Department of 'Fisiopatologia Clinica', University of Florence, V.le Morgagni 85, I-50134 Florence (Italy); Randazzo, N. [National Institute for Nuclear Physics INFN, Section of Catania, Via S.Sofia 64, 95123 Catania (Italy); Sipala, V. [National Institute for Nuclear Physics INFN, Section of Catania, Via S.Sofia 64, 95123 Catania (Italy); Physics Department, University of Catania, Via S. Sofia 64, I-95123, Catania (Italy); Stancampiano, C. [National Institute for Nuclear Physics INFN, Section of Catania, Via S.Sofia 64, 95123 Catania (Italy); and others

    2011-12-01

    In this paper the use of the Filtered Back Projection (FBP) algorithm to reconstruct tomographic images from high-energy (200-250 MeV) proton beams is investigated. The algorithm has been studied in detail with a Monte Carlo approach, and image quality has been analysed and compared against the total absorbed dose. A proton Computed Tomography (pCT) apparatus, developed by our group, has been fully simulated using the Geant4 Monte Carlo toolkit. From the simulation of the apparatus, a set of tomographic images of a test phantom has been reconstructed using FBP at different absorbed dose values. The images have been evaluated in terms of homogeneity, noise, contrast, and spatial and density resolution.
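    Independent of the proton-specific simulation, the FBP algorithm itself is compact. The sketch below is a generic parallel-beam toy version (nearest-neighbour interpolation, Ram-Lak ramp filter), not the authors' reconstruction code: filter each projection in the Fourier domain by |frequency|, then smear it back across the image along its projection direction.

```python
import numpy as np

def radon(image, angles):
    """Toy parallel-beam forward projection: sum pixel values into the
    detector bin given by each pixel's signed distance to the rotation axis."""
    size = image.shape[0]
    center = (size - 1) / 2
    ys, xs = np.mgrid[0:size, 0:size] - center
    sino = []
    for theta in angles:
        t = xs * np.cos(theta) + ys * np.sin(theta) + center
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        sino.append(np.bincount(idx.ravel(), weights=image.ravel(),
                                minlength=size))
    return np.array(sino)

def fbp(sinogram, angles):
    """Filtered back projection: Ram-Lak (ramp) filter in the Fourier
    domain, then back-project each filtered profile across the image."""
    n_ang, size = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(size))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                   axis=1))
    center = (size - 1) / 2
    ys, xs = np.mgrid[0:size, 0:size] - center
    recon = np.zeros((size, size))
    for proj, theta in zip(filtered, angles):
        t = xs * np.cos(theta) + ys * np.sin(theta) + center
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        recon += proj[idx]
    return recon * np.pi / n_ang  # discretized angular integral over [0, pi)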

  5. Gene Fusion Markup Language: a prototype for exchanging gene fusion data.

    Science.gov (United States)

    Kalyana-Sundaram, Shanker; Shanmugam, Achiraman; Chinnaiyan, Arul M

    2012-10-16

    An avalanche of next generation sequencing (NGS) studies has generated an unprecedented amount of genomic structural variation data. These studies have also identified many novel gene fusion candidates with more detailed resolution than previously achieved. However, in the excitement and necessity of publishing the observations from this recently developed cutting-edge technology, no community standardization approach has arisen to organize and represent the data with the essential attributes in an interchangeable manner. As transcriptome studies have been widely used for gene fusion discoveries, the current non-standard mode of data representation could potentially impede data accessibility, critical analyses, and further discoveries in the near future. Here we propose a prototype, Gene Fusion Markup Language (GFML) as an initiative to provide a standard format for organizing and representing the significant features of gene fusion data. GFML will offer the advantage of representing the data in a machine-readable format to enable data exchange, automated analysis interpretation, and independent verification. As this database-independent exchange initiative evolves it will further facilitate the formation of related databases, repositories, and analysis tools. The GFML prototype is made available at http://code.google.com/p/gfml-prototype/. The Gene Fusion Markup Language (GFML) presented here could facilitate the development of a standard format for organizing, integrating and representing the significant features of gene fusion data in an inter-operable and query-able fashion that will enable biologically intuitive access to gene fusion findings and expedite functional characterization. A similar model is envisaged for other NGS data analyses.
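    For illustration only, a GFML-like record might be serialized as below. The element and attribute names are hypothetical stand-ins, not the published GFML prototype; the example fusion, TMPRSS2-ERG (both partners on chromosome 21), is a well-known prostate cancer fusion from the authors' field.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified gene-fusion record in the spirit of GFML:
# a machine-readable description of the two partners plus the evidence.
fusion = ET.Element("geneFusion", id="TMPRSS2-ERG")
ET.SubElement(fusion, "fivePrimePartner", symbol="TMPRSS2", chromosome="21")
ET.SubElement(fusion, "threePrimePartner", symbol="ERG", chromosome="21")
ET.SubElement(fusion, "evidence", assay="RNA-seq", readCount="42")

xml_text = ET.tostring(fusion, encoding="unicode")
```

    The value of such a format is exactly what the abstract argues: any downstream tool can parse the record and verify or aggregate it without guessing at a paper-specific table layout.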

  6. Modeling of the positioning system and visual mark-up of historical cadastral maps

    Directory of Open Access Journals (Sweden)

    Tomislav Jakopec

    2013-03-01

    Full Text Available The aim of the paper is to present the possibilities of positioning and visual markup of historical cadastral maps on Google Maps using open source software. The corpus is stored in the Croatian State Archives in Zagreb, in the Maps Archive for Croatia and Slavonia. It is part of the cadastral documentation from the first cadastral survey conducted in the Kingdom of Croatia and Slavonia from 1847 to 1877, and it is used extensively according to the data provided by the customer service of the Croatian State Archives. User needs on the one side and the possibilities of innovative implementation of ICT on the other have motivated the development of a system that uses digital copies of original cadastral maps and connects them with systems like Google Maps, thus both protecting the original materials and opening up new avenues of research related to the use of the originals. With this aim in mind, two cadastral map presentation models have been created. The first is a detailed display of the original, which enables viewing with dynamic zooming. The second is an interactive display that blends the cadastral maps with Google Maps, establishing links between the coordinates of the digital and original plans through transformation. The transparency of the original can be changed, and the user can intensify the visibility of the underlying layer (Google map) or the top layer (cadastral map), which enables direct insight into parcel dynamics over a longer time-span. The system also allows for the mark-up of cadastral maps, which can lead to the development of a cumulative index of all terms found on cadastral maps.
The paper is an example of the implementation of ICT for providing new services, strengthening cooperation with the interested public and related institutions, familiarizing the public with the archival material, and offering new possibilities for
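    The link between scanned-map pixel coordinates and web-map coordinates can be illustrated with the simplest such transformation, an affine fit from matched control points. This is a sketch under my own assumptions, not the project's actual georeferencing code; real historical maps usually need higher-order or piecewise transforms.

```python
import numpy as np

def fit_affine(pixel_pts, geo_pts):
    """Least-squares affine transform mapping scanned-map pixel coordinates
    to map-system coordinates, from matched control point pairs."""
    A = np.column_stack([pixel_pts, np.ones(len(pixel_pts))])
    coeffs, *_ = np.linalg.lstsq(A, geo_pts, rcond=None)
    return coeffs  # shape (3, 2): linear part in rows 0-1, offset in row 2

def apply_affine(coeffs, pts):
    """Transform pixel points with a fitted affine model."""
    A = np.column_stack([pts, np.ones(len(pts))])
    return A @ coeffs
```

    With at least three non-collinear control points the fit is exact for truly affine distortions; extra points turn the solve into a least-squares adjustment that averages out digitization error.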

  7. Dear-Mama: A photon counting X-ray imaging project for medical applications

    Science.gov (United States)

    Blanchot, G.; Chmeissani, M.; Díaz, A.; Díaz, F.; Fernández, J.; García, E.; García, J.; Kainberger, F.; Lozano, M.; Maiorino, M.; Martínez, R.; Montagne, J. P.; Moreno, I.; Pellegrini, G.; Puigdengoles, C.; Sentís, M.; Teres, L.; Tortajada, M.; Ullán, M.

    2006-12-01

    Dear-Mama (Detection of Early Markers in Mammography) is an EU-funded project devoted to developing an X-ray medical imaging device based on a room-temperature solid-state pixel detector coupled to photon-counting readout electronics via bump bonding. The technology used enhances the signal-to-noise ratio and thus the ability to detect low-contrast anomalies such as micro-calcifications. The Dear-Mama machine is currently being evaluated, and preliminary results show an excellent MTF response. The Dear-Mama consortium is made up of six European institutions; the project runs from December 2001 to March 2006.

  8. Dear-Mama: A photon counting X-ray imaging project for medical applications

    Energy Technology Data Exchange (ETDEWEB)

    Blanchot, G. [Institut de Fisica d'Altes Energies, UAB Campus, 08193 Bellaterra (Spain); Chmeissani, M. [Institut de Fisica d'Altes Energies, UAB Campus, 08193 Bellaterra (Spain)]. E-mail: mokhtar@ifae.es; Diaz, A. [Sedecal SA, C/ Pelaya 9, Pol. Ind. Rio de Janeiro, 28110 Algete (Spain); Diaz, F. [Sedecal SA, C/ Pelaya 9, Pol. Ind. Rio de Janeiro, 28110 Algete (Spain); Fernandez, J. [UDIAT, Corporacion Sanitaria Parc Tauli, s/n. 08208-Sabadell (Spain); Garcia, E. [Sedecal SA, C/ Pelaya 9, Pol. Ind. Rio de Janeiro, 28110 Algete (Spain); Garcia, J. [Institut de Fisica d'Altes Energies, UAB Campus, 08193 Bellaterra (Spain); Kainberger, F. [Medical University of Vienna AKH, A-1090 Vienna (Austria); Lozano, M. [CNM-CSIC, UAB Campus, 08193 Bellaterra (Spain); Maiorino, M. [Institut de Fisica d'Altes Energies, UAB Campus, 08193 Bellaterra (Spain); Martinez, R. [CNM-CSIC, UAB Campus, 08193 Bellaterra (Spain); Montagne, J.P. [Hopital d'Enfants Armand Trousseau, 75571 Paris Cedex (France); Moreno, I. [Sedecal SA, C/ Pelaya 9, Pol. Ind. Rio de Janeiro, 28110 Algete (Spain); Pellegrini, G. [CNM-CSIC, UAB Campus, 08193 Bellaterra (Spain); Puigdengoles, C. [Institut de Fisica d'Altes Energies, UAB Campus, 08193 Bellaterra (Spain); Sentis, M. [UDIAT, Corporacion Sanitaria Parc Tauli, s/n. 08208-Sabadell (Spain); Teres, L. [CNM-CSIC, UAB Campus, 08193 Bellaterra (Spain); Tortajada, M. [UDIAT, Corporacion Sanitaria Parc Tauli, s/n. 08208-Sabadell (Spain); Ullan, M. [CNM-CSIC, UAB Campus, 08193 Bellaterra (Spain)

    2006-12-10

    Dear-Mama (Detection of Early Markers in Mammography) is an EU-funded project devoted to developing an X-ray medical imaging device based on a room-temperature solid-state pixel detector coupled to photon-counting readout electronics via bump bonding. The technology used enhances the signal-to-noise ratio and thus the ability to detect low-contrast anomalies such as micro-calcifications. The Dear-Mama machine is currently being evaluated, and preliminary results show an excellent MTF response. The Dear-Mama consortium is made up of six European institutions; the project runs from December 2001 to March 2006.

  9. Studies on filtered back-projection imaging reconstruction based on a modified wavelet threshold function

    Science.gov (United States)

    Wang, Zhengzi; Ren, Zhong; Liu, Guodong

    2016-10-01

    In this paper, wavelet threshold denoising is incorporated into the filtered back-projection (FBP) algorithm for image reconstruction. To overcome the drawbacks of the traditional soft- and hard-threshold functions, a modified wavelet threshold function is proposed. The modified wavelet threshold function has two threshold values and two variants. To verify the feasibility of the modified wavelet threshold function, standard test experiments were performed in MATLAB. Experimental results show that the filtered back-projection reconstruction algorithm based on the modified wavelet threshold function achieves better reconstruction quality owing to its greater flexibility.
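    One plausible form of a two-threshold shrinkage rule is sketched below (the paper's exact function and its two tuning variants are not reproduced here): coefficients below the lower threshold are killed like a hard threshold, coefficients above the upper threshold pass unchanged, and the band in between is shrunk continuously, avoiding the hard threshold's jump and the soft threshold's constant bias on large coefficients.

```python
import numpy as np

def two_threshold(w, t1, t2):
    """Illustrative two-threshold shrinkage of wavelet coefficients w:
    zero below t1, identity above t2, and a continuous linear ramp
    between them (continuous at both t1 and t2)."""
    a = np.abs(w)
    out = np.where(a <= t1, 0.0, w)          # hard kill of small coefficients
    mid = (a > t1) & (a < t2)
    # ramp from 0 at |w| = t1 up to t2 at |w| = t2, keeping the sign
    out = np.where(mid, np.sign(w) * (a - t1) * t2 / (t2 - t1), out)
    return out
```

    Choosing t1 = t2 recovers hard thresholding as a limit, which is the sense in which such functions generalize the classical rules.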

  10. Study on Automatic Test Markup Language (ATML) Standards and Its Application

    Institute of Scientific and Technical Information of China (English)

    康占祥; 戴嫣青; 杨占才

    2012-01-01

    First, the background, purpose, and model structure of the automatic test markup language (ATML) standard are discussed. Second, the definition methods and representations of all the ATML sub-models are described, including the common elements model (Common), the test results and session information markup language (TRML), the diagnostics model markup language (DML), the test description markup language (TDML), the instrument description markup language (IDML), the test configuration description markup language (TCML), the UUT description markup language (UDML), the test station markup language (TSML), and the test adapter markup language (TAML). Finally, the method of applying the ATML standards in automatic test systems is introduced, laying a technical foundation for sharing test resources across the various maintenance levels of weapon equipment.

  11. Software tools of the Computis European project to process mass spectrometry images.

    Science.gov (United States)

    Robbe, Marie-France; Both, Jean-Pierre; Prideaux, Brendan; Klinkert, Ivo; Picaud, Vincent; Schramm, Thorsten; Hester, Alfons; Guevara, Victor; Stoeckli, Markus; Roempp, Andreas; Heeren, Ron M A; Spengler, Bernhard; Gala, Olivier; Haan, Serge

    2014-01-01

    Among the needs usually expressed by teams using mass spectrometry imaging, one that often arises is user-friendly software able to manage huge data volumes quickly and to provide efficient assistance in interpreting the data. To answer this need, the Computis European project developed several complementary software tools to process mass spectrometry imaging data. Data Cube Explorer provides simple spatial and spectral exploration for matrix-assisted laser desorption/ionisation-time of flight (MALDI-ToF) and time of flight-secondary-ion mass spectrometry (ToF-SIMS) data. SpectViewer offers visualisation functions, assistance in interpreting data, classification functionalities, peak-list extraction to interrogate biological databases, and image overlay, and it can process data from MALDI-ToF, ToF-SIMS and desorption electrospray ionisation (DESI) equipment. EasyReg2D is able to register two images, in American Standard Code for Information Interchange (ASCII) format, produced by different technologies. The collaboration between the teams was hampered by the multiplicity of equipment and data formats, so the project also developed a common data format (imzML) to facilitate the exchange of experimental data and their interpretation by the different software tools. The BioMap platform for visualisation and exploration of MALDI-ToF and DESI images was adapted to parse imzML files, enabling its access to all project partners and, more globally, to a larger community of users. Considering the huge advantages brought by the imzML standard format, a specific editor (vBrowser) for imzML files and converters from proprietary formats to imzML were developed to enable the use of the imzML format by a broad scientific community. This initiative paves the way toward the development of a large panel of software tools able to process mass spectrometry imaging datasets in the future.
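    The flavour of what an interoperable imaging format buys can be sketched by parsing pixel coordinates from an imzML-style header. The XML below is a schematic stand-in of my own making: real imzML files follow the full mzML schema, use controlled-vocabulary accession numbers rather than bare names, and store the spectra themselves in a companion binary file.

```python
import xml.etree.ElementTree as ET

# Schematic, simplified stand-in for an imzML spectrum list
# (real files are mzML-schema documents with CV accessions).
header = """
<spectrumList count="2">
  <spectrum id="s1">
    <cvParam name="position x" value="1"/>
    <cvParam name="position y" value="1"/>
  </spectrum>
  <spectrum id="s2">
    <cvParam name="position x" value="2"/>
    <cvParam name="position y" value="1"/>
  </spectrum>
</spectrumList>
"""

def pixel_positions(xml_text):
    """Map each spectrum id to its (x, y) pixel position on the sample."""
    root = ET.fromstring(xml_text)
    pos = {}
    for spec in root.iter("spectrum"):
        params = {p.get("name"): int(p.get("value"))
                  for p in spec.iter("cvParam")}
        pos[spec.get("id")] = (params["position x"], params["position y"])
    return pos
```

    Any tool that can extract this mapping can rebuild the ion image, which is why a shared format let BioMap, vBrowser and the other Computis tools operate on the same data.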

  12. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, the fingerprint is a 3D biological characteristic: the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented, based on the fringe projection technique, to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers is projected onto a finger surface. Viewed from a different direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, hardware design of the 3D imaging system, 3D calibration of the system, and software development. Some experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
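    The phase-recovery step behind fringe projection profilometry can be sketched with the classic four-step phase-shifting algorithm (a generic illustration, not the authors' optimum three-fringe system): four fringe images shifted by pi/2 let the wrapped phase, which encodes surface height, be recovered pointwise by an arctangent.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting: given images I_k = A + B*cos(phi + k*pi/2),
    recover the wrapped phase phi from the intensity differences."""
    # i3 - i1 = 2B*sin(phi),  i0 - i2 = 2B*cos(phi)
    return np.arctan2(i3 - i1, i0 - i2)

# simulate fringes deformed by a toy surface (phase ramp stands in for height)
x = np.linspace(0, 4 * np.pi, 256)
phi_true = x.copy()
imgs = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*imgs)
```

    The arctangent cancels both the ambient offset A and the modulation B, which is what makes the method robust to texture; a subsequent unwrapping step (not shown) removes the 2-pi jumps before triangulating 3D shape.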

  13. Rigid motion correction of dual opposed planar projections in single photon imaging

    Science.gov (United States)

    Angelis, G. I.; Ryder, W. J.; Gillam, J. E.; Boisson, F.; Kyme, A. Z.; Fulton, R. R.; Meikle, S. R.; Kench, P. L.

    2017-05-01

    Awake and/or freely moving small animal single photon emission imaging allows the continuous study of molecules exhibiting slow kinetics without the need to restrain or anaesthetise the animals. Estimating motion free projections in freely moving small animal planar imaging can be considered as a limited angle tomography problem, except that we wish to estimate the 2D planar projections rather than the 3D volume, where the angular sampling in all three axes depends on the rotational motion of the animal. In this study, we hypothesise that the motion corrected planar projections estimated by reconstructing an estimate of the 3D volume using an iterative motion compensating reconstruction algorithm and integrating it along the projection path, will closely match the true, motionless, planar distribution regardless of the object motion. We tested this hypothesis for the case of rigid motion using Monte Carlo simulations and experimental phantom data based on a dual opposed detector system, where object motion was modelled with 6 degrees of freedom. In addition, we investigated the quantitative accuracy of the regional activity extracted from the geometric mean of opposing motion corrected planar projections. Results showed that it is feasible to estimate qualitatively accurate motion-corrected projections for a wide range of motions around all three axes. Errors in the geometric mean estimates of regional activity were relatively small and within 10% of expected true values. In addition, quantitative regional errors were dependent on the observed motion, as well as on the surrounding activity of overlapping organs. We conclude that both qualitatively and quantitatively accurate motion-free projections of the tracer distribution in a rigidly moving object can be estimated from dual opposed detectors using a correction approach within an iterative reconstruction framework and we expect this approach can be extended to the case of non-rigid motion.

  14. Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models

    Directory of Open Access Journals (Sweden)

    Jaka Kravanja

    2016-10-01

    Full Text Available Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors.

  15. Projective rectification of infrared images from air-cooled condenser temperature measurement by using projection profile features and cross-ratio invariability.

    Science.gov (United States)

    Xu, Lijun; Chen, Lulu; Li, Xiaolu; He, Tao

    2014-10-01

    In this paper, we propose a projective rectification method for infrared images obtained from the measurement of temperature distribution on an air-cooled condenser (ACC) surface by using projection profile features and cross-ratio invariability. In the research, the infrared (IR) images acquired by the four IR cameras utilized are distorted to different degrees. To rectify the distorted IR images, the sizes of the acquired images are first enlarged by means of bicubic interpolation. Then, uniformly distributed control points are extracted in the enlarged images by constructing quadrangles with detected vertical lines and detected or constructed horizontal lines. The corresponding control points in the anticipated undistorted IR images are extracted by using projection profile features and cross-ratio invariability. Finally, a third-order polynomial rectification model is established and the coefficients of the model are computed with the mapping relationship between the control points in the distorted and anticipated undistorted images. Experimental results obtained from an industrial ACC unit show that the proposed method performs much better than any previous method we have adopted. Furthermore, all rectified images are stitched together to obtain a complete image of the whole ACC surface with a much higher spatial resolution than that obtained by using a single camera, which is not only useful but also necessary for more accurate and comprehensive analysis of ACC performance and more reliable optimization of ACC operations.
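    Cross-ratio invariability, the property used above to locate the corresponding control points, can be verified numerically: for four collinear points, the cross-ratio is unchanged by any 1D projective (Mobius) map. The sketch below is a generic demonstration, not the paper's rectification pipeline.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by 1D coordinates."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective(x, m):
    """1D projective (Mobius) map x -> (m0*x + m1) / (m2*x + m3)."""
    return (m[0] * x + m[1]) / (m[2] * x + m[3])
```

    Because the cross-ratio of four known points survives the camera's perspective distortion, measuring it in the distorted image pins down where the undistorted control points must lie, which is exactly what the rectification model is fitted to.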

  16. Image reconstruction using projections from a few views by discrete steering combined with DART

    Science.gov (United States)

    Kwon, Junghyun; Song, Samuel M.; Kauke, Brian; Boyd, Douglas P.

    2012-03-01

    In this paper, we propose an algebraic reconstruction technique (ART)-based discrete tomography method to reconstruct an image accurately using projections from a few views. We specifically consider the problem of reconstructing an image of bottles filled with various types of liquids from X-ray projections. By exploiting the fact that bottles are usually filled with homogeneous material, we show that it is possible to obtain accurate reconstruction with only a few projections by an ART-based algorithm. In order to deal with various types of liquids in our problem, we first introduce our discrete steering method, which is a generalization of the binary steering approach to our proposed multi-valued discrete reconstruction. The main idea of the steering approach is to use slowly varying thresholds instead of fixed thresholds. We further improve reconstruction accuracy by reducing the number of variables in ART by combining our discrete steering with the discrete ART (DART), which fixes the values of interior pixels of segmented regions considered reliable. By simulation studies, we show that our proposed discrete steering combined with DART yields superior reconstructions compared with either discrete steering alone or DART alone. The resulting reconstructions are quite accurate even with projections from only four views.
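    The ART backbone of such methods is a plain Kaczmarz sweep, sketched below; discrete steering and DART then alternate sweeps like this with progressively tightened thresholding toward the known gray levels (only the generic ART core is shown here, not the paper's full algorithm).

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Kaczmarz/ART: cycle over the equations a_i . x = b_i, projecting the
    current estimate onto each measurement hyperplane in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            # move x by the scaled residual along the row direction
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x
```

    ART's row-by-row updates are what make the few-view setting workable: prior knowledge (here, the discreteness of liquid attenuation values) can be injected between sweeps, compensating for the missing angular data.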

  17. Optimal neighbor graph-based orthogonal tensor locality preserving projection for image recognition

    Science.gov (United States)

    Yuan, Sen; Mao, Xia

    2016-11-01

    As a typical multilinear dimensionality reduction (DR) method, tensor locality preserving projection (TLPP) has been successfully applied in many practical problems. However, TLPP depends mainly on preserving its local neighbor graph which often suffers from the following issues: (1) the neighbor graph is constructed with the Euclidean distance which fails to consider the relationships among different coordinates for tensor data; (2) the affinity matrix only focuses on the local structure information of samples while ignoring the existing label information; (3) the projection matrices are nonorthogonal, thus it is difficult to preserve the local manifold structure. To address these problems, a multilinear DR algorithm called optimal neighbor graph-based orthogonal tensor locality preserving projection (OG-OTLPP) is proposed. In OG-OTLPP, an optimal neighbor graph is first built according to tensor distance. Then the affinity matrix of data is defined by utilizing both the label information and the intrinsic local geometric properties of the data. Finally, in order to improve the manifold preserving ability, an efficient and stable scheme is designed to iteratively learn the orthogonal projections. We evaluate the proposed algorithm by applying it to image recognition. The experimental results on five public image databases demonstrate the effectiveness of our algorithm.

  18. Simulated lesion, human observer performance comparison between thin-section dedicated breast CT images versus computed thick-section simulated projection images of the breast

    Science.gov (United States)

    Chen, L.; Boone, J. M.; Abbey, C. K.; Hargreaves, J.; Bateni, C.; Lindfors, K. K.; Yang, K.; Nosratieh, A.; Hernandez, A.; Gazi, P.

    2015-04-01

    The objective of this study was to compare the lesion detection performance of human observers between thin-section computed tomography images of the breast and thick-section (>40 mm) simulated projection images of the breast. Three radiologists and six physicists each executed a two-alternative forced choice (2AFC) study involving simulated spherical lesions placed mathematically into breast images produced on a prototype dedicated breast CT scanner. The breast image data sets from 88 patients were used to create 352 pairs of image data. Spherical lesions with diameters of 1, 2, 3, 5, and 11 mm were simulated and adaptively positioned into 3D breast CT image data sets; the native thin-section (0.33 mm) images were averaged to produce images with different slice thicknesses. Average section thicknesses of 0.33, 0.71, 1.5 and 2.9 mm were representative of breast CT, while the average 43 mm slice thickness served to simulate projection images of the breast. The percent correct of the human observers' responses was evaluated in the 2AFC experiments. Radiologists' lesion detection performance was significantly better than that of the physicist observers; however, trends in performance were similar. Human observers demonstrate significantly better mass-lesion detection performance on thin-section CT images of the breast, compared to thick-section simulated projection images of the breast.

  19. Patient Perceptions of Participating in the RSNA Image Share Project: a Preliminary Study.

    Science.gov (United States)

    Hiremath, Atheeth; Awan, Omer; Mendelson, David; Siegel, Eliot L

    2016-04-01

    The purpose of this study was to gauge patient perceptions of the RSNA Image Share Project (ISP), a pilot program that provides patients access to their imaging studies online via secure Personal Health Record (PHR) accounts. Two separate Institutional Review Board-exempt surveys were distributed to patients depending on whether they decided to enroll or opt out of enrollment in the ISP. For patients who enrolled, a survey gauged baseline computer usage, perceptions of online access to images through the ISP, the effect of patient access to images on patient-physician relationships, and interest in alternative uses of images. The other survey documented the age and reasons for declining participation for those who opted out of enrolling in the ISP. Out of 564 patients, 470 enrolled in the ISP (83 % participation rate) and 456 of these 470 individuals completed the survey, for a survey participation rate of 97 %. Patients who enrolled overwhelmingly perceived access to online images as beneficial and felt it bolstered their patient-physician relationship. Out of 564 patients, 94 declined enrollment in the ISP and all 94 individuals completed the survey, for a survey participation rate of 100 %. Patients who declined to participate in the ISP cited unreliable access to the Internet and the existing availability of non-web-based intra-network images to their physicians. Patients who participated in the ISP found having a measure of control over their images to be beneficial and felt that patient-physician relationships could be negatively affected by challenges related to image accessibility.

  20. Image reconstruction based on L1 regularization and projection methods for electrical impedance tomography.

    Science.gov (United States)

    Wang, Qi; Wang, Huaxiang; Zhang, Ronghua; Wang, Jinhai; Zheng, Yu; Cui, Ziqiang; Yang, Chengyi

    2012-10-01

    Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution by injecting currents at the boundary of a subject and measuring the resulting changes in voltage. Image reconstruction in EIT is a nonlinear and ill-posed inverse problem. The Tikhonov method with L2 regularization is commonly used to solve the EIT problem. However, the L2 method tends to smooth the sharp changes or discontinuous areas of the reconstruction. Image reconstruction using L1 regularization addresses this difficulty. In this paper, a sum of absolute values is substituted for the sum of squares used in L2 regularization to form the L1 regularization, and the solution is obtained by the barrier method. However, the L1 method often involves repeatedly solving large-dimensional matrix equations, which is computationally expensive. In this paper, the projection method is combined with the L1 regularization method to reduce the computational cost: the L1 problem is mainly solved in a coarse subspace. This paper also discusses strategies for choosing parameters. Both simulation and experimental results of the L1 regularization method were compared with the L2 regularization method, indicating that the L1 regularization method can improve the quality of image reconstruction and tolerate a relatively high level of noise in the measured voltages. Furthermore, the projected L1 method can also effectively reduce the computational time without affecting the quality of reconstructed images.
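
    The edge-preserving effect of the L1 penalty can be illustrated with a small iterative shrinkage-thresholding (ISTA) sketch for the generic L1-regularized least-squares problem. This is a stand-in for the paper's barrier-method solver: the matrix A, the data b, and all parameters below are synthetic, not the EIT sensitivity model.

```python
import numpy as np

# ISTA sketch for the generic L1-regularized least-squares problem
#     min_x ||A x - b||^2 / 2 + lam * ||x||_1.

def soft_threshold(v, t):
    """Proximal map of the L1 norm (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny demo: recover a sparse "conductivity change" from random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 30, 70]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, b, lam=0.1)
```

    The soft-thresholding step is what allows sharp, sparse features to survive, in contrast to the uniform shrinkage an L2 penalty would apply.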

  1. The ILAC Project: Supporting Ancient Coin Classification by Means of Image Analysis

    Science.gov (United States)

    Kavelar, A.; Zambanini, S.; Kampel, M.; Vondrovec, K.; Siegl, K.

    2013-07-01

    This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the interface between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousands of coins. Furthermore, this system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of the obverse and reverse of the coin of interest. ILAC explores different computer vision techniques and their combinations for image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploits certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given, as well as an outlook on the next steps of the project.

  2. Mammography image restoration based on a radiographic scattering model from a single projection: Experimental study

    Science.gov (United States)

    Kim, Kyuseok; Park, Soyoung; Kim, Guna; Cho, Hyosung; Je, Uikyu; Park, Chulkyu; Lim, Hyunwoo; Lee, Dongyeon; Lee, Hunwoo; Kang, Seokyoon

    2017-03-01

    In conventional mammography, contrast sensitivity remains limited due to the superimposition of breast tissue and scattered X-rays, which induces low visibility of lesions in the breast and, thus, an excessive number of false-positive findings. Several methods, including digital breast tomosynthesis as a multiplanar imaging modality, air-gap and slot techniques for the reduction of scatter, and phase-contrast imaging as another image-contrast modality, have been investigated in an attempt to overcome these difficulties. However, those techniques typically require a higher imaging dose or special equipment. In this work, as an alternative, we propose a new image restoration method based on a radiographic scattering model in which the intensity of scattered X-rays and the direct transmission function of a given medium are estimated from a single projection by using the dark-channel prior. We implemented the proposed algorithm and performed an experiment to demonstrate its viability. Our results indicate that most of the structures in the examined breast were very discernible even with no adjustment in the display-window level, thus preserving superior image features and edge sharpness.
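
    The dark-channel prior mentioned above comes from single-image dehazing: the per-pixel minimum over a local patch gives a rough estimate of the scatter (veil) component, from which a direct-transmission map can be derived. A minimal grayscale sketch, with an illustrative patch size rather than the paper's parameters:

```python
import numpy as np

# Dark channel of a grayscale image: per-pixel minimum over a
# patch x patch neighborhood. In scatter-model restoration this serves
# as a rough estimate of the scatter component.

def dark_channel(img, patch=15):
    h, w = img.shape
    pad = patch // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            # minimum over the neighborhood centered at (i, j)
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

img = np.clip(np.random.default_rng(1).random((32, 32)) + 0.3, 0, 1)
dc = dark_channel(img, patch=7)
```

    Because each neighborhood contains the pixel itself, the dark channel is always pointwise below the image, which is what makes it usable as a lower bound on the non-scatter signal.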

  3. Aid to Percutaneous Renal Access by Virtual Projection of the Ultrasound Puncture Tract onto Fluoroscopic Images

    CERN Document Server

    Mozer, Pierre; Leroy, Antoine; Baumann, Michael; Payan, Yohan; Troccaz, Jocelyne; Chartier-Kastler, Emmanuel; Richard, François

    2007-01-01

    Background and Purpose: Percutaneous renal access in the context of percutaneous nephrolithotomy (PCNL) is a difficult technique, requiring rapid and precise access to a particular calix. We present a computerized system designed to improve percutaneous renal access by projecting the ultrasound puncture tract onto fluoroscopic images. Materials and Methods: The system consists of a computer and a localizer allowing spatial localization of the position of the various instruments. Without any human intervention, the ultrasound nephrostomy tract is superimposed in real time onto fluoroscopic images acquired in various views. Results: We tested our approach by laboratory experiments on a phantom. Also, after approval by our institution's Ethics Committee, we validated this technique in the operating room during PCNL in one patient. Conclusion: Our system is reliable, and the absence of image-processing procedures makes it robust. We have initiated a prospective study to validate this technique both for PCNL speci...

  4. Adaptive robust image registration approach based on adequately sampling polar transform and weighted angular projection function

    Science.gov (United States)

    Wei, Zhao; Tao, Feng; Jun, Wang

    2013-10-01

    An efficient, robust, and accurate approach is developed for image registration, which is especially suitable for large-scale change and arbitrary rotation. It is named the adequately sampling polar transform and weighted angular projection function (ASPT-WAPF). The proposed ASPT model overcomes the oversampling problem of the conventional log-polar transform. Additionally, the WAPF, used as the feature descriptor, is robust to alterations in the fovea area of an image and reduces the computational cost of the subsequent registration process. The experimental results show two major advantages of the proposed method. First, it can register images with high accuracy even when the scale factor is up to 10 and the rotation angle is arbitrary, whereas the maximum scaling handled by state-of-the-art algorithms is 6. Second, our algorithm is more robust to the size of the sampling region while not decreasing the accuracy of the registration.

  5. Projection based image restoration, super-resolution and error correction codes

    Science.gov (United States)

    Bauer, Karl Gregory

    Super-resolution is the ability of a restoration algorithm to restore meaningful spatial frequency content beyond the diffraction limit of the imaging system. The Gerchberg-Papoulis (GP) algorithm is one of the most celebrated algorithms for super-resolution. The GP algorithm is conceptually simple and demonstrates the importance of using a priori information in the formation of the object estimate. In the first part of this dissertation the continuous GP algorithm is discussed in detail and shown to be a projection on convex sets algorithm. The discrete GP algorithm is shown to converge in the exactly-, over- and under-determined cases. A direct formula for the computation of the estimate at the kth iteration and at convergence is given. This analysis of the discrete GP algorithm sets the stage to connect super-resolution to error-correction codes. Reed-Solomon codes are used for error-correction in magnetic recording devices, compact disk players and by NASA for space communications. Reed-Solomon codes have a very simple description when analyzed with the Fourier transform. This signal processing approach to error-correction codes allows the error-correction problem to be compared with the super-resolution problem. The GP algorithm for super-resolution is shown to be equivalent to the correction of errors with a Reed-Solomon code over an erasure channel. The Restoration from Magnitude (RFM) problem seeks to recover a signal from the magnitude of the spectrum. This problem has applications to imaging through a turbulent atmosphere. The turbulent atmosphere causes localized changes in the index of refraction and introduces different phase delays in the data collected. Synthetic aperture radar (SAR) and hyperspectral imaging systems are capable of simultaneously recording multiple images of different polarizations or wavelengths. Each of these images will experience the same turbulent atmosphere and have a common phase distortion. A projection based restoration
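
    The GP iteration alternates two convex projections: enforce the band limit in the frequency domain, then restore the known samples in the signal domain. A toy 1D extrapolation sketch (signal length, band support, and known region below are arbitrary choices, not from the dissertation):

```python
import numpy as np

# Gerchberg-Papoulis as alternating projections onto convex sets:
#   (1) project onto band-limited signals (zero the out-of-band spectrum),
#   (2) project onto signals agreeing with the known samples.

def gp_extrapolate(known, known_mask, band_mask, n_iter=300):
    x = np.where(known_mask, known, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x) * band_mask          # enforce the band limit
        x = np.fft.ifft(X).real
        x[known_mask] = known[known_mask]      # restore the measured samples
    return x

n = 128
band_mask = np.zeros(n)
band_mask[:6] = 1.0
band_mask[-5:] = 1.0                           # low-pass spectral support

rng = np.random.default_rng(2)
X_true = np.zeros(n, dtype=complex)
X_true[:6] = rng.standard_normal(6)
X_true[-5:] = np.conj(X_true[1:6])[::-1]       # conjugate symmetry -> real signal
x_true = np.fft.ifft(X_true).real

known_mask = np.zeros(n, dtype=bool)
known_mask[:64] = True                         # only half the samples are known
x_rec = gp_extrapolate(x_true, known_mask, band_mask)
```

    Since both constraint sets are convex and the true signal lies in their intersection, the iteration is nonexpansive and the error to the true signal never increases.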

  6. Automatic tracking of arbitrarily shaped implanted markers in kilovoltage projection images: A feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Regmi, Rajesh; Lovelock, D. Michael; Hunt, Margie; Zhang, Pengpeng; Pham, Hai; Xiong, Jianping; Yorke, Ellen D.; Mageras, Gig S., E-mail: magerasg@mskcc.org [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States); Goodman, Karyn A.; Rimner, Andreas [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States); Mostafavi, Hassan [Ginzton Technology Center, Varian Medical Systems, Palo Alto, California 94304 (United States)

    2014-07-15

    Purpose: Certain types of commonly used fiducial markers take on irregular shapes upon implantation in soft tissue. This poses a challenge for methods that assume a predefined shape of markers when automatically tracking such markers in kilovoltage (kV) radiographs. The authors have developed a method of automatically tracking regularly and irregularly shaped markers using kV projection images and assessed its potential for detecting intrafractional target motion during rotational treatment. Methods: Template-based matching used a normalized cross-correlation with simplex minimization. Templates were created from computed tomography (CT) images for phantom studies and from end-expiration breath-hold planning CT for patient studies. The kV images were processed using a Sobel filter to enhance marker visibility. To correct for changes in intermarker relative positions between simulation and treatment that can introduce errors in automatic matching, marker offsets in three dimensions were manually determined from an approximately orthogonal pair of kV images. Two studies in anthropomorphic phantom were carried out, one using a gold cylindrical marker representing regular shape, another using a Visicoil marker representing irregular shape. Automatic matching of templates to cone beam CT (CBCT) projection images was performed to known marker positions in phantom. In patient data, automatic matching was compared to manual matching as an approximate ground truth. Positional discrepancy between automatic and manual matching of less than 2 mm was assumed as the criterion for successful tracking. Tracking success rates were examined in kV projection images from 22 CBCT scans of four pancreas, six gastroesophageal junction, and one lung cancer patients. Each patient had at least one irregularly shaped radiopaque marker implanted in or near the tumor. In addition, automatic tracking was tested in intrafraction kV images of three lung cancer patients with irregularly shaped
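
    Normalized cross-correlation, the matching score at the heart of such template trackers, can be sketched directly; the image and "marker" template below are synthetic, and the Sobel pre-filtering and simplex refinement described in the abstract are omitted:

```python
import numpy as np

# Template matching by normalized cross-correlation (NCC): slide the
# template over the image and score each window by the correlation of
# mean-subtracted intensities; the peak gives the best match.

def ncc(image, template):
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    h, w = image.shape
    scores = np.full((h - th + 1, w - tw + 1), -1.0)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            win = image[i:i + th, j:j + tw]
            wz = win - win.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            if denom > 0:
                scores[i, j] = (wz * t).sum() / denom
    return scores

rng = np.random.default_rng(3)
img = rng.random((40, 40))
tmpl = img[12:20, 25:33].copy()          # "marker" cut from the image itself
score = ncc(img, tmpl)
best = np.unravel_index(np.argmax(score), score.shape)
```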

  7. [Minimum intensity projection image and curved reformation image of the main pancreatic duct obtained by helical CT in patients with main pancreatic duct dilation].

    Science.gov (United States)

    Takeshita, K; Furui, S; Yamauchi, T; Harasawa, A; Kohtake, H; Sasaki, Y; Suzuki, S; Tanaka, H; Takeshita, T

    1999-03-01

    Contrast-enhanced CT was performed in seven patients with pancreatic disease (chronic pancreatitis, n = 3; pancreatic head cancer, n = 2; mucin-producing pancreatic tumor, n = 2) who showed dilation of the main pancreatic duct (MPD). Minimum intensity projection (Min-IP) images of the pancreas were obtained using multi-projection volume reconstruction (MPVR) software by selecting an oblique slab that contained the entire MPD. Curved reformation (CR) images were obtained using multiplanar reformation (MPR) software by tracing the MPD on the Min-IP image. Both Min-IP images and CR images clearly showed the dilated main pancreatic duct in all seven patients. In three of the seven, obstruction of the MPD in the pancreatic head and the cause of obstruction (tumor mass, n = 2; calculus, n = 1) were also clearly seen. Min-IP and CR images seem to be useful for the diagnosis of pancreatic diseases.
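
    A minimum intensity projection reduces, for each display pixel, to taking the minimum along the projection direction through the slab, which is why fluid-filled ducts (dark on contrast-enhanced CT) stand out. A toy sketch with a synthetic volume and the z axis as projection direction:

```python
import numpy as np

# Min-IP: each output pixel is the minimum voxel value along the
# projection axis, so dark structures (e.g. a fluid-filled duct)
# survive the projection while bright tissue is suppressed.

def min_ip(volume, axis=0):
    return volume.min(axis=axis)

vol = np.full((10, 16, 16), 100.0)       # bright, enhancing parenchyma
vol[:, 8, 2:14] = 10.0                   # dark duct running through the slab
proj = min_ip(vol)
```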

  8. Radiometric Compensation of Images Projected on Non-White Surfaces by Exploiting Chromatic Adaptation and Perceptual Anchoring.

    Science.gov (United States)

    Huang, Tai-Hsiang; Wang, Ting-Chun; Chen, Homer H

    2017-01-01

    Flat surfaces in our living environment that might serve as replacements for a projection screen are not necessarily white. We propose a perceptual radiometric compensation method to counteract the effect of colored projection surfaces on image appearance. It reduces color clipping while preserving the hue and brightness of images based on the anchoring property of the human visual system. In addition, it considers the effect of chromatic adaptation on perceptual image quality and fixes the color distortion caused by non-white projection surfaces by properly shifting the color of the image pixels toward the complementary color of the projection surface. User ratings show that our method outperforms the existing methods in 974 out of 1020 subjective tests.

  9. XML (eXtensible Mark-up Language) Industrial Standard, Determining Architecture of the Next Generation of the Internet Software

    CERN Document Server

    Galaktionov, V V

    2000-01-01

    The year 1999 marked the establishment of a new Internet technology: XML (eXtensible Markup Language), a markup language adopted by the WWW Consortium (http://www.w3.org) as a new industrial standard determining the architecture of the next generation of Internet software. This report presents the results of a study of this technology, the basic capabilities of XML, and rules and recommendations for its application.

  10. The Effect of using Facebook Markup Language (FBML) for Designing an E-Learning Model in Higher Education

    OpenAIRE

    Mohammed Amasha; Salem Alkhalaf

    2015-01-01

    This study examines the use of Facebook Markup Language (FBML) to design an e-learning model to facilitate teaching and learning in an academic setting. The qualitative research study presents a case study on how, Facebook is used to support collaborative activities in higher education. We used FBML to design an e-learning model called processes for e-learning resources in the Specialist Learning Resources Diploma (SLRD) program. Two groups drawn from the SLRD program were used; First were th...

  11. Engaging stakeholder communities as body image intervention partners: The Body Project as a case example.

    Science.gov (United States)

    Becker, Carolyn Black; Perez, Marisol; Kilpela, Lisa Smith; Diedrichs, Phillippa C; Trujillo, Eva; Stice, Eric

    2016-03-11

    Despite recent advances in developing evidence-based psychological interventions, substantial changes are needed in the current system of intervention delivery to impact mental health on a global scale (Kazdin & Blase, 2011). Prevention offers one avenue for reaching large populations because prevention interventions often are amenable to scaling-up strategies, such as task-shifting to lay providers, which further facilitate community stakeholder partnerships. This paper discusses the dissemination and implementation of the Body Project, an evidence-based body image prevention program, across 6 diverse stakeholder partnerships that span academic, non-profit and business sectors at national and international levels. The paper details key elements of the Body Project that facilitated partnership development, dissemination and implementation, including use of community-based participatory research methods and a blended train-the-trainer and task-shifting approach. We observed consistent themes across partnerships, including: sharing decision making with community partners, engaging of community leaders as gatekeepers, emphasizing strengths of community partners, working within the community's structure, optimizing non-traditional and/or private financial resources, placing value on cost-effectiveness and sustainability, marketing the program, and supporting flexibility and creativity in developing strategies for evolution within the community and in research. Ideally, lessons learned with the Body Project can be generalized to implementation of other body image and eating disorder prevention programs.

  12. A coded structured light system based on primary color stripe projection and monochrome imaging.

    Science.gov (United States)

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-10-14

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.
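
    The time-multiplexing Gray-code strategy used as the comparison baseline can be sketched as pattern generation plus per-pixel decoding; the stripe resolution below is a toy choice:

```python
import numpy as np

# Time-multiplexed Gray-code stripe patterns: n_bits binary patterns
# encode 2**n_bits stripe positions; adjacent stripes differ in exactly
# one bit, which makes decoding robust at stripe boundaries.

def gray_code_patterns(n_bits, width):
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)            # binary-reflected Gray code per column
    return np.array([(gray >> b) & 1 for b in reversed(range(n_bits))])

def decode_gray(bits):
    """Convert per-pixel Gray-code bit sequences back to column indices."""
    val = bits[0].copy()
    binary = [val]
    for b in bits[1:]:
        val = val ^ b                    # Gray-to-binary: cumulative XOR
        binary.append(val)
    out = np.zeros_like(bits[0], dtype=int)
    for v in binary:
        out = (out << 1) | v
    return out

patterns = gray_code_patterns(n_bits=5, width=32)
decoded = decode_gray(patterns)
```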

  13. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    Directory of Open Access Journals (Sweden)

    Armando Viviano Razionale

    2013-10-01

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.

  14. ABrIL - Advanced Brain Imaging Lab : a cloud based computation environment for cooperative neuroimaging projects.

    Science.gov (United States)

    Neves Tafula, Sérgio M; Moreira da Silva, Nádia; Rozanski, Verena E; Silva Cunha, João Paulo

    2014-01-01

    Neuroscience is an increasingly multidisciplinary and highly cooperative field in which neuroimaging plays an important role. The rapid evolution of neuroimaging demands a growing number of computing resources and skills that need to be put in place at every lab. Typically, each group tries to set up its own servers and workstations to support its neuroimaging needs, having to learn everything from operating-system management to the details of specific neuroscience software tools before any results can be obtained from each setup. This setup and learning process is replicated in every lab, even when a strong collaboration among several groups is under way. In this paper we present a new cloud service model - Brain Imaging Application as a Service (BiAaaS) - and one of its implementations - the Advanced Brain Imaging Lab (ABrIL) - in the form of a ubiquitous virtual desktop remote infrastructure that offers a set of neuroimaging computational services in an interactive, neuroscientist-friendly graphical user interface (GUI). This remote desktop has been used for several multi-institution cooperative projects with different neuroscience objectives that have already achieved important results, such as the contribution to a high-impact paper published in the January issue of the NeuroImage journal. The ABrIL system has shown its applicability in several neuroscience projects at a relatively low cost, promoting truly collaborative actions and speeding up project results and their clinical applicability.

  15. Simulations of Aperture Synthesis Imaging Radar for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, C.; Belyey, V.

    2012-12-01

    EISCAT_3D is a project to build the next generation of incoherent scatter radars endowed with multiple 3-dimensional capabilities that will replace the current EISCAT radars in Northern Scandinavia. Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in three dimensions, including sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Naturally Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. To demonstrate the feasibility of the antenna configurations and the imaging inversion algorithms, a simulation of synthetic incoherent scattering data has been performed. The simulation algorithm incorporates the ability to control the background plasma parameters with non-homogeneous, non-stationary components over an extended 3-dimensional space. Control over the positions of a number of separated receiving antennas, their signal-to-noise ratios and arriving phases allows realistic simulation of a multi-baseline interferometric imaging radar system. The resulting simulated data are fed into various inversion algorithms. This simulation package is a powerful tool to evaluate various antenna configurations and inversion algorithms. Results applied to realistic design alternatives of EISCAT_3D will be described.

  16. Development and comparison of projection and image space 3D nodule insertion techniques

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan

    2016-04-01

    This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. 24 physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques: projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to the real nodules (<3% difference), and in most cases the differences were not statistically significant. Also, R2 values were all >0.97 for both insertion techniques. These data imply that these techniques can confidently be used as a means of inserting virtual nodules in CT datasets. These techniques can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
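
    The validation statistics, a paired t-test and an R2 goodness of fit between physical and virtual volumes, can be sketched from the standard formulas; the volumes below are synthetic stand-ins, not the study's measurements:

```python
import numpy as np

# Paired t statistic and R^2 goodness of fit for matched volume
# measurements (physical nodule vs. its virtually inserted counterpart).

def paired_t(x, y):
    d = np.asarray(x, float) - np.asarray(y, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

def r_squared(x, y):
    """Coefficient of determination of y against x (identity fit)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ss_res = ((y - x) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(4)
physical = rng.uniform(50, 500, size=24)           # 24 nodule volumes, toy values
virtual = physical * rng.normal(1.0, 0.01, 24)     # agree to within a few percent
t_stat = paired_t(physical, virtual)
r2 = r_squared(physical, virtual)
```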

  17. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy

    CERN Document Server

    Li, Ruijiang; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-01-01

    Purpose: To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Methods: Given a set of volumetric images of a patient at N breathing phases as the training data, we perform deformable image registration between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, we can generate new DVFs, which, when applied on the reference image, lead to new volumetric images. We then can reconstruct a volumetric image from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. Our algorithm was implemented on graphics processing units...
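
    The PCA step can be sketched in a few lines: build a low-dimensional basis from training DVFs, then solve for the coefficients that make the projected reconstruction match a measurement. Everything below (sizes, the linear projection operator P, the toy DVFs) is a synthetic stand-in for the clinical pipeline:

```python
import numpy as np

# PCA representation of deformation vector fields (DVFs) and recovery of
# the coefficients of a new DVF from a linear projection measurement.

rng = np.random.default_rng(5)
n_phases, n_voxels = 9, 300
basis_true = rng.standard_normal((2, n_voxels))
coeffs = rng.standard_normal((n_phases, 2))
dvfs = coeffs @ basis_true                      # training DVFs (N-1 phases)

# PCA via SVD of the mean-centered DVF matrix.
mean_dvf = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
components = Vt[:2]                             # a few eigenvectors suffice

# A new DVF observed only through a linear projection operator P.
P = rng.standard_normal((40, n_voxels))
new_dvf = mean_dvf + np.array([1.2, -0.7]) @ components
meas = P @ new_dvf

# Solve for the PCA coefficients w minimizing ||P (mean + w.C) - meas||.
A = P @ components.T
w, *_ = np.linalg.lstsq(A, meas - P @ mean_dvf, rcond=None)
recon = mean_dvf + w @ components
```

    In the paper the forward operator is the (nonlinear) x-ray projection of the deformed reference image, so the coefficients are found by iterative optimization rather than one least-squares solve; the linear version above only illustrates the dimensionality-reduction idea.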

  18. Evaluation of myocardial SPECT imaging reconstructed from 270deg projection data. A study using a cardiac phantom

    Energy Technology Data Exchange (ETDEWEB)

    Kashikura, Kenichi [Japan Science and Technology Corp., Akita (Japan). Akita Lab.; Kobayashi, Hideki; Kashikura, Akemi

    1997-01-01

    SPECT reconstruction is commonly performed using 360° or 180° projection data. However, it is also possible to reconstruct SPECT images using other projection data arcs. The purpose of this study was to characterize images obtained by limiting the projection data to 270° by discarding the projection views with severe attenuation. A series of phantom studies was performed with and without plastic chambers simulating perfusion defects using 201Tl and 99mTc. Images using 270°, 360°, and 180° projection arcs were identically reconstructed from the same data. In the absence of plastic chambers, intraslice uniformity in a given slice was assessed by computing the coefficient of variation (CV) of average counts in 8 ROIs within the slice. Interslice uniformity was assessed by computing the CV of average counts in five short-axis slices. With plastic chambers in place, the variability in defect contrasts was assessed by computing the CV of defect contrasts in 4 chambers, located on the anterior, lateral, inferoposterior, and septal walls. The intraslice uniformity of the 270° images was considerably inferior to that of the 360° and 180° images. The interslice uniformity was highest in the 360° images and lowest in the 180° images. The variation in defect contrasts in the 270° images was higher than in the other two. The 270° images showed a high defect contrast in the septum and high counts in the anterior and anteroseptal walls. Because a large variation in defect contrasts within a segment might result in false-positive or false-negative diagnoses, 270° imaging is not recommended over 360° or 180° imaging. (author)
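
    The uniformity metric used throughout is the coefficient of variation of average ROI counts, which is straightforward to compute; the ROI values below are illustrative, not from the phantom study:

```python
import numpy as np

# Coefficient of variation (CV): sample standard deviation divided by
# the mean of the average counts across ROIs. Lower CV = more uniform.

def coefficient_of_variation(values):
    v = np.asarray(values, float)
    return v.std(ddof=1) / v.mean()

roi_counts = [98.0, 102.0, 101.0, 99.0, 100.0, 97.0, 103.0, 100.0]  # 8 ROIs
cv = coefficient_of_variation(roi_counts)
```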

  19. Multi-scale volumetric cell and tissue imaging based on optical projection tomography (Conference Presentation)

    Science.gov (United States)

    Ban, Sungbea; Cho, Nam Hyun; Ryu, Yongjae; Jung, Sunwoo; Vavilin, Andrey; Min, Eunjung; Jung, Woonggyu

    2016-04-01

    Optical projection tomography (OPT) is a new optical imaging method for visualizing small biological specimens in three dimensions. The most important advantage of OPT is that it fills the gap between MRI and confocal microscopy for specimens in the range of 1-10 mm. Thus, it has mainly been used for whole-mount small animals and developmental studies since this imaging modality was developed. The ability of OPT to deliver anatomical and functional information on relatively large tissue in 3D has made it a promising platform in biomedical research. Recently, the potential of OPT has extended its coverage to the cellular scale. Even though there is increasing demand for a better understanding of cellular dynamics, only a few studies visualizing cellular structure, shape, size, and functional morphology over tissue have been carried out with existing OPT systems due to their limited field of view. In this study, we develop a novel optical imaging system for 3D cellular imaging with OPT integrated with a dynamic focusing technique. Our tomographic setup has great potential for identifying cell characteristics in tissue because it can provide selective contrast on a dynamic focal plane, allowing for fluorescence as well as absorption. While the dominant contrast mechanism of optical imaging techniques uses fluorescence to detect a certain target only, the newly developed OPT system will offer considerable advantages over currently available methods when imaging cellular molecular dynamics by permitting contrast variation. By achieving multi-contrast imaging, this new system is expected to play an important role in delivering better cytological information to pathologists.

  20. The Moon Zoo citizen science project: Preliminary results for the Apollo 17 landing site

    CERN Document Server

    Bugiolacchi, Roberto; Tar, Paul; Thacker, Neil; Crawford, Ian A; Joy, Katherine H; Grindrod, Peter M; Lintott, Chris

    2016-01-01

    Moon Zoo is a citizen science project that utilises internet crowd-sourcing techniques. Moon Zoo users are asked to review high spatial resolution images from the Lunar Reconnaissance Orbiter Camera (LROC), onboard NASA's LRO spacecraft, and perform characterisation tasks such as measuring impact crater sizes and identifying morphological features of interest. The tasks are designed to address issues in lunar science and to aid future exploration of the Moon. We have tested various methodologies and parameters therein to interrogate and reduce the Moon Zoo crater location and size dataset against a validated expert survey. We chose the Apollo 17 region as a test area since it offers a broad range of cratered terrains, including secondary-rich areas, older maria, and uplands. The assessment involved parallel testing in three key areas: (1) filtering of data to remove problematic mark-ups; (2) clustering methods of multiple notations per crater; and (3) derivation of alternative crater degradation indices, based on the s...

  1. Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.

    Science.gov (United States)

    Watanabe, Leandro; Myers, Chris J

    2016-08-19

    The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks a structure for representing large, complex, regular systems in a standard way, such as whole-cell and cellular population models. Such models require a large number of variables to represent certain aspects, for example the chromosome in a whole-cell model or the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis tools need to be aware of the array structure: when the array constructs within a model are expanded, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses populations of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits to this approach, with a modest cost in runtime.
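    The memory advantage of array-aware simulation can be illustrated with a toy sketch: one state list per species, indexed by cell, plus a single shared (non-arrayed) parameter set, instead of expanding the model into N independent copies. The genetic toggle switch equations, rate constants, and step sizes below are illustrative placeholders, not the paper's model.

```python
def simulate_population(n_cells, steps=100, dt=0.01):
    """Euler integration of n_cells identical genetic toggle switches.

    Array-aware layout: one list per species indexed by cell, plus one
    shared parameter set, rather than n_cells expanded model copies.
    """
    x = [1.0] * n_cells          # repressor A concentration per cell
    y = [0.5] * n_cells          # repressor B concentration per cell
    k, K, d = 2.0, 1.0, 1.0      # shared (non-arrayed) parameters
    for _ in range(steps):
        for i in range(n_cells):
            dx = k / (1 + (y[i] / K) ** 2) - d * x[i]   # mutual repression
            dy = k / (1 + (x[i] / K) ** 2) - d * y[i]
            x[i] += dx * dt
            y[i] += dy * dt
    return x, y
```

    Per-cell noise or parameter variation would be handled by indexing additional arrayed quantities; the point of the layout is that the model description stays constant-size in the number of cells.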

  2. A two-way interface between limited Systems Biology Markup Language and R.

    Science.gov (United States)

    Radivoyevitch, Tomas

    2004-12-07

    Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML() which maps this R model structure to SBML level 2, and read.SBML() which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.
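    The mapping that write.SBML() performs can be sketched in miniature (in Python rather than R, with xml.etree standing in for the paper's list-to-XML traversal). Element and attribute names follow the SBML Level 2 core; the dict-based model and the species values are hypothetical examples.

```python
import xml.etree.ElementTree as ET

def write_sbml(model):
    # Serialize a dict-based model (analogous to the paper's R list
    # structure) into a minimal SBML Level 2 skeleton.
    sbml = ET.Element("sbml", {"xmlns": "http://www.sbml.org/sbml/level2",
                               "level": "2", "version": "1"})
    mdl = ET.SubElement(sbml, "model", {"id": model["id"]})
    species_list = ET.SubElement(mdl, "listOfSpecies")
    for sp in model["species"]:
        ET.SubElement(species_list, "species",
                      {"id": sp["id"],
                       "initialConcentration": str(sp["init"])})
    return ET.tostring(sbml, encoding="unicode")

doc = write_sbml({"id": "purine", "species": [{"id": "ATP", "init": 2.5}]})
```

    A real writer would also emit compartments, reactions, and kinetic laws, and a read.SBML() counterpart would invert the traversal back into the native model structure.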

  3. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    Science.gov (United States)

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230

  4. A new instrument to assess physician skill at thoracic ultrasound, including pleural effusion markup.

    Science.gov (United States)

    Salamonsen, Matthew; McGrath, David; Steiler, Geoff; Ware, Robert; Colt, Henri; Fielding, David

    2013-09-01

    To reduce complications and increase success, thoracic ultrasound is recommended to guide all chest drainage procedures. Despite this, no tools currently exist to assess proceduralist training or competence. This study aims to validate an instrument to assess physician skill at performing thoracic ultrasound, including effusion markup, and examine its validity. We developed an 11-domain, 100-point assessment sheet in line with British Thoracic Society guidelines: the Ultrasound-Guided Thoracentesis Skills and Tasks Assessment Test (UGSTAT). The test was used to assess 22 participants (eight novices, seven intermediates, seven advanced) on two occasions while performing thoracic ultrasound on a pleural effusion phantom. Each test was scored by two blinded expert examiners. Validity was examined by assessing the ability of the test to stratify participants according to expected skill level (analysis of variance) and demonstrating test-retest and intertester reproducibility by comparison of repeated scores (mean difference [95% CI] and paired t test) and the intraclass correlation coefficient. Mean scores for the novice, intermediate, and advanced groups were 49.3, 73.0, and 91.5 respectively, which were all significantly different (P < .0001). There were no significant differences between repeated scores. Procedural training on mannequins prior to unsupervised performance on patients is rapidly becoming the standard in medical education. This study has validated the UGSTAT, which can now be used to determine the adequacy of thoracic ultrasound training prior to clinical practice. It is likely that its role could be extended to live patients, providing a way to document ongoing procedural competence.

  5. Histoimmunogenetics Markup Language 1.0: Reporting next generation sequencing-based HLA and KIR genotyping.

    Science.gov (United States)

    Milius, Robert P; Heuer, Michael; Valiga, Daniel; Doroschak, Kathryn J; Kennedy, Caleb J; Bolon, Yung-Tsi; Schneider, Joel; Pollack, Jane; Kim, Hwa Ran; Cereb, Nezih; Hollenbach, Jill A; Mack, Steven J; Maiers, Martin

    2015-12-01

    We present an electronic format for exchanging data for HLA and KIR genotyping with extensions for next-generation sequencing (NGS). This format addresses NGS data exchange by refining the Histoimmunogenetics Markup Language (HML) to conform to the proposed Minimum Information for Reporting Immunogenomic NGS Genotyping (MIRING) reporting guidelines (miring.immunogenomics.org). Our refinements of HML include two major additions. First, NGS is supported by new XML structures to capture additional NGS data and metadata required to produce a genotyping result, including analysis-dependent (dynamic) and method-dependent (static) components. A full genotype, consensus sequence, and the surrounding metadata are included directly, while the raw sequence reads and platform documentation are externally referenced. Second, genotype ambiguity is fully represented by integrating Genotype List Strings, which use a hierarchical set of delimiters to represent allele and genotype ambiguity in a complete and accurate fashion. HML also continues to enable the transmission of legacy methods (e.g. site-specific oligonucleotide, sequence-specific priming, and Sequence Based Typing (SBT)), adding features such as allowing multiple group-specific sequencing primers, and fully leveraging techniques that combine multiple methods to obtain a single result, such as SBT integrated with NGS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
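    The hierarchical delimiters of a Genotype List String can be parsed by splitting recursively, outermost delimiter first. The precedence used here ('^' joins loci, '|' separates ambiguous genotypes, '+' joins chromosomal copies, '~' joins phased alleles, '/' separates ambiguous alleles) follows the published GL String grammar; the example alleles are hypothetical, and a production parser would also validate allele names.

```python
def parse_gl_string(gl):
    """Parse a Genotype List (GL) String into nested lists, one nesting
    level per delimiter, from '^' (outermost) down to '/' (innermost)."""
    def split(s, delims):
        if not delims:
            return s                      # leaf: a single allele name
        head, *rest = delims
        return [split(part, rest) for part in s.split(head)]
    return split(gl, ["^", "|", "+", "~", "/"])

# One locus, one genotype: an ambiguous first allele plus a second allele.
g = parse_gl_string("HLA-A*01:01/HLA-A*01:02+HLA-A*24:02")
```

    Each nesting level is present even when a delimiter does not occur, which keeps downstream code uniform at the cost of some apparent depth.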

  6. SBMLeditor: effective creation of models in the Systems Biology Markup language (SBML).

    Science.gov (United States)

    Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas

    2007-03-06

    The need for a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low-level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. SBMLeditor is written in Java using JCompneur, a library providing interfaces to easily display an XML document as a tree, which dramatically decreases the development time for a new XML editor. The possibility of including custom dialogs for different tags allows great freedom in editing and validating the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. SBMLeditor contains many improvements compared with a generic XML editor and allows users to create an SBML model quickly and without syntactic errors.

  7. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.

    Science.gov (United States)

    Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank Thomas

    2015-09-04

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e., the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.

  8. Coding practice of the Journal Article Tag Suite extensible markup language

    Directory of Open Access Journals (Sweden)

    Sun Huh

    2014-08-01

    In general, Journal Article Tag Suite (JATS) extensible markup language (XML) coding is processed automatically by an XML filtering program. In this article, the basic tagging in JATS is explained in terms of coding practice. A text editor that supports UTF-8 encoding is necessary to input JATS XML data that works in every language. Any character representable in Unicode can be used in JATS XML, and commonly available web browsers can be used to view JATS XML files. JATS XML files can refer to document type definitions, extensible stylesheet language files, and cascading style sheets, but they must specify the locations of those files. Tools for validating JATS XML files are available via the web sites of PubMed Central and ScienceCentral. Once these files are uploaded to a web server, they can be accessed from all over the world by anyone with a browser. Encoding an example article in JATS XML may help editors in deciding on the adoption of JATS XML.
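    A minimal JATS-style fragment shows the basic tagging the article describes. The element names (`article`, `front`, `article-meta`, `title-group`, `article-title`, `body`, `sec`) come from the JATS tag suite; a real file would declare the JATS DTD and carry far more front matter, so this sketch is illustrative only.

```python
import xml.etree.ElementTree as ET

# Minimal JATS-style article fragment (no DTD declaration, abridged
# front matter) to illustrate the basic tagging structure.
jats = """<article>
  <front>
    <article-meta>
      <title-group>
        <article-title>Coding practice of JATS XML</article-title>
      </title-group>
    </article-meta>
  </front>
  <body>
    <sec><title>Introduction</title><p>Basic tagging in JATS.</p></sec>
  </body>
</article>"""

root = ET.fromstring(jats)
title = root.findtext("front/article-meta/title-group/article-title")
```

    Any UTF-8-capable text editor can produce such a file, and an ordinary web browser (or a validating parser pointed at the JATS DTD) can display or check it, as the abstract notes.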

  9. A two-way interface between limited Systems Biology Markup Language and R

    Directory of Open Access Journals (Sweden)

    Radivoyevitch Tomas

    2004-12-01

    Background: Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. Results: A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML(), which maps this R model structure to SBML level 2, and read.SBML(), which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. Conclusions: List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.

  10. Grid Databases for Shared Image Analysis in the MammoGrid Project

    CERN Document Server

    Amendolia, S R; Hauer, T; Manset, D; McClatchey, R; Odeh, M; Reading, T; Rogulin, D; Schottlander, D; Solomonides, T

    2004-01-01

    The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation and b) the ability to service real-world clinician queries across a distributed and federated database. The MammoGrid project will prove the viability of the Grid by harnessing its power to enable radiologists from geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK

  11. Public financing of research projects in Poland – its image and consequences?

    Directory of Open Access Journals (Sweden)

    Feldy Marzena

    2016-12-01

    Both the size of appropriations and their distribution have had a profound impact on the shape and activities of the science sector. Creating a fair system for distributing public resources to research, one that also facilitates the effective implementation of scientific policy goals, represents a major challenge. The determination of the right proportions between individual distribution channels remains critical. Although this task is the responsibility of the State, cooperation with the scientific community in this respect is desirable. Implementing solutions that raise the concerns of scientists leads to system instability and reduced effectiveness, manifested among other things in lower indicators of scientific excellence and innovation in the country. These observations are pertinent to Poland, where the manner in which scientific institutes operate was changed under the 2009–2011 reform. A neoliberal operating model based on competitiveness and the rewarding of top-rated scientific establishments and scientists was implemented. In light of these facts, research that provides information on how the implemented changes are perceived by the scientific community seems appropriate. The aim of this article is in particular to present how the project model of financing laid down under the reform is perceived and what kind of image it has acquired among Polish scientists. In order to gain a comprehensive picture of the situation, both the rational and the emotional image were analysed. The conclusions regarding the perception of the project model were drawn from empirical material collected in a qualitative study, the specifics of which are presented in the chapter on methodology. Prior to that, the author discusses the basic models for the distribution of state support for science and characterises the most salient features of the

  12. Breast mass detection in tomosynthesis projection images using information-theoretic similarity measures

    Science.gov (United States)

    Singh, Swatee; Tourassi, Georgia D.; Lo, Joseph Y.

    2007-03-01

    The purpose of this project is to study Computer Aided Detection (CADe) of breast masses for digital tomosynthesis. It is believed that tomosynthesis will show improvement over conventional mammography in the detection and characterization of breast masses by removing overlapping dense fibroglandular tissue. This study used 60 human subject cases collected as part of on-going clinical trials at Duke University. Raw projection images were used to identify suspicious regions in the algorithm's high-sensitivity, low-specificity stage using a Difference of Gaussian (DoG) filter. The filtered images were thresholded to yield initial CADe hits that were then shifted and added to yield a 3D distribution of suspicious regions. These were further summed in the depth direction to yield a flattened probability map of suspicious hits for ease of scoring. To reduce false positives, we developed an algorithm based on information theory in which similarity metrics were calculated using knowledge databases consisting of tomosynthesis regions of interest (ROIs) obtained from projection images. We evaluated five similarity metrics to test the false-positive reduction performance of our algorithm: joint entropy, mutual information, Jensen difference divergence, symmetric Kullback-Leibler divergence, and conditional entropy. The best performance was achieved using the joint entropy similarity metric, resulting in an ROC Az of 0.87 +/- 0.01. As a whole, the CADe system can detect breast masses in this data set with 79% sensitivity and 6.8 false positives per scan. In comparison, the original radiologists performed with only 65% sensitivity when using mammography alone, and 91% sensitivity when using tomosynthesis alone.
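    The high-sensitivity first stage described above, a Difference of Gaussian filter followed by thresholding, can be sketched as follows. The sigmas and threshold are illustrative placeholders rather than the study's tuned values, and the pure-Python separable convolution is for clarity, not speed.

```python
import math

def gaussian_kernel(sigma, radius):
    # Normalized 1D Gaussian kernel of length 2*radius + 1.
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve_sep(img, kernel):
    # Separable 2D convolution (rows then columns) with edge clamping.
    r = len(kernel) // 2
    h, w = len(img), len(img[0])
    tmp = [[sum(kernel[j + r] * img[y][min(max(x + j, 0), w - 1)]
                for j in range(-r, r + 1)) for x in range(w)]
           for y in range(h)]
    return [[sum(kernel[j + r] * tmp[min(max(y + j, 0), h - 1)][x]
                 for j in range(-r, r + 1)) for x in range(w)]
            for y in range(h)]

def dog_candidates(img, s1=1.0, s2=2.0, thresh=0.1):
    """DoG band-pass followed by thresholding: the high-sensitivity,
    low-specificity candidate stage (parameters are illustrative)."""
    r = int(3 * s2)
    g1 = convolve_sep(img, gaussian_kernel(s1, r))
    g2 = convolve_sep(img, gaussian_kernel(s2, r))
    return [(y, x) for y in range(len(img)) for x in range(len(img[0]))
            if g1[y][x] - g2[y][x] > thresh]
```

    In the paper's pipeline, the candidate coordinates from each projection image would then be shifted and added into a 3D suspicion map before the information-theoretic false-positive reduction stage.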

  13. Image aberrations in optical three-dimensional measurement systems with fringe projection.

    Science.gov (United States)

    Brakhage, Peter; Notni, Gunther; Kowarschik, Richard

    2004-06-01

    In optical shape measurement systems, systematic errors appear as a result of imaging aberrations of the lens assemblies in the cameras and projectors. A mathematical description of this effect is intended to correct the whole measurement area with a few independent coefficients. We apply the ideas of photogrammetry to one- and two-dimensional fringe projection techniques. We also introduce some new terms for close-range applications and telecentric objectives. Further, an algorithm for distance-dependent corrections is introduced. Also, we describe a new method with which to determine coefficients of aberration with an optimization-based method.

  14. Advanced practice quality improvement project: how to influence physician radiologic imaging ordering behavior.

    Science.gov (United States)

    Durand, Daniel J; Kohli, Marc D

    2014-12-01

    With growing pressure on the health care sector to improve quality and reduce costs, the stakes associated with imaging appropriateness have never been higher. Although radiologists historically functioned as imaging gatekeepers, this role has been deprioritized in the recent past. This article discusses several potential practice quality improvement projects that can help radiologists regain their role as valued consultants and integral members of the care team. By applying the PDSA framework, radiologists can incrementally optimize their practice's consultation service. While it can be expected that different strategies will gain traction in different environments, it is our hope that the methodology described here will prove useful to most or all practices as a starting point. In addition, we discuss several other influencing techniques that extend beyond traditional consultation services.

  15. A new EU-funded project for enhanced real-time imaging for radiotherapy

    CERN Multimedia

    KTT Life Sciences Unit

    2011-01-01

    ENTERVISION (European training network for digital imaging for radiotherapy) is a new Marie Curie Initial Training Network coordinated by CERN, which brings together multidisciplinary researchers to carry out R&D in physics techniques for application in the clinical environment.   ENTERVISION was established in response to a critical need to reinforce research in online 3D digital imaging and to train professionals in order to deliver some of the key elements for early detection and more precise treatment of tumours. The main goal of the project is to train researchers who will help contribute to technical developments in an exciting multidisciplinary field, where expertise from physics, medicine, electronics, informatics, radiobiology and engineering merges and catalyses the advancement of cancer treatment. With this aim in mind, ENTERVISION brings together ten academic institutes and research centres, as well as the two leading European companies in particle therapy, IBA and Siemens. ...

  16. Fundamental remote sensing science research program. Part 1: Status report of the mathematical pattern recognition and image analysis project

    Science.gov (United States)

    Heydorn, R. D.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for the purpose of making selected inferences about the Earth.

  17. Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.

    Science.gov (United States)

    Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué

    2014-06-12

    Maximum Intensity Projections (MIP) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. Extracting dendritic trees from noisy images remains, however, a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentations of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated by the imaging process and the aggregation of information in the MIP images. These noise models are then used within a probabilistic, or Bayesian, framework to provide a coarse 2D dendritic tree segmentation. Finally, some post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a Leave-One-Out Cross Validation (LOOCV) method for an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentations over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operator Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples, including the extraction of skeletonized structures, and compare our method to state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentations in noisy MIPs, outperforming traditional intensity-based methods. Such approach provides a

  18. Landsat Image Analysis of the Rebea Agricultural Project, Mosul Dam and Lake, Northern Iraq

    Science.gov (United States)

    Welsh, W.; Alassadi, F.

    2014-12-01

    An archive of 70 good-to-excellent quality Landsat TM and ETM+ images acquired between 1984 and 2011 was identified through visual examination of the GLOVIS web portal. After careful consideration of factors relevant to agriculture in the region (e.g., crop calendar) and associated image processing needs (e.g., preference for anniversary dates), the images deemed most appropriate were downloaded. Standard preprocessing, including visual quality and statistical inspection and sub-setting to the study area, was performed, and the results were combined in a database with available GIS data. The resolution-merge spatial enhancement technique was applied to the ETM+ imagery to improve visual clarity and interpretability. The NDVI was calculated for all images in the time series. Unsupervised classification was performed for dates ranging from 1987, just before the inception of the Rebea project in 1989, through 2011. In order to reduce uncertainty related to the lack of detailed ancillary and/or ground reference data, simple land cover classes were mapped, specifically: surface water, agriculture, and other. The results quantified and tracked the area of each class over time, and showed a marked decrease in agriculture between the Iraq invasion in 2003 and the end of the study period in 2011, despite massive efforts and capital spent by the United States and Iraqi governments to improve agriculture in the area. Complications in understanding the role of warfare and conflict on the environment in the Mosul region include severe drought and water shortages, including effects of the Turkish GAP water resource development project in the headwaters of the Tigris-Euphrates, as well as Mosul Dam structural problems associated with the geologically unsuitable conditions upon which the dam is constructed. The Islamic State in Iraq and Syria (ISIS) likely captured the Mosul Dam on the day this abstract was submitted. Our Landsat-based monitoring and analysis of the Rebea Project and

  19. Imaging Seismic Source Variations Using Back-Projection Methods at El Tatio Geyser Field, Northern Chile

    Science.gov (United States)

    Kelly, C. L.; Lawrence, J. F.

    2014-12-01

    During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50x50 m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200 m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120 s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions, which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers back to their sources, assuming linear source-to-receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapsed images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and

  20. Tomographic image reconstruction from incomplete view data by convex projections and direct fourier inversion.

    Science.gov (United States)

    Sezan, M; Stark, H

    1984-01-01

    We consider the problem of reconstructing CAT imagery by the direct Fourier method (DFM) when not all view data are available. To restore the missing information we use the method of projections onto convex sets (POCS). POCS is a recursive image restoration technique that finds a solution consistent with the measured data and a priori known constraints in both the space and Fourier domain. Because DFM reconstruction is a frequency-domain technique it is ideally matched to POCS restoration when, for one reason or another, we are forced to generate an image from a less than complete set of view data. We design and apply an algorithm (PRDF) which interpolates/extrapolates the missing Fourier domain information by POCS and reconstructs an image by DFM. A simulated human thorax cross section is restored and reconstructed. The restorations using POCS are compared with the Gerchberg-Papoulis extrapolation method and shown to be superior. Applications of PRDF to other types of medical imaging modalities are discussed.
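    A one-dimensional toy version of the space/Fourier alternation conveys the idea behind PRDF: alternately project onto the set of signals agreeing with the measured samples and onto the set of band-limited signals (the Gerchberg-Papoulis scheme viewed as POCS). The signal, band, and sampling pattern below are made up for illustration; the actual algorithm works on 2D view data, and the naive DFT here stands in for an FFT.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def pocs_extrapolate(samples, known, bandwidth, iters=300):
    """Alternate two convex projections: (1) zero all Fourier
    coefficients outside the assumed band; (2) replace the signal at
    the known time indices with the measured samples."""
    n = len(samples)
    x = [samples[t] if known[t] else 0.0 for t in range(n)]
    for _ in range(iters):
        X = dft(x)
        for k in range(n):
            if bandwidth < k < n - bandwidth:   # out-of-band bins
                X[k] = 0
        x = idft(X)
        for t in range(n):
            if known[t]:
                x[t] = samples[t]               # data consistency
    return x
```

    Each step is a projection onto a convex set (a linear subspace here), so the iterate converges toward a signal consistent with both the measured data and the band-limit constraint, which is exactly the POCS property the paper exploits.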

  1. Projection x-ray imaging with photon energy weighting: experimental evaluation with a prototype detector.

    Science.gov (United States)

    Shikhaliev, Polad M

    2009-08-21

    The signal-to-noise ratio (SNR) in x-ray imaging can be increased using a photon counting detector which could allow for rejecting electronics noise and for weighting x-ray photons according to their energies. This approach, however, was not feasible for a long time because photon counting x-ray detectors with very high count rates, good energy resolution and a large number of small pixels were required. These problems have been addressed with the advent of new detector materials, fast readout electronics and powerful computers. In this work, we report on the experimental evaluation of projection x-ray imaging with a photon counting cadmium-zinc-telluride (CZT) detector with energy resolving capabilities. The detector included two rows of pixels with 128 pixels per row with 0.9 x 0.9 mm(2) pixel size, and a 2 Mcount pixel(-1) s(-1) count rate. The x-ray tube operated at 120 kVp tube voltage with 2 mm Al-equivalent inherent filtration. The x-ray spectrum was split into five regions, and five independent x-ray images were acquired at a time. These five quasi-monochromatic x-ray images were used for x-ray energy weighting and material decomposition. A tissue-equivalent phantom was used including contrast elements simulating adipose, calcifications, iodine and air. X-ray energy weighting improved the SNR of calcifications and iodine by a factor of 1.32 and 1.36, respectively, as compared to charge integrating. Material decomposition was performed by dual energy subtraction. The low- and high-energy images were generated in the energy ranges of 25-60 keV and 60-120 keV, respectively, by combining five monochromatic image data into two. X-ray energy weighting was applied to low- and high-energy images prior to subtraction, and this improved the SNR of calcifications and iodine in dual energy subtracted images by a factor of 1.34 and 1.25, respectively, as compared to charge integrating. 
The detector energy resolution, spatial resolution, linearity, count rate, noise and
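The weighting scheme the abstract contrasts with charge integration can be sketched as follows. Everything here is illustrative: the five bin energies, the count levels, and the E⁻³ weighting law are assumptions for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical five energy-bin images (photon counts) from a counting
# detector; bin centres and count levels are invented for illustration.
bin_energies = np.array([30.0, 45.0, 60.0, 80.0, 100.0])   # keV
images = np.stack([rng.poisson(lam, size=(64, 64)).astype(float)
                   for lam in (200, 180, 150, 120, 90)])

# Charge integration: each photon contributes in proportion to its energy.
integrating = np.tensordot(bin_energies, images, axes=1)

# Photon-energy weighting: weight each bin by ~E^-3, favouring the
# low-energy photons that carry most of the subject contrast.
weights = bin_energies ** -3
weights /= weights.sum()
weighted = np.tensordot(weights, images, axes=1)
```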

  2. Multidetector CT evaluation of central airways stenoses: Comparison of virtual bronchoscopy, minimal-intensity projection, and multiplanar reformatted images

    OpenAIRE

    Sundarakumar, Dinesh K; Bhalla, Ashu S; Sharma, Raju; Hari, Smriti; Guleria, Randeep; Khilnani, Gopi C.

    2011-01-01

    Aims: To evaluate the diagnostic utility of virtual bronchoscopy, multiplanar reformatted images, and minimal-intensity projection in assessing airway stenoses. Settings and Design: It was a prospective study involving 150 patients with symptoms of major airway disease. Materials and Methods: Fifty-six patients were selected for analysis based on the detection of major airway lesions on fiber-optic bronchoscopy (FB) or routine axial images. Comparisons were made between axial images, virtual ...

  3. A study to evaluate the reliability of using two-dimensional photographs, three-dimensional images, and stereoscopic projected three-dimensional images for patient assessment.

    Science.gov (United States)

    Zhu, S; Yang, Y; Khambay, B

    2017-03-01

    Clinicians are accustomed to viewing conventional two-dimensional (2D) photographs and assume that viewing three-dimensional (3D) images is similar. Facial images captured in 3D are not viewed in true 3D; this may alter clinical judgement. The aim of this study was to evaluate the reliability of using conventional photographs, 3D images, and stereoscopic projected 3D images to rate the severity of the deformity in pre-surgical class III patients. Forty adult patients were recruited. Eight raters assessed facial height, symmetry, and profile using the three different viewing media and a 100-mm visual analogue scale (VAS), and appraised the most informative viewing medium. Inter-rater consistency was above good for all three media. Intra-rater reliability was not significantly different for rating facial height using 2D (P=0.704), symmetry using 3D (P=0.056), and profile using projected 3D (P=0.749). Using projected 3D for rating profile and symmetry resulted in significantly lower median VAS scores than either 3D or 2D images (all P values significant), and stereoscopic 3D projection was the preferred method for rating. The reliability of assessing specific characteristics was dependent on the viewing medium. Clinicians should be aware that the visual information provided when viewing 3D images is not the same as when viewing 2D photographs, especially for facial depth, and this may change the clinical impression.

  4. Online monitoring of gas-solid two-phase flow using projected CG method in ECT image reconstruction

    Institute of Scientific and Technical Information of China (English)

    Qi Wang; Chengyi Yang; Huaxiang Wang; Ziqiang Cui; Zhentao Gao

    2013-01-01

    Electrical capacitance tomography (ECT) is a promising technique for multi-phase flow measurement due to its high speed, low cost and non-intrusive sensing. Image reconstruction for ECT is an inverse problem of finding the permittivity distribution of an object by measuring the electrical capacitances between sets of electrodes placed around its periphery. The conjugate gradient (CG) method is a popular image reconstruction method for ECT, in spite of its low convergence rate. In this paper, an advanced version of the CG method, the projected CG method, is used for image reconstruction of an ECT system. The solution space is projected into the Krylov subspace and the inverse problem is solved by the CG method in a low-dimensional specific subspace. Both static and dynamic experiments were carried out for gas-solid two-phase flows. The flow regimes are identified using the reconstructed images obtained with the projected CG method. The results obtained indicate that the projected CG method improves the quality of reconstructed images and dramatically reduces computation time, as compared to the traditional sensitivity, Landweber, and CG methods. Furthermore, the projected CG method was also used to estimate the important parameters of the pneumatic conveying process, such as the volume concentration, flow velocity and mass flow rate of the solid phase. Therefore, the projected CG method is considered suitable for online gas-solid two-phase flow measurement.
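As a rough illustration of CG-type ECT reconstruction, the sketch below runs plain CGLS (conjugate gradients on the normal equations, whose k-th iterate is the least-squares-optimal solution in a k-dimensional Krylov subspace) on a made-up linearised model; the sensitivity matrix, sizes, and phantom are all assumptions, and the paper's projected CG variant is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical linearised ECT model: capacitances c = S @ g, with S the
# sensitivity matrix and g the flattened permittivity image.
S = rng.standard_normal((66, 256))            # 66 electrode-pair measurements
g_true = np.zeros(256)
g_true[100:120] = 1.0                          # toy permittivity distribution
c = S @ g_true

# CGLS iterations: minimise ||c - S g|| over a growing Krylov subspace.
g = np.zeros(256)
r = c - S @ g
p = d = S.T @ r
delta = d @ d
for _ in range(20):
    q = S @ p
    alpha = delta / (q @ q)
    g += alpha * p                             # update image estimate
    r -= alpha * q                             # update residual
    d = S.T @ r
    delta_new = d @ d
    p = d + (delta_new / delta) * p            # new conjugate direction
    delta = delta_new
```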

  5. Using the Mars Student Imaging Project to Integrate Science and English into Middle School Classrooms

    Science.gov (United States)

    Lindgren, C. F.; Troy, M. T.; Valderrama, P.

    2005-12-01

    Bringing science to life in a middle school classroom, and getting students excited about writing an English research paper, can be a challenge. We met the challenge by using the exploration of Mars with Arizona State University's (ASU) Mars Student Imaging Project (MSIP). We replaced individuals writing their own research papers with teams writing scientific proposals for use of the 2001 Mars Odyssey Orbiter. The 126 students on our academic team divided themselves into 26 teams. Each team selected a Leader, Archivist, Publicist, and Bibliographer. I was the Principal Investigator for each team. For twelve weeks the teams formally met once a week to discuss their progress and plan strategies for the following week. We created a website to communicate our progress. During the twelve weeks, the major task was to narrow each general topic, such as "Volcanoes on Mars," to a specific topic that could be answered by an 18 km by 60 km visible light image, such as "Is it Possible to Find the Relative Age of Volcanic Depressions in a Lava Flow Using a Mars Odyssey Image?" In addition to traditional research methods, we also participated in four teleconferences with ASU scientists chaired by Paige Valderrama, Assistant Director of the Mars Education Program. As the project evolved, I guided the teams with content, while the English teacher provided strategies for writing a meaningful persuasive essay, using citations, and recording bibliographical entries. When the proposals were completed, each team created a PowerPoint presentation to introduce their proposal to everyone for peer review. The students were tough but fair in their evaluations; in several cases, they did not cast one of their three votes for their own proposal! They decided that ten proposals met the criteria established by ASU. Those teams selected one member to use the JMARS software to target locations on Mars. The imagers spent two intensive days learning the software and targeting the surface. When we received

  6. Adaptive Subspace-based Inverse Projections via Division into Multiple Sub-problems for Missing Image Data Restoration.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2016-10-10

    This paper presents adaptive subspace-based inverse projections via division into multiple sub-problems (ASIP-DIMS) for missing image data restoration. In the proposed method, a target problem for estimating missing image data is divided into multiple sub-problems, and each sub-problem is iteratively solved with constraints of other known image data. By projection into a subspace model of image patches, the solution of each sub-problem is calculated, where we call this procedure "subspace-based inverse projection" for simplicity. The proposed method can use higher-dimensional subspaces for finding unique solutions in each sub-problem, and successful restoration becomes feasible since a high level of image representation performance can be preserved. This is the biggest contribution of this paper. Furthermore, the proposed method generates several subspaces from known training examples and enables derivation of a new criterion in the above framework to adaptively select the optimal subspace for each target patch. In this way, the proposed method realizes missing image data restoration using ASIP-DIMS. Since our method can estimate any kind of missing image data, its potential in two image restoration tasks, image inpainting and super-resolution, based on several methods for multivariate analysis is also shown in this paper.

  7. Making YOHKOH SXT Images Available to the Public: The YOHKOH Public Outreach Project

    Science.gov (United States)

    Larson, M. B.; McKenzie, D.; Slater, T.; Acton, L.; Alexander, D.; Freeland, S.; Lemen, J.; Metcalf, T.

    1999-05-01

    The NASA-funded Yohkoh Public Outreach Project (YPOP) provides public access to high quality Yohkoh SXT data via the World Wide Web. The products of this effort are available to the scientific research community, K-12 schools, and informal education centers including planetaria, museums, and libraries. The project utilizes the intrinsic excitement of the SXT data, and in particular the SXT movies, to develop science learning tools and classroom activities. The WWW site at URL: http://solar.physics.montana.edu/YPOP/ uses a movie theater theme to highlight available Yohkoh movies in a format that is entertaining and inviting to non-scientists. The site features informational tours of the Sun as a star, the solar magnetic field, the internal structure and the Sun's general features. The on-line Solar Classroom has proven very popular, showcasing hands-on activities about image filtering, the solar cycle, satellite orbits, image processing, construction of a model Yohkoh satellite, solar rotation, measuring sunspots and building a portable sundial. The YPOP Guestbook has been helpful in evaluating the usefulness of the site, with over 300 detailed comments to date.

  8. Lesion detectability in stereoscopically viewed digital breast tomosynthesis projection images: a model observer study with anthropomorphic computational breast phantoms

    Science.gov (United States)

    Reinhold, Jacob; Wen, Gezheng; Lo, Joseph Y.; Markey, Mia K.

    2017-03-01

    Stereoscopic views of 3D breast imaging data may better reveal the 3D structures of breasts, and potentially improve the detection of breast lesions. The imaging geometry of digital breast tomosynthesis (DBT) lends itself naturally to stereo viewing because a stereo pair can be easily formed by two projection images with a reasonable separation angle for perceiving depth. This simulation study attempts to mimic breast lesion detection on stereo viewing of a sequence of stereo pairs of DBT projection images. 3D anthropomorphic computational breast phantoms were scanned by a simulated DBT system, and spherical signals were inserted into different breast regions to imitate the presence of breast lesions. The regions of interest (ROI) had different local anatomical structures and consequently different background statistics. The projection images were combined into a sequence of stereo pairs, and then presented to a stereo matching model observer for determining lesion presence. The signal-to-noise ratio (SNR) was used as the figure of merit in evaluation, and the SNR from the stack of reconstructed slices was considered as the benchmark. We have shown that: 1) incorporating local anatomical backgrounds may improve lesion detectability relative to ignoring location-dependent image characteristics. The SNR was lower for the ROIs with the higher local power-law-noise coefficient β. 2) Lesion detectability may be inferior on stereo viewing of projection images relative to conventional viewing of reconstructed slices, but further studies are needed to confirm this observation.

  9. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction

    Science.gov (United States)

    Nikazad, T.; Davidi, R.; Herman, G. T.

    2012-03-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least-squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from x-ray CT projection data.
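A minimal example of a block-iterative projection method of the kind studied above (without the perturbation-resilience or acceleration machinery) is cyclic block projection onto the affine sets defined by row blocks of a consistent linear system; all sizes and data below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy consistent system A x = b.
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
b = A @ x_true

x = np.zeros(20)
blocks = np.array_split(np.arange(40), 8)     # 8 blocks of rows
for sweep in range(50):
    for rows in blocks:
        Ab, bb = A[rows], b[rows]
        # Least-norm correction projecting x onto {y : Ab y = bb}.
        x += np.linalg.pinv(Ab) @ (bb - Ab @ x)
```

For a consistent system, cycling through the block projections converges to a point satisfying every block, matching the abstract's consistent case; the paper's contribution lies in the accelerated, perturbation-resilient versions not shown here.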

  10. The price of surgery: markup of operative procedures in the United States.

    Science.gov (United States)

    Gani, Faiz; Makary, Martin A; Pawlik, Timothy M

    2017-02-01

    Despite cost containment efforts, the price for surgery is not subject to any regulations. We sought to characterize and compare variability in pricing for commonly performed major surgical procedures across the United States. Medicare claims corresponding to eight major surgical procedures (aortic aneurysm repair, aortic valvuloplasty, carotid endarterectomy, coronary artery bypass grafting, esophagectomy, pancreatectomy, liver resection, and colectomy) were identified using the Medicare Provider Utilization and Payment Data Physician and Other Supplier Public Use File for 2013. For each procedure, total charges, Medicare-allowable costs, and total payments were recorded. A procedure-specific markup ratio (MR; ratio of total charges to Medicare-allowable costs) was calculated and compared between procedures and across states. Variation in MR was compared using a coefficient of variation (CoV). Among all providers, the median MR was 3.5 (interquartile range: 3.1-4.0). MR was noted to vary by procedure, ranging from 3.0 following colectomy to 6.0 following carotid endarterectomy (P < 0.001). MR also varied for the same procedure; varying the least after liver resection (CoV = 0.24), while coronary artery bypass grafting pricing demonstrated the greatest variation in MR (CoV = 0.53). Compared with the national average, MR varied by 36% between states, ranging from 1.8 to 13.0. Variation in MR was also noted within the same state, varying by 15% within the state of Arkansas (CoV = 0.15) compared with 51% within the state of Wisconsin (CoV = 0.51). Significant variation was noted for the price of surgery by procedure as well as between and within different geographical regions. Greater scrutiny and transparency in the price of surgery is required to promote cost containment. Copyright © 2016 Elsevier Inc. All rights reserved.
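The markup ratio and coefficient of variation used in the abstract are simple to compute; the dollar figures below are invented for illustration and are not the study's data.

```python
# Markup ratio (MR) as defined in the abstract: total charges divided by
# Medicare-allowable costs; spread summarised by the coefficient of
# variation (CoV = standard deviation / mean).
charges = [35_000.0, 48_000.0, 21_000.0]     # illustrative billed charges
allowable = [10_000.0, 12_000.0, 7_000.0]    # illustrative allowable costs

mrs = [c / a for c, a in zip(charges, allowable)]
mean = sum(mrs) / len(mrs)
std = (sum((m - mean) ** 2 for m in mrs) / len(mrs)) ** 0.5
cov = std / mean
```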

  11. Using the Mars Student Imaging Project (MSIP) in a University Classroom

    Science.gov (United States)

    Manning, Heidi L.; Manzoni, Luiz; Berquo, Thelma; Gealy, Mark

    2014-11-01

    Many students enroll in an introductory astronomy course as a means to fulfill the science requirement for their general education classes. A common goal of these general education courses is to teach students about the nature and process of science. One of the best ways to learn this is by actually doing authentic scientific research. The Mars Student Imaging Project (MSIP) is a hands-on activity in which students conduct original research using images of the Martian surface, and obtain a new image with the THEMIS camera. As a hands-on activity, the MSIP is an excellent addition to an introductory astronomy course as it allows students at all levels of scientific knowledge and ability to participate in making new discoveries and to make meaningful contributions to our understanding of Mars. In addition, students develop a much better understanding of the nature of science than they could through reading any textbook. The MSIP has been used in the Introductory Astronomy: The Solar System course at Concordia College for two years. The response from the students and faculty at our institution who have participated in MSIP has been very positive. We will provide helpful tips on how to adapt this activity to the university environment including scheduling the various steps of the activity and creating university-level scaffolding activities.

  12. Quantitative 3-Dimensional Imaging of Murine Neointimal and Atherosclerotic Lesions by Optical Projection Tomography

    Science.gov (United States)

    Kirkby, Nicholas S.; Low, Lucinda; Seckl, Jonathan R.; Walker, Brian R.; Webb, David J.; Hadoke, Patrick W. F.

    2011-01-01

    Objective Traditional methods for the analysis of vascular lesion formation are labour-intensive to perform, restricting study to ‘snapshots’ within each vessel. This study was undertaken to determine the suitability of optical projection tomographic (OPT) imaging for the 3-dimensional representation and quantification of intimal lesions in mouse arteries. Methods and Results Vascular injury was induced by wire-insertion or ligation of the mouse femoral artery or administration of an atherogenic diet to apoE-deficient mice. Lesion formation was examined by OPT imaging of autofluorescent emission. Lesions could be clearly identified and distinguished from the underlying vascular wall. Planimetric measurements of lesion area correlated well with those made from histological sections subsequently produced from the same vessels (wire-injury: R2 = 0.92; ligation-injury: R2 = 0.89; atherosclerosis: R2 = 0.85), confirming both the accuracy of this methodology and its non-destructive nature. It was also possible to record volumetric measurements of lesion and lumen and these were highly reproducible between scans (coefficient of variation = 5.36%, 11.39% and 4.79% for wire- and ligation-injury and atherosclerosis, respectively). Conclusions These data demonstrate the eminent suitability of OPT for imaging of atherosclerotic and neointimal lesion formation, providing a much needed means for the routine 3-dimensional analysis of vascular morphology in studies of this type. PMID:21379578

  13. Three-dimensional surface measurement based on the projected defocused pattern technique using imaging fiber optics

    Science.gov (United States)

    Parra Escamilla, Geliztle A.; Kobayashi, Fumio; Otani, Yukitoshi

    2017-05-01

    We present a three-dimensional surface measurement system that uses an imaging fiber endoscope; the measurement is based on the focus technique in a uniaxial configuration. The surface height variation of the sample is retrieved from the contrast modulation change of a fringe pattern projected onto the sample. The technique exploits the defocus change of the fringe pattern caused by the height variation of the sample, and the height reconstruction is obtained by a Gaussian fitting process. A baseline signal procedure was implemented to remove back-reflected light coming from the two fiber surfaces (inlet and outlet), and a Fourier transform filter was used to remove the pixelated appearance of the images. The depth range of the system is 1.1 mm and the lateral range is 2 mm by 2 mm. The novelty of the implementation is that the system uses the same imaging fiber for both illumination and measurement, and that the measurement can be transported to a confined space, giving it potential application in medical or industrial endoscope systems. We demonstrate the technique by showing the surface profile of a measured object.

  14. Comparison of parabolic filtration methods for 3D filtered back projection in pulsed EPR imaging.

    Science.gov (United States)

    Qiao, Zhiwei; Redler, Gage; Epel, Boris; Halpern, Howard J

    2014-11-01

    Pulsed electron paramagnetic resonance imaging (pulse EPRI) is a robust method for noninvasively measuring local oxygen concentrations in vivo. For 3D tomographic EPRI, the most commonly used reconstruction algorithm is filtered back projection (FBP), in which the parabolic filtration process strongly influences image quality. In this work, we designed and compared 7 parabolic filtration methods to reconstruct both simulated and real phantoms. To evaluate these methods, we designed 3 error criteria and 1 spatial resolution criterion. It was determined that the 2-point derivative filtration method and the two-ramp-filter method have unavoidable negative effects, resulting in diminished spatial resolution and increased artifacts, respectively. For the noiseless phantom, the rectangular-window and sinc-window parabolic filtration methods were found to be optimal, providing high spatial resolution and small errors. In the presence of noise, the 3-point derivative method and the Hamming-window parabolic filtration method resulted in the best compromise between low image noise and high spatial resolution. Since the 3-point derivative method is faster than the Hamming-window parabolic filtration method, we conclude that the 3-point derivative method is optimal for 3D FBP. Copyright © 2014. Published by Elsevier Inc.
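A generic sketch of the core filtration step, assuming a frequency-domain parabolic (|ν|²) filter combined with a Hamming window; this illustrates the filter/window trade-off the abstract compares, but it is not any of the paper's seven specific methods.

```python
import numpy as np

# One projection profile (toy data) to be filtered before back projection.
n = 256
proj = np.zeros(n)
proj[96:160] = 1.0

# For 3D FBP the filter is parabolic, |nu|^2 (the 3D analogue of the 2D
# ramp filter). A window (rectangular, sinc, Hamming, ...) tapers the
# high frequencies, trading spatial resolution against noise.
freqs = np.fft.fftfreq(n)
parabolic = freqs ** 2
hamming = 0.54 + 0.46 * np.cos(2 * np.pi * freqs)   # 1 at DC, 0.08 at Nyquist
filtered = np.real(np.fft.ifft(np.fft.fft(proj) * parabolic * hamming))
```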

  15. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    Science.gov (United States)

    Unni, V S; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain is used as a method to get the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. Discrete wavelet transform and dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding lies in the fact that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the same fusion scheme without the projected Landweber (PL) recovery and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.
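A bare-bones sketch of projected Landweber recovery under an assumed sparsity projection (hard thresholding, i.e. iterative hard thresholding), standing in for the paper's smoothed, wavelet-domain, block-based version; the signal, measurement matrix, and sizes are all made up.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy k-sparse signal and CS measurement y = Phi x.
n, m, k = 128, 64, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, 2)            # normalise spectral norm to 1
y = Phi @ x_true

x = np.zeros(n)
for _ in range(300):
    x = x + Phi.T @ (y - Phi @ x)        # Landweber gradient step
    keep = np.argsort(np.abs(x))[-k:]    # projection: keep k largest entries
    proj = np.zeros(n)
    proj[keep] = x[keep]
    x = proj
```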

  16. Nondestructive Imaging of Internal Structures of Frog (Xenopus laevis) Embryos by Shadow-Projection X-Ray Microtomography

    Science.gov (United States)

    Aoki, Sadao; Yoneda, Ikuo; Nagai, Takeharu; Ueno, Naoto; Murakami, Kazuo

    1994-04-01

    Nondestructive high-resolution imaging of frog ( Xenopus laevis) embryos has been developed by X-ray microtomography. Shadow-projection X-ray microtomography with a brilliant fine focus laboratory X-ray source could image fine structures of Xenopus embryos which were embedded in paraffin wax. The imaging system enabled us to not only distinguish endoderm from ectoderm at the gastrula stage, but also to obtain a cross-section view of the tail bud embryo showing muscle, notochord and neural tube without staining. Furthermore, the distribution of myosin was also imaged in combination with whole-mount immunohistochemistry.

  17. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-10-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short project-based engineering course with MATLAB applications, taken by Multimedia Engineering Bachelor's degree students. The principal goal of all course lectures and hands-on laboratory activities was for the students to not only acquire image-specific technical skills but also a general knowledge of data analysis so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used for image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new. This approach has been used in STEM learning in recent decades. But there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course. 
The results described in this paper show that those students who took part

  19. Femtosecond few- to single-electron point-projection microscopy for nanoscale dynamic imaging

    Science.gov (United States)

    Bainbridge, A. R.; Barlow Myers, C. W.; Bryan, W. A.

    2016-01-01

    Femtosecond electron microscopy produces real-space images of matter in a series of ultrafast snapshots. Pulses of electrons self-disperse under space-charge broadening, so without compression, the ideal operation mode is a single electron per pulse. Here, we demonstrate femtosecond single-electron point projection microscopy (fs-ePPM) in a laser-pump fs-e-probe configuration. The electrons have an energy of only 150 eV and take tens of picoseconds to propagate to the object under study. Nonetheless, we achieve a temporal resolution with a standard deviation of 114 fs (equivalent to a full-width at half-maximum of 269 ± 40 fs) combined with a spatial resolution of 100 nm, applied to a localized region of charge at the apex of a nanoscale metal tip induced by 30 fs 800 nm laser pulses at 50 kHz. These observations demonstrate real-space imaging of reversible processes, such as tracking charge distributions, is feasible whilst maintaining femtosecond resolution. Our findings could find application as a characterization method, which, depending on geometry, could resolve tens of femtoseconds and tens of nanometres. Dynamically imaging electric and magnetic fields and charge distributions on sub-micron length scales opens new avenues of ultrafast dynamics. Furthermore, through the use of active compression, such pulses are an ideal seed for few-femtosecond to attosecond imaging applications which will access sub-optical cycle processes in nanoplasmonics. PMID:27158637

  20. Usefulness of 3-dimensional stereotactic surface projection FDG PET images for the diagnosis of dementia

    Science.gov (United States)

    Kim, Jahae; Cho, Sang-Geon; Song, Minchul; Kang, Sae-Ryung; Kwon, Seong Young; Choi, Kang-Ho; Choi, Seong-Min; Kim, Byeong-Chae; Song, Ho-Chun

    2016-01-01

    To compare the diagnostic performance and confidence of a standard visual reading and of combined 3-dimensional stereotactic surface projection (3D-SSP) results to discriminate between Alzheimer disease (AD)/mild cognitive impairment (MCI), dementia with Lewy bodies (DLB), and frontotemporal dementia (FTD). [18F]fluorodeoxyglucose (FDG) PET brain images were obtained from 120 patients (64 AD/MCI, 38 DLB, and 18 FTD) whose diagnoses were clinically confirmed over 2 years of follow-up. Three nuclear medicine physicians performed the diagnosis and rated diagnostic confidence twice; once by standard visual methods, and once with the addition of 3D-SSP. Diagnostic performance and confidence were compared between the 2 methods. 3D-SSP showed higher sensitivity, specificity, accuracy, and positive and negative predictive values to discriminate different types of dementia compared with the visual method alone, except for AD/MCI specificity and FTD sensitivity. Correction of misdiagnosis after adding 3D-SSP images was greatest for AD/MCI (56%), followed by DLB (13%) and FTD (11%). Diagnostic confidence also increased in DLB (visual: 3.2; 3D-SSP: 4.1; P < 0.001), followed by AD/MCI (visual: 3.1; 3D-SSP: 3.8; P = 0.002) and FTD (visual: 3.5; 3D-SSP: 4.2; P = 0.022). Overall, 154/360 (43%) cases had a corrected misdiagnosis or improved diagnostic confidence for the correct diagnosis. The addition of 3D-SSP images to visual analysis helped to discriminate different types of dementia in FDG PET scans, by correcting misdiagnoses and enhancing diagnostic confidence in the correct diagnosis. Improvement of diagnostic accuracy and confidence by 3D-SSP images might help to determine the cause of dementia and appropriate treatment. PMID:27930593

  1. Femtosecond few- to single-electron point-projection microscopy for nanoscale dynamic imaging

    Directory of Open Access Journals (Sweden)

    A. R. Bainbridge

    2016-03-01

    Femtosecond electron microscopy produces real-space images of matter in a series of ultrafast snapshots. Pulses of electrons self-disperse under space-charge broadening, so without compression the ideal operation mode is a single electron per pulse. Here, we demonstrate femtosecond single-electron point projection microscopy (fs-ePPM) in a laser-pump fs-e-probe configuration. The electrons have an energy of only 150 eV and take tens of picoseconds to propagate to the object under study. Nonetheless, we achieve a temporal resolution with a standard deviation of 114 fs (equivalent to a full width at half maximum of 269 ± 40 fs) combined with a spatial resolution of 100 nm, applied to a localized region of charge at the apex of a nanoscale metal tip induced by 30 fs 800 nm laser pulses at 50 kHz. These observations demonstrate that real-space imaging of reversible processes, such as tracking charge distributions, is feasible whilst maintaining femtosecond resolution. Our approach could find application as a characterization method which, depending on geometry, could resolve tens of femtoseconds and tens of nanometres. Dynamically imaging electric and magnetic fields and charge distributions on sub-micron length scales opens new avenues of ultrafast dynamics. Furthermore, through the use of active compression, such pulses are an ideal seed for few-femtosecond to attosecond imaging applications which will access sub-optical-cycle processes in nanoplasmonics.

  2. An Attempt to Construct a Database of Photographic Data of Radiolarian Fossils with the Hypertext Markup Language

    OpenAIRE

    磯貝, 芳徳; 水谷, 伸治郎; Yoshinori, Isogai; Shinjiro, MIZUTANI

    1998-01-01

    A collection of scanning electron micrographs of radiolarian fossils was compiled into a database using the Hypertext Markup Language. The database currently contains about one thousand photographs of radiolarian fossils, which can be searched from a variety of viewpoints such as fossil name, geological age, and excavation locality. The construction of this database demonstrates that the Hypertext Markup Language is effective when ordinary researchers, without special skills in computing or databases, want to build their own databases by themselves. Moreover, a notable feature of a database built with the Hypertext Markup Language is that anyone can use it via the Internet. We describe the process of constructing the database and report its current status, and then discuss the ideas behind the database and the remaining problems.

  3. Do state minimum markup/price laws work? Evidence from retail scanner data and TUS-CPS.

    Science.gov (United States)

    Huang, Jidong; Chriqui, Jamie F; DeLong, Hillary; Mirza, Maryam; Diaz, Megan C; Chaloupka, Frank J

    2016-10-01

    Minimum markup/price laws (MPLs) have been proposed as an alternative non-tax pricing strategy to reduce tobacco use and access. However, the empirical evidence on the effectiveness of MPLs in increasing cigarette prices is very limited. This study aims to fill this critical gap by examining the association between MPLs and cigarette prices. State MPLs were compiled from primary legal research databases and were linked to cigarette prices constructed from the Nielsen retail scanner data and the self-reported cigarette prices from the Tobacco Use Supplement to the Current Population Survey. Multivariate regression analyses were conducted to examine the association between MPLs, the major components of MPLs, and cigarette prices. The presence of MPLs was associated with higher cigarette prices. In addition, cigarette prices were higher, above and beyond the higher prices resulting from MPLs, in states that prohibit below-cost combination sales; do not allow any distributing party to use trade discounts to reduce the base cost of cigarettes; prohibit distributing parties from meeting the price of a competitor; and prohibit distributing below-cost coupons to the consumer. Moreover, states with total markup rates >24% had significantly higher cigarette prices. MPLs are an effective way to increase cigarette prices. The impact of MPLs can be further strengthened by imposing greater markup rates and by prohibiting coupon distribution, competitor price matching, and the use of below-cost combination sales and trade discounts. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  4. Chest computed tomography using iterative reconstruction vs filtered back projection (Part 1): evaluation of image noise reduction in 32 patients

    Energy Technology Data Exchange (ETDEWEB)

    Pontana, Francois; Pagniez, Julien; Faivre, Jean-Baptiste; Remy, Jacques [Univ. Lille Nord de France, Department of Thoracic Imaging Hospital Calmette (EA 2694), Lille (France); Flohr, Thomas [Siemens HealthCare, Computed Tomography Division, Forchheim (Germany); Duhamel, Alain [Univ. Lille Nord de France, Department of Medical Statistics, Lille (France); Remy-Jardin, Martine [Univ. Lille Nord de France, Department of Thoracic Imaging Hospital Calmette (EA 2694), Lille (France); Hospital Calmette, Department of Thoracic Imaging, Lille cedex (France)

    2011-03-15

    To assess noise reduction achievable with an iterative reconstruction algorithm. 32 consecutive chest CT angiograms were reconstructed with regular filtered back projection (FBP) (Group 1) and an iterative reconstruction technique (IRIS) with 3 (Group 2a) and 5 (Group 2b) iterations. Objective image noise was significantly reduced in Group 2a and Group 2b compared with FBP (p < 0.0001). There was a significant reduction in the level of subjective image noise in Group 2a compared with Group 1 images (p < 0.003), further reinforced on Group 2b images (Group 2b vs Group 1; p < 0.0001) (Group 2b vs Group 2a; p = 0.0006). The overall image quality scores significantly improved on Group 2a images compared with Group 1 images (p = 0.0081) and on Group 2b images compared with Group 2a images (p < 0.0001). Comparative analysis of individual CT features of mild lung infiltration showed improved conspicuity of ground glass attenuation (p < 0.0001), ill-defined micronodules (p = 0.0351) and emphysematous lesions (p < 0.0001) on Group 2a images, further improved on Group 2b images for ground glass attenuation (p < 0.0001), and emphysematous lesions (p = 0.0087). Compared with regular FBP, iterative reconstructions enable significant reduction of image noise without loss of diagnostic information, thus having the potential to decrease radiation dose during chest CT examinations. (orig.)

  5. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics, and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the eXtensible Markup Language (XML) that facilitates its integration into current information systems or coding software, taking different languages and versions into account. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.

  6. XML Based Virtual Instrument Markup Language--VIML%基于XML的虚拟仪器标记语言--VIML

    Institute of Scientific and Technical Information of China (English)

    钟莹; 陈祥献

    2002-01-01

    This paper discusses in detail how to define, on the basis of XML, a domain-specific markup language for describing virtual instrument systems: VIML (Virtual Instrument Markup Language). By analysing the characteristics and operating modes of virtual instrument systems, element definitions for virtual instrument components, ports, and connection relationships are given on top of the meta-language XML, standardizing the way virtual instrument systems are described.
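    As a rough illustration of the kind of element definitions the abstract describes (components, ports, and connection relationships), the following Python sketch assembles a small instrument description with `xml.etree.ElementTree`. The element and attribute names are hypothetical, invented for illustration; they are not taken from the VIML specification.

```python
import xml.etree.ElementTree as ET

# Hypothetical VIML-style elements: a signal generator wired to an
# oscilloscope channel. Names are illustrative, not the paper's schema.
system = ET.Element("vimlSystem")
gen = ET.SubElement(system, "component", id="gen1", type="signal-generator")
ET.SubElement(gen, "port", name="out", direction="output")
scope = ET.SubElement(system, "component", id="scope1", type="oscilloscope")
ET.SubElement(scope, "port", name="ch1", direction="input")
# "from" is a Python keyword, so pass the attributes as a dict
ET.SubElement(system, "connection", {"from": "gen1.out", "to": "scope1.ch1"})

viml_text = ET.tostring(system, encoding="unicode")
print(viml_text)
```

    Because the description is plain XML, standard tools (validation, XSLT, DOM APIs) can process the instrument topology without any GIS- or instrument-specific parser.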

  7. UOML: An Unstructured Operation Markup Language%UOML:一种非结构化操作标记语言

    Institute of Scientific and Technical Information of China (English)

    王东临; 姜海峰; 张常有

    2007-01-01

    Written documents are one of the main forms of unstructured information. This paper proposes an interoperability standard for unstructured documents and completes its specification language, UOML (Unstructured Operation Markup Language), defining a detailed specification of the operation interfaces. The operation interfaces are expressed in XML, making them platform independent.

  8. Design of the PET–MR system for head imaging of the DREAM Project

    Energy Technology Data Exchange (ETDEWEB)

    González, A.J., E-mail: agonzalez@i3m.upv.es [Instituto de Instrumentación para Imagen Molecular (I3M), Centro Mixto UPV CSIC CIEMAT, Camino de Vera s/n, 46022, Valencia (Spain); Conde, P.; Hernández, L.; Herrero, V.; Moliner, L.; Monzó, J.M.; Orero, A.; Peiró, A.; Rodríguez-Álvarez, M.J.; Ros, A.; Sánchez, F.; Soriano, A.; Vidal, L.F.; Benlloch, J.M. [Instituto de Instrumentación para Imagen Molecular (I3M), Centro Mixto UPV CSIC CIEMAT, Camino de Vera s/n, 46022, Valencia (Spain)

    2013-02-21

    In this paper we describe the overall design of a PET–MR system for head imaging within the framework of the DREAM Project, as well as the first detector module tests. The PET system design consists of 4 rings of 16 detector modules each, and it is expected to be integrated in a head-dedicated radio frequency coil of an MR scanner. The PET modules are based on monolithic LYSO crystals coupled by means of optical devices to an array of 256 Silicon Photomultipliers. These crystals preserve the scintillation light distribution and thus allow the exact photon impact position to be recovered through proper characterization of that distribution. Every module contains 4 Application Specific Integrated Circuits (ASICs) which return detailed information on several statistical moments of the light distribution. The preliminary tests carried out on this design, controlled by means of the ASICs, have shown promising results regarding the suitability of hybrid PET–MR systems.

  9. Effects of image orientation and ground control points distribution on unmanned aerial vehicle photogrammetry projects on a road cut slope

    Science.gov (United States)

    Carvajal-Ramírez, Fernando; Agüera-Vega, Francisco; Martínez-Carricondo, Patricio J.

    2016-07-01

    The morphology of road cut slopes, such as the length and height of the slope, is one of the most prevalent causes of landslides and terrain stability problems. Digital elevation models (DEMs) and orthoimages are used for land management purposes. Two flights with different orientations with respect to the target surface were planned, and four photogrammetric projects were carried out from these flights to study the effects of image orientation. Orthogonal images oriented to the cut slope with only sidelaps were compared to the classical vertical orientation with sidelapping, endlapping, and both types of overlapping simultaneously. The DEM and orthoimages obtained from the orthogonal project showed smaller errors than those obtained from the other three photogrammetric projects, and the orthogonal project was much easier to manage. One additional flight and six photogrammetric projects were used to establish an objective criterion for locating the three ground control points used for georeferencing and rectification of the DEMs and orthoimages. All possible sources of error were evaluated in the DEMs and orthoimages.

  10. The IMAGE project: methodological issues for the molecular genetic analysis of ADHD

    Directory of Open Access Journals (Sweden)

    Faraone Stephen V

    2006-08-01

    The genetic mechanisms involved in attention deficit hyperactivity disorder (ADHD) are being studied with considerable success by several centres worldwide. These studies confirm prior hypotheses about the role of genetic variation within genes involved in the regulation of dopamine, norepinephrine and serotonin neurotransmission in susceptibility to ADHD. Despite the importance of these findings, uncertainties remain due to the very small effect sizes that are observed. We discuss possible reasons why the true strength of the associations may have been underestimated in research to date, considering the effects of linkage disequilibrium, allelic heterogeneity, population differences and gene-by-environment interactions. With the identification of genes associated with ADHD, the goal of ADHD genetics is now shifting from gene discovery towards gene functionality – the study of intermediate phenotypes ('endophenotypes'). We discuss methodological issues relating to quantitative genetic data from twin and family studies on candidate endophenotypes and how such data can inform attempts to link molecular genetic data to cognitive, affective and motivational processes in ADHD. The International Multi-centre ADHD Gene (IMAGE) project exemplifies current collaborative research efforts on the genetics of ADHD. This European multi-site project is well placed to take advantage of the resources that are emerging following the sequencing of the human genome and the development of international resources for whole genome association analysis. As a result of IMAGE and other molecular genetic investigations of ADHD, we envisage a rapid increase in the number of identified genetic variants and the promise of identifying novel gene systems that we are not currently investigating, opening further doors in the study of gene functionality.

  11. Gabor filter based optical image recognition using Fractional Power Polynomial model based common discriminant locality preserving projection with kernels

    Science.gov (United States)

    Li, Jun-Bao

    2012-09-01

    This paper presents Gabor-filter-based optical image recognition using a Fractional Power Polynomial model based Common Kernel Discriminant Locality Preserving Projection. The method addresses the nonlinear classification problem encountered in optical image recognition owing to the complex illumination conditions of practical applications, such as face recognition. The first step applies Gabor filters to extract desirable textural features, characterized by spatial frequency, spatial locality, and orientation selectivity, to cope with variations in illumination. In the second step we propose Class-wise Locality Preserving Projection, which creates the nearest-neighbor graph guided by the class labels, for textural feature reduction. Finally, we present the Common Kernel Discriminant Vector with a Fractional Power Polynomial model to reduce the dimensionality of the textural features for recognition. For performance evaluation, we test the proposed method on a challenging optical image recognition problem: face recognition.

  12. 标记系统及学术文本处理的未来(一)%Markup Systems and the Future of Scholarly Text Processing

    Institute of Scientific and Technical Information of China (English)

    詹姆斯·库姆斯; 艾兰·瑞尼尔; 史蒂芬·德罗斯; 王晓光(译); 李梦琳(译); 刘晶(译)

    2016-01-01

    Markup practices can affect the move toward systems that support scholars in the process of thinking and writing. Whereas procedural and presentational markup systems retard that movement, descriptive markup systems accelerate the pace by simplifying mechanical tasks and allowing the authors to focus their attention on the content.

  13. Theoretical background of back-projection imaging and its relation to time-reversal and inverse solutions

    Science.gov (United States)

    Fukahata, Yukitoshi; Yagi, Yuji; Rivera, Luis

    2013-04-01

    The back-projection (BP) method has become a popular tool for imaging the rupture process of large earthquakes since the success of Ishii et al. (2005), although it has not been clear what the BP image represents physically. We clarify the theoretical background of back-projection imaging and relate it to classical inverse solutions via hybrid back-projection (HBP) imaging (Yagi et al., 2012). In the HBP method, which is mathematically almost equivalent to time-reversal imaging, cross-correlations of observed waveforms with the corresponding Green's functions are calculated. The key condition for BP to work well is that the Green's function be sufficiently close to the delta function after stacking. We found that the BP image then represents the slip motion on the fault and is approximately equal to the least squares solution. In HBP, instead of the Green's function in BP, the stacked auto-correlation function of the Green's function must be similar to the delta function to obtain a fine image. Because the auto-correlation function is usually closer to the delta function than the original function, we can expect HBP to work better than BP if we can reasonably assume the Green's function. With the additional condition that the stacked cross-correlation function of the Green's functions for different source locations is small enough, the HBP image is approximately equal to the least squares solution. If these assumptions are not satisfied, however, the HBP image corresponds to a damped least squares solution with an extremely large damping parameter, which is clearly inferior to usual inverse solutions. From the viewpoint of inverse theory, an elaborate stacking technique (such as an N-th root stack) to obtain a finer resolution image inevitably leads to larger estimation errors.
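    The stacking at the heart of BP can be illustrated with a minimal numerical sketch: for each candidate source location, station waveforms are shifted by predicted travel times and summed, so energy radiated from that location stacks coherently. This is a toy delay-and-stack implementation under simplified assumptions (known travel times, no amplitude or polarity corrections), not the authors' code.

```python
import numpy as np

def back_project(waveforms, dt, travel_times):
    """Delay-and-stack back-projection (toy sketch).

    waveforms:    (n_stations, n_samples) observed traces
    dt:           sample interval in seconds
    travel_times: (n_grid, n_stations) predicted travel times in seconds
    Returns a (n_grid, n_samples) image of stacked amplitude vs. origin time.
    """
    n_sta, n_samp = waveforms.shape
    n_grid = travel_times.shape[0]
    image = np.zeros((n_grid, n_samp))
    for g in range(n_grid):
        for s in range(n_sta):
            shift = int(round(travel_times[g, s] / dt))
            if shift >= n_samp:
                continue
            # align trace s so energy from grid point g stacks coherently
            image[g, :n_samp - shift] += waveforms[s, shift:]
    return image / n_sta
```

    With a spike radiated from a single grid point, the stack peaks at that point at the origin time; a nonlinear variant such as an N-th root stack would sharpen the peak, mirroring the resolution-versus-error trade-off the abstract notes.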

  14. Imaging and characterizing exo-Earths at 10 microns - The TIKI project

    Science.gov (United States)

    Marchis, F.; Christian, M.; Bradley, C.; Blain, C.; Lardiere, O.; Melis, C.; Packham, C.; Skemer, A.

    2016-12-01

    The next major breakthrough in the exoplanet field will be the discovery and characterization of an Earth-like planet in the solar neighborhood. While there are many promising ways to achieve such a detection, imaging at a wavelength of 10-micron offers a unique, fast and cost-effective approach to detect and characterize planets around nearby stars with physical properties similar to Earth (same size, incident stellar radiation, low clouds and average black body temperature of 288K). Instead of detecting reflected light from such planets (typical of shorter wavelength ground/space missions), the 10-micron band is sensitive to the thermal emission of planets, being very close to the peak emission wavelength of 288K black bodies. In less than 200h worth of observing, a 10-micron imager could detect the thermal emission of an Earth-like planet around Alpha Centauri A and B on current 8-m telescopes. The same instrument on a 30-m telescope, like the TMT, will be quite competitive, allowing such discovery and characterization in an hour. We will present the design of such instrument for the 8m Gemini South telescope, as well as simulations of observations and the schedule of our project which aims at a first light in 2019.

  15. Femtosecond single- to few-electron point-projection microscopy for nanoscale dynamic imaging

    CERN Document Server

    Bainbridge, A R; Bryan, W A

    2015-01-01

    Femtosecond electron microscopy produces real-space images of matter on micrometre to nanometre length scales in a series of ultrafast snapshots, tracking the dynamic evolution of charge distributions. Given that femtosecond pulses of electrons self-disperse under space-charge broadening, the ideal operation mode (without active compression) is a single electron per pulse. Here, we demonstrate for the first time femtosecond single-electron point projection microscopy (fs-ePPM) in a laser-pump fs-e-probe configuration. The electron pulses in the present work have an energy of only 150 eV and take tens of picoseconds to propagate to the object under study. Nonetheless, we achieve a temporal resolution with a standard deviation of 120 fs, combined with a spatial resolution below a micrometre. We image the evolution of a localized region of charge at the apex of a nanoscale metal tip induced by 30 fs 800 nm laser pulses at 50 kHz. The rapidity of the strong-field response of the metal nanotip facilitates the char...

  16. Design of lenses to project the image of a pupil in optical testing interferometers.

    Science.gov (United States)

    Malacara, Z; Malacara, D

    1995-02-01

    When an optical surface or lens in an interferometer (Twyman-Green or Fizeau interferometer) is tested, the wave front at the pupil of the element being tested does not have the same shape as at the observation plane, because this shape changes along its propagation trajectory if the wave front is not flat or spherical. An imaging lens must then be used, as reported many times in the literature, to project the image of the pupil of the system being tested over the observation plane. This lens is especially necessary if the deviation of the wave front from sphericity is large, as in the case of testing paraboloidal or hyperboloidal surfaces. We show that the wave front at both positions does not need to have the same shape. The only condition is that the interferograms at both places be identical, which is a different condition. This leads to some considerations that should be taken into account in the optical design of such lenses.

  17. A Kernel—based Nonlinear Subspace Projection Method for Dimensionality Reduction of Hyperspectral Image Data

    Institute of Scientific and Technical Information of China (English)

    GUYanfeng; ZHANGYe; QUANTaifan

    2003-01-01

    A challenging problem in using hyperspectral data is to eliminate redundancy and preserve useful spectral information for applications. In this paper, a kernel-based nonlinear subspace projection (KNSP) method is proposed for feature extraction and dimensionality reduction in hyperspectral images. The proposed method includes three key steps: subspace partition of the hyperspectral data, feature extraction using kernel-based principal component analysis (KPCA), and feature selection based on class separability in the subspaces. According to the strong correlation between neighboring bands, the whole data space is partitioned into the requested subspaces. In each subspace, the KPCA method is used to effectively extract spectral features and eliminate redundancies. A criterion function based on class discrimination and separability is used for the transformed feature selection. To verify its effectiveness, the proposed method is compared with classical principal component analysis (PCA) and segmented principal component transformation (SPCT). A hyperspectral image classification is performed on AVIRIS data, which have 224 spectral bands. Experimental results show that KNSP is very effective for feature extraction and dimensionality reduction of hyperspectral data and provides significant improvement over classical PCA and the current SPCT technique.
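    The KPCA step at the core of such a method can be sketched in a few lines of NumPy: build a kernel matrix over the pixels of one spectral subspace, centre it in feature space, and project onto the leading eigenvectors. This is generic textbook KPCA with a polynomial kernel chosen as an assumption; the paper's exact kernel and subspace partitioning are not reproduced.

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=2):
    """Textbook kernel PCA with a polynomial kernel (illustrative sketch).

    X: (n_samples, n_bands) pixels from one spectral subspace.
    Returns the projection of X onto the leading kernel principal components.
    """
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree                 # polynomial kernel matrix
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one    # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # scale eigenvectors so feature-space axes have unit norm
    return Kc @ (vecs / np.sqrt(np.maximum(vals, 1e-12)))
```

    Running this per band subset, then ranking the resulting features by a class-separability criterion, mirrors the three-step pipeline the abstract describes.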

  18. 2D magnetotelluric inversion using reflection seismic images as constraints and application in the COSC project

    Science.gov (United States)

    Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.

    2017-04-01

    We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.

  19. Optimizing Low Light Level Imaging Techniques and Sensor Design Parameters using CCD Digital Cameras for Potential NASA Earth Science Research aboard a Small Satellite or ISS Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project made use of two computational photography techniques, high dynamic range (HDR) imagery formulation and bilateral filters to enable novel imaging...

  20. The Salton Seismic Imaging Project (SSIP): Active Rift Processes in the Brawley Seismic Zone

    Science.gov (United States)

    Han, L.; Hole, J. A.; Stock, J. M.; Fuis, G. S.; Rymer, M. J.; Driscoll, N. W.; Kent, G.; Harding, A. J.; Gonzalez-Fernandez, A.; Lazaro-Mancilla, O.

    2011-12-01

    The Salton Seismic Imaging Project (SSIP), funded by NSF and USGS, acquired seismic data in and across the Salton Trough in southern California and northern Mexico in March 2011. The project addresses both rifting processes at the northern end of the Gulf of California extensional province and earthquake hazards at the southern end of the San Andreas Fault system. Seven lines of onshore refraction and low-fold reflection data were acquired in the Coachella, Imperial, and Mexicali Valleys; two lines and a grid of airgun and OBS data were acquired in the Salton Sea; and onshore-offshore data were recorded. Almost 2800 land seismometers and 50 OBS's were used in almost 5000 deployments at almost 4300 sites, at spacing as dense as 100 m. These instruments received seismic signals from 126 explosive shots up to 1400 kg and over 2300 airgun shots. In the central Salton Trough, North American lithosphere appears to have been rifted completely apart. Based primarily on a 1979 seismic refraction project, the 20-22 km thick crust is apparently composed entirely of new crust added by magmatism from below and sedimentation from above. Active rifting of this new crust is manifested by shallow (geothermal energy production. This presentation is focused on an onshore-offshore line of densely sampled refraction and low-fold reflection data that crosses the Brawley Seismic Zone and Salton Buttes in the direction of plate motion. At the time of abstract submission, data analysis was very preliminary, consisting of first-arrival tomography of the onshore half of the line for upper crustal seismic velocity. Crystalline basement (>5 km/s), comprised of late-Pliocene to Quaternary sediment metamorphosed by the high heat flow, occurs at ~2 km depth beneath the Salton Buttes and geothermal field and ~4 km depth south of the BSZ. Preliminary results suggest that the velocity of basement is lower in the BSZ than to the south, which may result from fracturing. Basement velocity appears to be

  1. JPEG images of Seismic data collected by the U.S. Geological Survey as part of the Geologic Framework Studies project offshore of the Grand Strand, South Carolina

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — JPEG images of each seismic line were generated in order to incorporate images of the seismic data into Geographic Information System (GIS) projects and data...

  2. Image reconstruction in phase-contrast tomography exploiting the second-order statistical properties of the projection data.

    Science.gov (United States)

    Chou, Cheng-Ying; Huang, Pin-Yu

    2011-11-21

    X-ray phase-contrast tomography (PCT) methods seek to quantitatively reconstruct separate images that depict an object's absorption and refractive contrasts. Most PCT reconstruction algorithms operate by explicitly or implicitly decoupling the projected absorption and phase properties at each tomographic view angle by use of a phase-retrieval formula. However, the zero-frequency singularity present in Fourier-based phase-retrieval formulas leads to strong noise amplification in the projection estimate and in the subsequent refractive image obtained using conventional algorithms like filtered backprojection (FBP). Tomographic reconstruction by use of statistical methods can account for the noise model and a priori information, and can thereby produce images of better quality than conventional filtered backprojection algorithms. In this work, we demonstrate that an iterative image reconstruction method exploiting the second-order statistical properties of the projection data can mitigate noise amplification in PCT. The autocovariance function of the reconstructed refractive images was empirically computed and shows smaller and shorter-range noise correlations than those obtained using the FBP and unweighted penalized least-squares methods. Concepts from statistical decision theory are applied to demonstrate that the statistical properties of images produced by our method can improve signal detectability.

  3. The feature and analysis of GML--Geography Markup Language%GML-地理标记语言特征与分析

    Institute of Scientific and Technical Information of China (English)

    梁明; 鲍艳; 黄朝华

    2002-01-01

    Addressing the difficulty of data exchange and sharing in the GIS industry, this paper discusses how the emergence of XML (Extensible Markup Language), the new Web industry standard, has brought hope to GIS, then elaborates and analyses the characteristics of GML (Geography Markup Language) in detail, and discusses how GML has gradually become a widely accepted and easily understood exchange format for spatial information.

  4. Fast polyenergetic forward projection for image formation using OpenCL on a heterogeneous parallel computing platform.

    Science.gov (United States)

    Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa

    2012-11-01

    Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications, but their quality and computation speed are not optimal for real-time comparison with radiography acquired with an x-ray source of different energies. In this paper, the authors performed polyenergetic forward projections using the Open Computing Language (OpenCL) in a parallel computing ecosystem consisting of a CPU and a general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST-published mass attenuation coefficients (μ∕ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of the sites of interest are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for each E(n), and the x-ray fluence is the weighted sum over all energy bins of the exponential of the line integral, with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three regions (air, gray∕white matter, and bone) for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of a CPU and a GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored the task overlapping strategy and the sequential method for generating the first and subsequent DRRs. A dispatcher was designed to drive
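    The energy-weighted fluence computation described above reduces to a few lines once a ray tracer has produced per-voxel intersection lengths: one line integral per energy bin, an exponential attenuation term, a spectrum-weighted sum, and Poisson sampling. The sketch below assumes attenuation values have already been assigned per voxel and energy via the tissue lookup the abstract describes; it is an illustration in NumPy, not the authors' OpenCL implementation.

```python
import numpy as np

def polyenergetic_ray(lengths, mu, weights, n0=1.0e5, rng=None):
    """Noisy detector counts for one ray (illustrative sketch).

    lengths: (n_voxels,) intersection lengths from a Siddon-style ray tracer
    mu:      (n_energies, n_voxels) linear attenuation coefficients,
             looked up per tissue type and energy bin E(n)
    weights: (n_energies,) spectrum weights w(n), summing to 1
    n0:      unattenuated photon count
    """
    line_integrals = mu @ lengths                      # one per energy bin
    fluence = n0 * np.dot(weights, np.exp(-line_integrals))
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.poisson(fluence)                        # counting noise
```

    Because each ray and each energy bin is independent, this maps naturally onto the task- and data-parallel partitioning the authors describe.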

  5. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnosis-related groups (DRGs), for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors assume that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
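    The document-oriented approach described here amounts to encoding the chapter/block/category hierarchy of ICD-10 directly as nested elements. A minimal sketch using Python's `xml.etree.ElementTree` follows; the element and attribute names are invented for illustration and are not the authors' schema, although the codes and titles are real ICD-10 entries.

```python
import xml.etree.ElementTree as ET

# Hypothetical markup for a small ICD-10 fragment (element names illustrative).
chapter = ET.Element("chapter", code="IX",
                     title="Diseases of the circulatory system")
block = ET.SubElement(chapter, "block", code="I20-I25",
                      title="Ischaemic heart diseases")
category = ET.SubElement(block, "category", code="I21",
                         title="Acute myocardial infarction")
ET.SubElement(category, "subcategory", code="I21.0",
              title="Acute transmural myocardial infarction of anterior wall")

icd_xml = ET.tostring(chapter, encoding="unicode")
print(icd_xml)
```

    A hierarchy expressed this way keeps the coding rules and multilingual titles in data rather than code: a translated title can live in a language-qualified attribute or child element without changing the structure that coding software navigates.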

  6. Multidetector CT evaluation of central airways stenoses: Comparison of virtual bronchoscopy, minimal-intensity projection, and multiplanar reformatted images

    Directory of Open Access Journals (Sweden)

    Dinesh K Sundarakumar

    2011-01-01

    Aims: To evaluate the diagnostic utility of virtual bronchoscopy, multiplanar reformatted images, and minimal-intensity projection in assessing airway stenoses. Settings and Design: It was a prospective study involving 150 patients with symptoms of major airway disease. Materials and Methods: Fifty-six patients were selected for analysis based on the detection of major airway lesions on fiber-optic bronchoscopy (FB) or routine axial images. Comparisons were made between axial images, virtual bronchoscopy (VB), minimal-intensity projection (minIP), and multiplanar reformatted (MPR) images, using FB as the gold standard. Lesions were evaluated in terms of degree of airway narrowing, distance from the carina, length of the narrowed segment, and visualization of the airway distal to the lesion. Results: MPR images had the highest degree of agreement with FB (κ = 0.76) in the depiction of degree of narrowing. minIP had the least agreement with FB (κ = 0.51) in this regard. Distal visualization was best on MPR images (84.2%), followed by axial images (80.7%), whereas FB could visualize the lesions in only 45.4% of the cases. VB had the best agreement with FB in assessing segment length (κ = 0.62). Overall, there were no statistically significant differences in the measurement of the distance from the carina among the axial, minIP, and MPR images. MPR images had the highest overall degree of confidence, namely 70.17% (n = 40). Conclusion: Three-dimensional reconstruction techniques were found to improve lesion evaluation compared with axial images alone. MPR images were the most useful for lesion evaluation and provided additional information useful for surgical and airway interventions in tracheobronchial stenosis. minIP was useful in the overall depiction of airway anatomy.
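The inter-technique agreement figures reported above are kappa statistics. A minimal Cohen's kappa computation, run here on made-up stenosis grades rather than the study's data, looks like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # expected agreement if the two raters graded independently
    expected = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Made-up stenosis grades (0-3) for ten lesions: FB vs. MPR readings
fb = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]
mpr = [0, 1, 2, 3, 1, 1, 0, 3, 2, 2]
print(round(cohens_kappa(fb, mpr), 2))  # → 0.73
```

Values near 0.76, as reported for MPR vs. FB, indicate substantial agreement on the usual interpretation scale.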

  7. Super Resolution Image Enhancement for a Flash Lidar: Back Projection Method

    Science.gov (United States)

    Bulyshev, Alexander; Hines, Glenn; Vanek, Michael; Amzajerdian, Farzin; Reisse, Robert; Pierrottet, Diego

    2010-01-01

    In this paper a new image processing technique for flash LIDAR data is presented as a potential tool to enable safe and precise spacecraft landings in future robotic or crewed lunar and planetary missions. Flash LIDARs can generate, in real time, range data that can be interpreted as a 3-dimensional (3-D) image and transformed into a corresponding digital elevation map (DEM). The NASA Autonomous Landing and Hazard Avoidance Technology (ALHAT) project is capitalizing on this new technology by developing, testing and analyzing flash LIDARs to detect hazardous terrain features such as craters, rocks, and slopes during the descent phase of spacecraft landings. Using a flash LIDAR for this application looks very promising; however, through theoretical and simulation analysis the ALHAT team has determined that a single frame, or mosaic, of flash LIDAR data may not be sufficient to build a landing site DEM with acceptable spatial resolution, precision, size, or, for a mosaic, in time, to meet current system requirements. One way to overcome this potential limitation is to enhance the flash LIDAR output images. We propose a new super-resolution algorithm applicable to flash LIDAR range data that will create a DEM with sufficient accuracy, precision and size to meet current ALHAT requirements. The performance of our super-resolution algorithm is analyzed by processing data generated during a series of simulation runs by a high-fidelity model of a flash LIDAR imaging a high-resolution synthetic lunar elevation map. The flash LIDAR model is attached to a simulated spacecraft by a gimbal that points the LIDAR to a target landing site. For each simulation run, a sequence of flash LIDAR frames is recorded and processed as the spacecraft descends toward the landing site. Each run has a different trajectory profile with varying LIDAR look angles of the terrain. We process the output LIDAR frames using our SR algorithm and the results show that the achieved level of accuracy and precision of

  8. A Novel Image Alignment Algorithm Based on Rotation-Discriminating Ring-Shifted Projection for Automatic Optical Inspection

    Directory of Open Access Journals (Sweden)

    Chin-Sheng Chen

    2016-05-01

    This paper proposes a novel image alignment algorithm based on rotation-discriminating ring-shifted projection for automatic optical inspection. The algorithm not only identifies the location of the template image within an inspection image but also provides precise rotation information during the template-matching process by using a novel rotation estimation scheme, the so-called ring-shifted technique. We use a two-stage framework with an image-pyramid search to realize the proposed image alignment algorithm. In the first stage, a similarity measure based on hybrid projection transformation, combined with the image-pyramid search, is employed for quick selection and location of candidates in the inspection image. In the second stage, the rotation angle of the object is estimated by the novel ring-shifted technique; the estimation is performed only for the most likely candidate, i.e., the one with the highest similarity in the first stage. The experimental results show that the proposed method provides accurate estimation for template matching with arbitrary rotations and is applicable under various environmental conditions.
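The core idea behind a ring-shifted rotation estimate is that rotating an object about a ring's centre circularly shifts the intensities sampled along that ring. The brute-force sketch below illustrates this; the sampling and scoring details are simplified assumptions, not the authors' implementation:

```python
import math

def ring_samples(img, cx, cy, radius, n=360):
    """Sample intensities on a ring around (cx, cy); a rotation of the
    object becomes a circular shift of this 1-D signal."""
    out = []
    for k in range(n):
        a = 2 * math.pi * k / n
        x = int(round(cx + radius * math.cos(a)))
        y = int(round(cy + radius * math.sin(a)))
        out.append(img[y][x])
    return out

def estimate_rotation(ring_t, ring_s):
    """Brute-force circular shift maximising the correlation between the
    template ring and the scene ring; returns degrees (n samples/turn)."""
    n = len(ring_t)
    best_shift, best_score = 0, float("-inf")
    for s in range(n):
        score = sum(ring_t[k] * ring_s[(k + s) % n] for k in range(n))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift * 360 // n

# a template ring with one bright sample, and the same ring rotated by 45 deg
ring_t = [1.0 if k == 10 else 0.0 for k in range(360)]
ring_s = [1.0 if k == 55 else 0.0 for k in range(360)]
print(estimate_rotation(ring_t, ring_s))  # → 45
```

In practice this shift search would run only on the best candidate from the first (pyramid-search) stage, as the abstract describes.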

  9. Improvement of Self-Image; Public Law 89-10, Title I-Project 1939N, Evaluation.

    Science.gov (United States)

    Hartog, John F.; Modlinger, Roy

    This report describes a Federally-financed project to improve the self-image of disadvantaged pupils living in two institutions for neglected children. After a week of orientation the children were exposed to 3 weeks of camping environment. Program activities included small group counseling, independent study, physical education, and music, drama…

  10. Forward ray tracing for image projection prediction and surface reconstruction in the evaluation of corneal topography systems

    NARCIS (Netherlands)

    Snellenburg, J.J.; Braaf, B.; Hermans, E.A.; Heijde, van der R.G.L.; Sicam, V.A.

    2010-01-01

    A forward ray tracing (FRT) model is presented to determine the exact image projection in a general corneal topography system. Consequently, the skew ray error in Placido-based topography is demonstrated. A quantitative analysis comparing FRT-based algorithms and Placido-based algorithms in reconstr

  11. High dynamic range imaging for fringe projection profilometry with single-shot raw data of the color camera

    Science.gov (United States)

    Yin, Yongkai; Cai, Zewei; Jiang, Hao; Meng, Xiangfeng; Xi, Jiangtao; Peng, Xiang

    2017-02-01

    Obtaining satisfactory 3D imaging results for shiny surfaces with fringe projection profilometry (FPP) is challenging, as the wide variation of surface reflectance on a shiny surface leads to poor exposure, which calls for a high dynamic range imaging (HDRI) technique. HDRI with monochromatic illumination and single-shot raw data of the color camera is presented in this paper. From the single-shot raw data, four monochrome sub-images corresponding to the R, G, G, and B channels can be separated. After the attenuation ratios between the R&G and G&B channels are calibrated, an image with a higher dynamic range can be synthesized from the four sub-images, which helps avoid the impact of poor exposure and improves the accuracy of phase calculation. Experiments demonstrate the validity of the proposed method for shiny surfaces.
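A minimal sketch of the pipeline, assuming an RGGB Bayer layout and already-calibrated per-channel attenuation ratios (all values and the fusion rule are illustrative, not the authors'):

```python
def split_bayer(raw, w, h):
    """Split a row-major RGGB Bayer mosaic into its 4 sub-images."""
    r = [[raw[y * w + x] for x in range(0, w, 2)] for y in range(0, h, 2)]
    g1 = [[raw[y * w + x + 1] for x in range(0, w, 2)] for y in range(0, h, 2)]
    g2 = [[raw[(y + 1) * w + x] for x in range(0, w, 2)] for y in range(0, h, 2)]
    b = [[raw[(y + 1) * w + x + 1] for x in range(0, w, 2)] for y in range(0, h, 2)]
    return r, g1, g2, b

def fuse_hdr(subs, ratios, saturation=255):
    """Fuse sub-images into one higher-dynamic-range image.

    ratios -- per-channel attenuation relative to a reference channel
              (calibrated in the paper; simply assumed here).
    Per pixel: keep the largest unsaturated raw response (best SNR)
    and undo that channel's attenuation.
    """
    h, w = len(subs[0]), len(subs[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            pairs = [(sub[y][x], k) for sub, k in zip(subs, ratios)
                     if sub[y][x] < saturation]
            if pairs:
                v, k = max(pairs)
                out[y][x] = v / k
            else:
                out[y][x] = saturation / min(ratios)  # all channels clipped
    return out

# one 2x2 mosaic tile: R saturated, the attenuated channels still usable
subs = split_bayer([255, 120, 130, 60], 2, 2)
print(fuse_hdr(subs, (1.0, 0.5, 0.5, 0.25)))  # → [[260.0]]
```

The point of the single-shot scheme is visible even in this toy tile: the saturated R sample is discarded, while a more attenuated channel still carries usable (and rescalable) signal.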

  12. Improving image quality in Electrical Impedance Tomography (EIT) using Projection Error Propagation-based Regularization (PEPR) technique: A simulation study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-03-01

    A Projection Error Propagation-based Regularization (PEPR) method is proposed to improve reconstructed image quality in Electrical Impedance Tomography (EIT). A projection error is produced by the misfit between the calculated and measured data in the reconstruction process. The variation of the projection error is integrated with the response matrix in each iteration, and the reconstruction is carried out in EIDORS. The PEPR method is studied with simulated boundary data for different inhomogeneity geometries. Simulated results demonstrate that the PEPR technique improves image reconstruction precision in EIDORS and hence can be successfully implemented to increase reconstruction accuracy in EIT. doi:10.5617/jeb.158. J Electr Bioimp, vol. 2, pp. 2-12, 2011

  13. Imaging source process of earthquakes from back-projection of high frequency seismograms

    Science.gov (United States)

    Pulido, N.

    2007-12-01

    Standard methodologies for calculating the earthquake source process are based on inversion procedures that require the calculation of complete source-station Green's functions. Alternative procedures have been developed to directly retrieve an image of the rupture process from high-frequency seismograms (Spudich et al. 1984, Kao and Shan 2004, Ishii et al. 2005). In this study we extend the Isochron-Backprojection Method (IBM; Festa et al., 2006) to image the source process of earthquakes by incorporating high-frequency seismograms recorded around the source area. We take full advantage of the dense strong-motion networks available in Japan to model the source process of recent Japanese earthquakes. The IBM differs from conventional earthquake source inversion approaches in that the calculation of Green's functions is not required. The idea of the procedure is to directly back-project the amplitudes of seismogram envelopes around the source onto a space image of the earthquake rupture (Pulido et al. 2007). The method requires the calculation of theoretical travel times between a set of grid points distributed across the fault plane and every station. For this purpose, and for simplicity, we assume a multi-layered 1D model. All travel times are adjusted by a station correction factor, calculated by taking the difference between observed and theoretical travel times at each station. Next we calculate the rupture time of every grid point within the fault plane by assuming some arbitrary constant rupture velocity, and obtain the isochrone distribution across the fault plane by adding subfault rupture times and the corresponding travel times for every station. We select waveforms that have clear P and S wavelets, which means stations located approximately between 40 km and 100 km from the epicenter. We extract P-wave windows between the origin time of the earthquake and the theoretical arrival of the S-wave, and taper 1 s of
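The core back-projection step, stacking envelope amplitudes at travel-time-shifted samples over candidate grid points, can be sketched as follows. This toy version assumes a homogeneous velocity instead of the layered 1-D model and omits station corrections:

```python
import math

def back_project(grid, stations, envelopes, dt, velocity=6.0):
    """Stack station envelope amplitudes at each candidate grid point.

    grid      -- candidate source points (x, y) in km
    stations  -- station positions (x, y) in km
    envelopes -- one amplitude envelope per station, sampled every dt s
    A homogeneous P velocity (km/s) stands in for the layered 1-D model;
    station corrections are omitted.
    """
    image = []
    for gx, gy in grid:
        stack = 0.0
        for (sx, sy), env in zip(stations, envelopes):
            t = math.hypot(gx - sx, gy - sy) / velocity
            idx = int(round(t / dt))
            if idx < len(env):
                stack += env[idx]
        image.append(stack / len(stations))
    return image

# true source at (0, 0): both stations see their envelope peak at the
# predicted travel time, so the stack is largest at the true source
grid = [(0, 0), (6, 6)]
stations = [(6, 0), (0, 12)]
envelopes = [[0, 1, 0, 0], [0, 0, 1, 0]]  # peaks at 1 s and 2 s
print(back_project(grid, stations, envelopes, dt=1.0))  # → [1.0, 0.5]
```

Grid points whose predicted travel times align the envelope peaks across many stations stack coherently, which is what produces the rupture image without any Green's functions.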

  14. Comparison of pure and hybrid iterative reconstruction techniques with conventional filtered back projection: Image quality assessment in the cervicothoracic region

    Energy Technology Data Exchange (ETDEWEB)

    Katsura, Masaki, E-mail: mkatsura-tky@umin.ac.jp [Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655 (Japan); Sato, Jiro; Akahane, Masaaki; Matsuda, Izuru; Ishida, Masanori; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni [Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655 (Japan)

    2013-02-15

    Objectives: To evaluate the impact on image quality of three different image reconstruction techniques in the cervicothoracic region: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Methods: Forty-four patients underwent unenhanced standard-of-care clinical computed tomography (CT) examinations which included the cervicothoracic region with a 64-row multidetector CT scanner. Images were reconstructed with FBP, 50% ASIR-FBP blending (ASIR50), and MBIR. Two radiologists assessed the cervicothoracic region in a blinded manner for streak artifacts, pixelated blotchy appearances, critical reproduction of visually sharp anatomical structures (thyroid gland, common carotid artery, and esophagus), and overall diagnostic acceptability. Objective image noise was measured in the internal jugular vein. Data were analyzed using the sign test and pair-wise Student's t-test. Results: MBIR images had significantly lower quantitative image noise (8.88 ± 1.32) compared to ASIR images (18.63 ± 4.19, P < 0.01) and FBP images (26.52 ± 5.8, P < 0.01). Significant improvements in streak artifacts of the cervicothoracic region were observed with the use of MBIR (P < 0.001 each for MBIR vs. the other two image data sets for both readers), while no significant difference was observed between ASIR and FBP (P > 0.9 for ASIR vs. FBP for both readers). MBIR images were all diagnostically acceptable. Unique features of MBIR images included pixelated blotchy appearances, which did not adversely affect diagnostic acceptability. Conclusions: MBIR significantly improves image noise and streak artifacts of the cervicothoracic region over ASIR and FBP. MBIR is expected to enhance the value of CT examinations for areas where image noise and streak artifacts are problematic.
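The noise comparisons above rely on a paired Student's t-test, since each patient contributes one measurement per reconstruction. A minimal version of the statistic, run on made-up per-patient noise values rather than the study's data:

```python
import math
import statistics

def paired_t(a, b):
    """Paired Student's t statistic for two matched samples:
    mean of per-subject differences over its standard error."""
    d = [x - y for x, y in zip(a, b)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))

# made-up per-patient noise values (HU): FBP vs. MBIR reconstructions
noise_fbp = [10.0, 12.0, 9.0, 11.0]
noise_mbir = [8.0, 9.0, 8.0, 9.0]
print(round(paired_t(noise_fbp, noise_mbir), 2))  # → 4.9
```

The resulting t value would then be compared against the t distribution with n - 1 degrees of freedom to obtain the P values quoted in the abstract.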

  15. Did Caravaggio employ optical projections? An image analysis of the parity in the artist's paintings

    Science.gov (United States)

    Stork, David G.

    2011-03-01

    We examine one class of evidence put forth in support of the recent claim that the Italian Baroque master Caravaggio secretly employed optical projectors as a direct drawing aid. Specifically, we test the claims that there is an "abnormal number" of left-handed figures in his works and, more specifically, that "During the Del Monte period he had too many left-handed models." We also test whether there was a reversal in the handedness of specific models in different paintings. Such evidence would be consistent with the claim that Caravaggio switched from using a convex-lens projector to using a concave-mirror projector and would support, but not prove, the claim that Caravaggio used optical projections. We estimate the parity (+ or -) of each of Caravaggio's 76 appropriate oil paintings based on the handedness of figures, the orientation of asymmetric objects, placement of scabbards, depicted text, and so on, and search for statistically significant changes in handedness in figures. We also track the direction of the illumination over time in the artist's oeuvre. We discuss some historical evidence as it relates to the question of his possible use of optics. We find the proportion of left-handed figures lower than that in the general population (not higher), and no significant change in estimated handedness even of individual models. Optical proponents have argued that Bacchus (1597) portrays a left-handed figure, but we give visual and cultural evidence showing that this figure is instead right-handed, thereby rebutting this claim that the painting was executed using optical projections. Moreover, scholars recently re-discovered the image of the artist with easel and canvas reflected in the carafe of wine at the front left in the tableau in Bacchus, showing that this painting was almost surely executed using traditional (non-optical) easel methods. We conclude that there is 1) no statistically significant abnormally high number of left-handed figures in Caravaggio's oeuvre, including

  16. Methane Band and Continuum Band Imaging of Titan's Atmosphere Using Cassini ISS Narrow Angle Camera Pictures from the CURE/Cassini Imaging Project

    Science.gov (United States)

    Shitanishi, Jennifer; Gillam, S. D.

    2009-05-01

    The study of Titan's atmosphere, which bears resemblance to early Earth's, may help us understand more of our own. Constructing a Monte Carlo model of Titan's atmosphere is helpful to achieve this goal. Methane (MT) and continuum band (CB) images of Titan taken by the CURE/Cassini Imaging Project using the Cassini Narrow Angle Camera (NAC) were analyzed. They were scheduled by Cassini Optical Navigation. Images were obtained at phases 53°, 112°, 161°, and 165°. They include 22 total MT1 (center wavelength 619 nm), MT2 (727 nm), MT3 (889 nm), CB1 (635 nm), CB2 (751 nm), and CB3 (938 nm) images. They were reduced with previously written scripts using the National Optical Astronomy Observatory Image Reduction and Analysis Facility (IRAF) scientific analysis suite. Corrections for horizontal and vertical banding and cosmic-ray hits were made. The MT images were registered with the corresponding CB images to ensure that subsequently measured flux ratios came from the same parts of the atmosphere. Preliminary DN limb-to-limb scans and loci of the haze layers will be presented. Accurate estimates of the sub-spacecraft points on each picture will be presented. Flux ratios (FMT/FCB=Q0) along the scans and total absorption coefficients along the lines of sight from the spacecraft through the pixels (and into Titan) will also be presented.

  17. The Pilot Project 'Optical Image Correlation' of the ESA Geohazards Thematic Exploitation Platform (GTEP)

    Science.gov (United States)

    Stumpf, André; Malet, Jean-Philippe

    2016-04-01

    For more than 20 years, "Earth Observation" (EO) satellites developed or operated by ESA have provided a wealth of data. In the coming years, the Sentinel missions, along with the Copernicus Contributing Missions, Earth Explorers and other Third Party missions, will provide routine monitoring of our environment at the global scale, thereby delivering an unprecedented amount of data. While the availability of the growing volume of environmental data from space represents a unique opportunity for science, general R&D, and applications, it also poses major challenges to fully exploiting the potential of archived and daily incoming datasets. Those challenges comprise not only the discovery, access, processing, and visualization of large data volumes but also an increasing diversity of data sources and end users from different fields (e.g. EO, in-situ monitoring, and modeling). In this context, the GTEP (Geohazards Thematic Exploitation Platform) initiative aims to build an operational distributed processing platform to maximize the exploitation of EO data from past and future satellite missions for the detection and monitoring of natural hazards. This presentation focuses on the "Optical Image Correlation" Pilot Project (funded by ESA within the GTEP platform), whose objectives are to develop an easy-to-use, flexible and distributed processing chain for: 1) the automated reconstruction of surface Digital Elevation Models from stereo (and tri-stereo) pairs of Spot 6/7 and Pléiades satellite imagery, 2) the creation of ortho-images (panchromatic and multi-spectral) of Landsat 8, Sentinel-2, Spot 6/7 and Pléiades scenes, and 3) the calculation of horizontal (E-N) displacement vectors based on sub-pixel image correlation. The processing chain is being implemented in the GEP cloud-based (Hadoop, MapReduce) environment and is designed for analysis of surface displacements at local to regional scale (10-1000 km2), targeting in particular co-seismic displacement and slow
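The sub-pixel correlation step can be illustrated in 1-D with cross-correlation plus parabolic interpolation around the integer peak. The platform itself correlates 2-D image tiles; this is only a sketch of the principle:

```python
def subpixel_shift(a, b):
    """Shift of signal b relative to a via cross-correlation with
    parabolic interpolation around the integer peak (a 1-D stand-in
    for 2-D tile correlation)."""
    n = len(a)
    max_lag = n // 2
    scores = []
    for lag in range(-max_lag, max_lag + 1):
        scores.append(sum(a[i] * b[i + lag]
                          for i in range(n) if 0 <= i + lag < n))
    k = scores.index(max(scores))
    frac = 0.0
    if 0 < k < len(scores) - 1:
        # fit a parabola through the three scores around the peak
        y0, y1, y2 = scores[k - 1], scores[k], scores[k + 1]
        denom = y0 - 2 * y1 + y2
        if denom:
            frac = 0.5 * (y0 - y2) / denom
    return (k - max_lag) + frac

# b is a copy of a displaced by exactly 2 samples
a = [0, 0, 0, 1, 2, 1, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
print(subpixel_shift(a, b))  # → 2.0
```

With real imagery the correlation peak rarely falls on an integer lag, and the parabolic fit is what recovers the sub-pixel fraction that co-seismic displacement mapping depends on.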

  18. The Yosemite Extreme Panoramic Imaging Project: Monitoring Rockfall in Yosemite Valley with High-Resolution, Three-Dimensional Imagery

    Science.gov (United States)

    Stock, G. M.; Hansen, E.; Downing, G.

    2008-12-01

    Yosemite Valley experiences numerous rockfalls each year, with over 600 rockfall events documented since 1850. However, monitoring rockfall activity has proved challenging without high-resolution "basemap" imagery of the Valley walls. The Yosemite Extreme Panoramic Imaging Project, a partnership between the National Park Service and xRez Studio, has created an unprecedented image of Yosemite Valley's walls by utilizing gigapixel panoramic photography, LiDAR-based digital terrain modeling, and three-dimensional computer rendering. Photographic capture was accomplished by 20 separate teams shooting from key overlapping locations throughout Yosemite Valley. The shots were taken simultaneously in order to ensure uniform lighting, with each team taking over 500 overlapping shots from each vantage point. Each team's shots were then assembled into 20 gigapixel panoramas. In addition, all 20 gigapixel panoramas were projected onto a 1 meter resolution digital terrain model in three-dimensional rendering software, unifying Yosemite Valley's walls into a vertical orthographic view. The resulting image reveals the geologic complexity of Yosemite Valley in high resolution and represents one of the world's largest photographic captures of a single area. Several rockfalls have already occurred since image capture, and repeat photography of these areas clearly delineates rockfall source areas and failure dynamics. Thus, the imagery has already proven to be a valuable tool for monitoring and understanding rockfall in Yosemite Valley. It also sets a new benchmark for the quality of information a photographic image, enabled with powerful new imaging technology, can provide for the earth sciences.

  19. An object tracking and global localization method using the cylindrical projection of omnidirectional image

    Institute of Scientific and Technical Information of China (English)

    孙英杰; 曹其新; 洪炳熔

    2004-01-01

    We present an omnidirectional vision system we have implemented to provide our mobile robot with fast tracking and robust localization capabilities. An algorithm is proposed for reconstruction of the environment from the omnidirectional image and global localization of the robot in the context of the RoboCup Middle Size League field. This is accomplished by learning a set of visual landmarks such as the goals and the corner posts. Because of the dynamically changing environment and the partially observable landmarks, four localization cases are discussed in order to obtain robust localization performance. Localization is performed using a method that matches the observed landmarks, i.e., color blobs, extracted from the environment. The advantages of the cylindrical projection are discussed, giving special consideration to the characteristics of the visual landmarks and the meaning of blob extraction. The analysis is based on real-time experiments with our omnidirectional vision system and the actual mobile robot. Comparative studies are presented and the feasibility of the method is shown.
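The cylindrical projection amounts to resampling the annular omnidirectional image along rays of constant azimuth. A geometric sketch under an idealized mirror model, with all parameters illustrative:

```python
import math

def cylindrical_unwarp_coords(pan_w, pan_h, cx, cy, r_in, r_out):
    """Map each (col, row) of a cylindrical panorama to a source pixel in
    the raw omnidirectional image, modeled as an annulus centred at
    (cx, cy): columns sweep azimuth 0..2*pi, rows sweep radius r_out
    (top) down to r_in (bottom). The geometry here is an idealization;
    a real catadioptric rig needs its mirror profile calibrated.
    """
    mapping = {}
    for col in range(pan_w):
        theta = 2.0 * math.pi * col / pan_w
        for row in range(pan_h):
            r = r_out - (r_out - r_in) * row / max(pan_h - 1, 1)
            x = cx + r * math.cos(theta)
            y = cy + r * math.sin(theta)
            mapping[(col, row)] = (int(round(x)), int(round(y)))
    return mapping

m = cylindrical_unwarp_coords(4, 2, 100, 100, 20, 40)
print(m[(0, 0)], m[(1, 0)], m[(0, 1)])  # → (140, 100) (100, 140) (120, 100)
```

After this unwarping, landmarks such as goals and corner posts appear at azimuths directly proportional to panorama column, which is what makes blob-based bearing measurements straightforward.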

  20. Using commercially available off-the-shelf software and hardware to develop an intranet-based hypertext markup language teaching file.

    Science.gov (United States)

    Wendt, G J

    1999-05-01

    This presentation describes the technical details of implementing a process to create digital teaching files stressing the use of commercial off-the-shelf (COTS) software and hardware and standard hypertext markup language (HTML) to keep development costs to a minimum.

  1. Wide-field wide-band interferometric imaging:The WB A-Projection and hybrid algorithms

    CERN Document Server

    Bhatnagar, S; Golap, K

    2013-01-01

    Variations of the antenna primary beam (PB) pattern as a function of time, frequency and polarization form one of the dominant direction-dependent effects at most radio frequency bands. These gains may also vary from antenna to antenna. The A-Projection algorithm, published earlier, accounts for the effects of the narrow-band antenna PB in full polarization. In this paper we present the Wide-Band A-Projection algorithm (WB A-Projection) to include the effects of wide bandwidth in the A-term itself and show that the resulting algorithm simultaneously corrects for the time, frequency and polarization dependence of the PB. We discuss the combination of the WB A-Projection and the Multi-term Multi Frequency Synthesis (MT-MFS) algorithm for simultaneous mapping of the sky brightness distribution and the spectral index distribution across a wide field of view. We also discuss the use of the narrow-band A-Projection algorithm in hybrid imaging schemes that account for the frequency dependence of the PB in the image ...

  2. Image quality and dose in mammography in 17 countries in Africa, Asia and Eastern Europe: Results from IAEA projects

    Energy Technology Data Exchange (ETDEWEB)

    Ciraj-Bjelac, Olivera, E-mail: ociraj@vinca.rs [Vinca Institute of Nuclear Sciences, Belgrade (Serbia); Avramova-Cholakova, Simona, E-mail: s_avramova@mail.bg [National Centre of Radiobiology and Radiation Protection (NCRRP), Ministry of Health, Sofia (Bulgaria); Beganovic, Adnan, E-mail: adnanbeg@gmail.com [University of Sarajevo, Institute of Radiology, Sarajevo, Bosnia and Herzegovina (Bosnia and Herzegovina); Economides, Sotirios, E-mail: adnanbeg@gmail.com [Ministry of Development, Greek Atomic Energy Commission, Athens (Greece); Faj, Dario, E-mail: dariofaj@mefos.hr [University Hospital Osijek, Osijek (Croatia); Gershan, Vesna, E-mail: vgersan@gmail.com [National Commission on Radiation Protection, Institute of Radiology, Skopje, The Former Yugoslav Republic of Macedonia (Macedonia, The Former Yugoslav Republic of); Grupetta, Edward, E-mail: edward.gruppetta@gov.mt [St. Luke' s Hospital, Diagnostic Radiology Unit, Guardamangia (Malta); Kharita, M.H., E-mail: mhkharita@aec.org.sy [Atomic Energy Commission of Syria (AECS), Department of Protection and Safety, Radiation and Nuclear Regulatory Office, Damascus (Syrian Arab Republic); Milakovic, Milomir, E-mail: mmilomir@teol.net [Ministry of Health of the Republic of Srpska, Public Health Institute of Republic of Srpska, Banja Luka, Bosnia and Herzegovina (Bosnia and Herzegovina); Milu, Constantin, E-mail: milu.constantin@yahoo.com [Institute of Public Health, SSDL, Bucharest (Romania); Muhogora, Wilbroad E., E-mail: wmuhogora@yahoo.com [Tanzania Atomic Energy Commission, Arusha, Tanzania (Tanzania, United Republic of); Muthuvelu, Pirunthavany, E-mail: mpvany@gmail.com [Ministry of Health, Radiation Health Safety Branch, Putra Jaya (Malaysia); Oola, Samuel, E-mail: ooladavidson@yahoo.com [Mulago Hospital, Department of Radiology, Kampala (Uganda); Setayeshi, Saeid, E-mail: setayesh@aut.ac.ir [Ministry of Health, Treatment, and Medical Training, Tehran (Iran, Islamic Republic of); and others

    2012-09-15

    Purpose: The objective is to study mammography practice from an optimisation point of view by assessing the impact of simple and immediately implementable corrective actions on image quality. Materials and methods: This prospective multinational study included 54 mammography units in 17 countries. More than 21,000 mammography images were evaluated using a three-level image quality scoring system. Following initial assessment, appropriate corrective actions were implemented and image quality was re-assessed in 24 units. Results: The fraction of images that were considered acceptable without any remark in the first phase (before the implementation of corrective actions) was 70% and 75% for cranio-caudal and medio-lateral oblique projections, respectively. The main causes of poor image quality before corrective actions were related to film processing, damaged or scratched image receptors, film-screen combinations that were not spectrally matched, inappropriate radiographic techniques and lack of training. The average glandular dose to a standard breast was 1.5 mGy (range 0.59–3.2 mGy). After optimisation the frequency of poor-quality images decreased, but the relative contributions of the various causes remained similar. Image quality improvements following appropriate corrective actions were up to 50 percentage points in some facilities. Conclusions: Poor image quality is a major source of unnecessary radiation dose to the breast. An increased awareness of good-quality mammograms is of particular importance for countries that are moving towards introduction of population-based screening programmes. The study demonstrated how simple and low-cost measures can be a valuable tool in improving image quality in mammography.

  3. Restructuring an EHR system and the Medical Markup Language (MML) standard to improve interoperability by archetype technology.

    Science.gov (United States)

    Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki

    2015-01-01

    In 2001, we developed an EHR system for regional healthcare information exchange and for providing individual patient data to patients. This system was adopted in three regions in Japan. We also developed the Medical Markup Language (MML) standard for inter- and intra-hospital communications. The system was built on a legacy platform, however, and had not been appropriately maintained or updated to meet clinical requirements. To reduce future maintenance costs, we reconstructed the EHR system using archetype technology on the Ruby on Rails platform, and generated MML-equivalent forms from archetypes. The system was deployed as a cloud-based system for preliminary use as a regional EHR. The system now has the capability to keep up with new requirements while maintaining semantic interoperability through archetype technology. It is also more flexible than the legacy EHR system.

  4. The Biological Connection Markup Language: a SBGN-compliant format for visualization, filtering and analysis of biological pathways.

    Science.gov (United States)

    Beltrame, Luca; Calura, Enrica; Popovici, Razvan R; Rizzetto, Lisa; Guedez, Damariz Rivero; Donato, Michele; Romualdi, Chiara; Draghici, Sorin; Cavalieri, Duccio

    2011-08-01

    Many models and analyses of signaling pathways have been proposed. However, none of them takes into account that a biological pathway is not a fixed system; instead, it depends on the organism, tissue and cell type, as well as on physiological, pathological and experimental conditions. The Biological Connection Markup Language (BCML) is a format to describe, annotate and visualize pathways. BCML can store multiple pieces of information, permitting a selective view of the pathway as it exists and/or behaves in specific organisms, tissues and cells. Furthermore, BCML can be automatically converted into data formats suitable for analysis and into a fully SBGN-compliant graphical representation, making it an important tool that can be used by both computational biologists and 'wet lab' scientists. The XML schema and the BCML software suite are freely available under the LGPL for download at http://bcml.dc-atlas.net. They are implemented in Java and supported on MS Windows, Linux and OS X.

  5. The latest MML (Medical Markup Language) version 2.3--XML-based standard for medical data exchange/storage.

    Science.gov (United States)

    Guo, Jinqiu; Araki, Kenji; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Takada, Akira; Suzuki, Toshiaki; Nakashima, Yusei; Yoshihara, Hiroyuki

    2003-08-01

    As a set of standards, the Medical Markup Language (MML) has been developed over the last 8 years to allow the exchange of medical data between different medical information providers. MML version 2.21, characterized by XML as its metalanguage, was announced in 1999, at which time full-scale implementation tests were carried out; subsequently, various informational and functional inadequacies were discovered in this version. MML was therefore updated to version 2.3 in 2001. At present, MML contains 12 modules, including the new referral, test result, and report modules. In version 2.3, the group ID element was added, and the access right definition and health insurance module were amended.

  6. Active-source seismic imaging below Lake Malawi (Nyasa) from the SEGMeNT project

    Science.gov (United States)

    Shillington, D. J.; Scholz, C. A.; Gaherty, J. B.; Accardo, N. J.; McCartney, T.; Chindandali, P. R. N.; Kamihanda, G.; Trinhammer, P.; Wood, D. A.; Khalfan, M.; Ebinger, C. J.; Nyblade, A.; Mbogoni, G. J.; Mruma, A. H.; Salima, J.; Ferdinand-Wambura, R.

    2015-12-01

    Little is known about the controls on the initiation and development of magmatism and segmentation in young rift systems. The northern Lake Malawi (Nyasa) rift in the East African Rift System is an early stage rift exhibiting pronounced tectonic segmentation, which is defined in the upper crust by ~100-km-long border faults. Very little volcanism is associated with rifting; the only surface expression of magmatism occurs in an accommodation zone between segments to the north of the lake in the Rungwe Volcanic Province. The SEGMeNT (Study of Extension and maGmatism in Malawi aNd Tanzania) project is a multidisciplinary, multinational study that is acquiring a suite of geophysical, geological and geochemical data to characterize deformation and magmatism in the crust and mantle lithosphere along 2-3 segments of this rift. As a part of the SEGMeNT project, we acquired seismic reflection and refraction data in Lake Malawi (Nyasa) in March-April 2015. Over 2000 km of seismic reflection data were acquired with a 500 to 2580 cu in air gun array from GEUS/Aarhus and a 500- to 1500-m-long seismic streamer from Syracuse University over a grid of lines across and along the northern and central basins. Air gun shots from MCS profiles and 1000 km of additional shooting with large shot intervals were also recorded on 27 short-period and 6 broadband lake bottom seismometers from Scripps Oceanographic Institute as a part of the Ocean Bottom Seismic Instrument Pool (OBSIP) as well as the 55-station onshore seismic array. The OBS were deployed along one long strike line and two dip lines. We will present preliminary data and results from seismic reflection and refraction data acquired in the lake and their implications for crustal deformation within and between rift segments. Seismic reflection data image structures up to ~5-6 km below the lake bottom, including syntectonic sediments, intrabasinal faults and other complex horsts. Some intrabasinal faults in both the northern and

  7. Descriptive Analysis on the Impacts of Universal Zero-Markup Drug Policy on a Chinese Urban Tertiary Hospital.

    Science.gov (United States)

    Tian, Wei; Yuan, Jiangfan; Yang, Dong; Zhang, Lanjing

    2016-01-01

    The Universal Zero-Markup Drug Policy (UZMDP) mandates no price mark-ups on any drug dispensed by a healthcare institution, and covers medicines not included in China's National Essential Medicine System. Five tertiary hospitals in Beijing, China implemented the UZMDP in 2012. Its impacts on these hospitals are unknown. We described the effects of the UZMDP on a participating hospital, Jishuitan Hospital, Beijing, China (JST). This retrospective longitudinal study examined the hospital-level data of JST and city-level data of tertiary hospitals of Beijing, China (BJT) for 2009-2015. Rank-sum tests and join-point regression analyses were used to assess absolute changes and differences in trends, respectively. In absolute terms, after the UZMDP implementation, there were increased annual patient-visits and decreased ratios of medicine-to-healthcare-charges (RMOH) in JST outpatient and inpatient services; however, in the outpatient service, physician work-days decreased while physician workload and inflation-adjusted per-visit healthcare charges increased, whereas inpatient physician work-days increased and the inpatient mortality-rate fell. Interestingly, the decreasing trend in inpatient mortality-rate was neutralized after UZMDP implementation. Compared with BJT and under the influence of the UZMDP, JST outpatient and inpatient services both had increasing trends in annual patient-visits (annual percentage changes [APC] = 8.1% and 6.5%, respectively) and decreasing trends in RMOH (APC = -4.3% and -5.4%, respectively), while JST outpatient services had an increasing trend in inflation-adjusted per-visit healthcare charges (APC = 3.4%) and the JST inpatient service had a decreasing trend in inflation-adjusted per-visit medicine-charges (APC = -5.2%). Implementation of the UZMDP seems to increase annual patient-visits, reduce RMOH and have different impacts on outpatient and inpatient services in a Chinese urban tertiary hospital.

  8. PLÉIADES PROJECT: ASSESSMENT OF GEOREFERENCING ACCURACY, IMAGE QUALITY, PANSHARPENING PERFORMANCE AND DSM/DTM QUALITY

    Directory of Open Access Journals (Sweden)

    H. Topan

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program, jointly run by France and Italy. They are the first European satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, a second supported by the BEU Scientific Research Project Program, and a third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39-0.46 for the triplet panchromatic images, indicating a satisfying image quality. The SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by most common

  9. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performance and DSM/DTM Quality

    Science.gov (United States)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program, jointly run by France and Italy. They are the first European satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, a second supported by the BEU Scientific Research Project Program, and a third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39-0.46 for the triplet panchromatic images, indicating a satisfying image quality. The SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  10. Fundamental remote science research program. Part 2: Status report of the mathematical pattern recognition and image analysis project

    Science.gov (United States)

    Heydorn, R. P.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for the purpose of making selected inferences about the Earth. This report summarizes the progress that has been made toward this program goal by each of the principal investigators in the MPRIA Program.

  11. Image and diagnosis quality of X-ray image transmission via cell phone camera: a project study evaluating quality and reliability.

    Directory of Open Access Journals (Sweden)

    Hans Goost

    INTRODUCTION: Developments in telemedicine have not produced any relevant benefits for orthopedics and trauma surgery to date. For the present project study, several parameters were examined during assessment of x-ray images that had been photographed and transmitted via cell phone. MATERIALS AND METHODS: A total of 100 x-ray images of various body regions were photographed with a Nokia cell phone and transmitted via email or MMS. Next, the transmitted photographs were reviewed on a laptop computer by five medical specialists and assessed regarding quality and diagnosis. RESULTS: Due to their poor quality, the transmitted MMS images could not be evaluated, and this path of transmission was therefore excluded. The mean size of the transmitted x-ray email images was 394 kB (range: 265-590 kB, SD ± 59); the average transmission time was 3.29 min ± 8 (CI 95%: 1.7-4.9). Applying a score from 1-10 (very poor - excellent), the mean image quality was 5.8. In 83.2 ± 4% (mean value ± SD) of cases (median 82; 80-89%), there was agreement between the final diagnosis and the assessment by the five medical experts who had received the images. However, there was a markedly low concurrence ratio in the thoracic area and in pediatric injuries. DISCUSSION: While the rate of accurate diagnosis and indication for surgery was high, with a concurrence ratio of 83%, considerable differences existed between the assessed regions, with the lowest values for thoracic images. Teleradiology is a cost-effective, rapid method which can be applied wherever wireless cell phone reception is available. In our opinion, this method is in principle suitable for clinical use, enabling the physician on duty to agree on appropriate measures with colleagues located elsewhere via x-ray image transmission on a cell phone.

  12. Teaching strategies for using projected images to develop conceptual understanding: Exploring discussion practices in computer simulation and static image-based lessons

    Science.gov (United States)

    Price, Norman T.

    The availability and sophistication of visual display images, such as simulations, for use in science classrooms has increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active thinking. This mixed-methods study analyzes teacher behavior in lessons using visual media about the particulate model of matter that were taught by three experienced middle school teachers. Each teacher taught one half of their students with lessons using static overheads and the other half with lessons using a projected dynamic simulation. The quantitative analysis of pre-post data found significant gain differences between the two image-mode conditions, suggesting that the students assigned to the simulation condition learned more than students assigned to the overhead condition. Open coding was used to identify a set of eight image-based teaching strategies that teachers were using with visual displays. Fixed codes for this set of image-based discussion strategies were then developed and used to analyze video and transcripts of whole-class discussions from 12 lessons. The image-based discussion strategies were refined over time in a set of three in-depth 2x2 comparative case studies of two teachers teaching one lesson topic with two image display modes. The comparative case study data suggest that the simulation mode may have offered greater affordances than the overhead mode for planning and enacting discussions. The 12 discussions were also coded for overall teacher-student interaction patterns, such as presentation, IRE, and IRF. When teachers moved during a lesson from using no image to using either image mode, some teachers were observed asking more questions when the image was displayed, while others asked many fewer questions. The changes in teacher-student interaction patterns suggest

  13. Design of a Web interface for anatomical images.

    Science.gov (United States)

    Barker, T M; Young, J

    1997-03-01

    Interactive documents for use with the World Wide Web have been developed for viewing multi-dimensional radiographic and visual images of human anatomy, derived from the Visible Human Project. Emphasis has been placed on user-controlled features and selections. The purpose was to develop an interface, independent of the host operating system and browser software, that would allow viewing of information by multiple users. The interfaces were implemented using HyperText Markup Language (HTML) forms, the C programming language and the Perl scripting language. Images were pre-processed using ANALYZE and stored on a Web server in CompuServe GIF format. Viewing options, such as interactive thresholding and two-dimensional slice direction, were included in the document design. The interface is an example of what may be achieved using the World Wide Web. Key applications envisaged for such software include education, research and the accessing of information through internal databases, with simultaneous sharing of images on remote computers by health personnel for diagnostic purposes.
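    The interactive thresholding described above reduces, per request, to a pixel-wise operation on the stored grayscale slice. A minimal sketch in Python (the original system used C and Perl CGI scripts; the function name and sample values here are invented for illustration):

```python
def threshold_slice(image, level):
    """Binarize one grayscale slice: pixels at or above `level` become white.

    In the described system, a CGI script would apply an operation like this
    with the level chosen in the HTML form, then re-encode the result as GIF.
    """
    return [[255 if pixel >= level else 0 for pixel in row] for row in image]

# Invented 3x3 grayscale slice (0-255 values).
slice_ = [
    [ 10, 120, 200],
    [ 90, 130,  40],
    [255,   0, 128],
]
binary = threshold_slice(slice_, 128)
```

    Because the operation is stateless, each form submission can be processed independently by the server, which is what makes the interface browser- and OS-independent.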

  14. A Visual Database System for Image Analysis on Parallel Computers and its Application to the EOS Amazon Project

    Science.gov (United States)

    Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.

    1996-01-01

    The goal of this task was to create a design and prototype implementation of a database environment particularly suited to handling the image, vision and scientific data associated with NASA's EOS Amazon project. The focus was on a data model and query facilities designed to execute efficiently on parallel computers. A key feature of the environment is an interface which allows a scientist to specify high-level directives about how query execution should occur.

  15. Precise Automatic Image Coregistration Tools to Enable Pixel-Level Change Detection Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Automated detection of land cover changes between multitemporal images (i.e., images captured at different times) has long been a goal of the remote sensing...

  16. High Resolution Multispectral Flow Imaging of Cells with Extended Depth of Field Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Proposed is the development of the extended depth of field (EDF), or confocal-like, imaging capabilities of a breakthrough multispectral high resolution imaging flow...

  18. In-Flight Imaging Systems for Hypervelocity and Re-Entry Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — It is proposed to create a rugged, reliable, compact, standardized imaging system for hypervelocity and re-entry vehicles using sapphire windows, small imagers, and...

  19. PAO Image Gallery = Public Affairs Photos of EROS Projects: 1972 - 2005

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The EROS Image Gallery collection is composed of a wide variety of images ranging from low altitude aircraft to satellite and NASA imagery; oblique photographs and...

  20. Projection-based energy weighting on photon-counting X-ray images in digital subtraction mammography: a feasibility study

    Science.gov (United States)

    Choi, Sung-Hoon; Lee, Seung-Wan; Choi, Yu-Na; Lee, Young-Jin; Kim, Hee-Joung

    2014-03-01

    In digital subtraction mammography, where one image (with contrast medium) is subtracted from another (the anatomical background) in order to observe tumor structure, tumors, which contain more blood vessels than normal tissue, can be distinguished through enhancement of the contrast-to-noise ratio (CNR). In order to improve the CNR, we adopted projection-based energy weighting for iodine solutions with four different concentrations embedded in a breast phantom (50% adipose and 50% glandular tissue). In this study, a Monte Carlo simulation was used to model a 40 mm thick breast phantom, containing 15 and 30 mg/cm3 iodine solutions of two different thicknesses, and an energy-resolving photon-counting system. The input energy spectrum was simulated in a range of 20 to 45 keV in order to reject electronic noise and include the k-edge energy of iodine (33.2 keV). The results showed that projection-based energy weighting improved the CNR by factors of 1.05-1.86 compared to conventional integrating images. Consequently, the CNR of images from digital subtraction mammography can be improved by projection-based energy weighting with photon-counting detectors.
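    The core idea of projection-based energy weighting is that a photon-counting detector records counts per energy bin, and those bins are combined with energy-dependent weights rather than summed equally, as an energy-integrating detector effectively does. The sketch below is a hedged illustration with invented bin energies and counts; the weighting exponent of -3 is a common heuristic in the energy-weighting literature, not the weights derived in this paper's Monte Carlo study.

```python
def weighted_projection(bin_counts, bin_energies, exponent=-3.0):
    """Energy-weighted combination of per-bin photon counts.

    Weights ~ E**exponent (normalized to sum to 1) emphasize low-energy
    bins, where iodine contrast is largest. All inputs are illustrative.
    """
    weights = [e ** exponent for e in bin_energies]
    total = sum(weights)
    return sum((w / total) * c for w, c in zip(weights, bin_counts))

def cnr(signal_mean, background_mean, noise_std):
    """Contrast-to-noise ratio between a region of interest and background."""
    return abs(signal_mean - background_mean) / noise_std

# Invented three-bin example spanning the 20-45 keV range used in the study.
energies = [25.0, 35.0, 45.0]          # bin-center energies in keV
counts_lesion = [80.0, 60.0, 50.0]     # counts behind the iodine region
counts_bg = [100.0, 70.0, 55.0]        # counts behind plain background
lesion = weighted_projection(counts_lesion, energies)
background = weighted_projection(counts_bg, energies)
```

    With the real detector model, the weighted lesion/background projections would then be log-subtracted and their CNR compared against the unweighted (integrating) case.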

  1. Reflections from a Creative Community-Based Participatory Research Project Exploring Health and Body Image with First Nations Girls

    Directory of Open Access Journals (Sweden)

    Jennifer M. Shea PhD

    2013-02-01

    In Canada, Aboriginal peoples often experience a multitude of inequalities compared with the general population, particularly in relation to health (e.g., increased incidence of diabetes). These inequalities are rooted in a negative history of colonization. Decolonizing methodologies recognize these realities and aim to shift the focus from communities being researched to communities being collaborative partners in the research process. This article describes a qualitative community-based participatory research project focused on health and body image with First Nations girls in a Tribal Council region in Western Canada. We discuss our project design and the incorporation of creative methods (e.g., photovoice) to foster integration and collaboration in line with decolonizing methodology principles. This article is both descriptive and reflective, as it summarizes our project and discusses lessons learned from the process, integrating evaluations from the participating girls as well as our reflections as researchers.

  2. Performance of sampling density-weighted and postfiltered density-adapted projection reconstruction in sodium magnetic resonance imaging.

    Science.gov (United States)

    Konstandin, Simon; Nagel, Armin M

    2013-02-01

    Sampling density-weighted apodization projection reconstruction sequences are evaluated for three-dimensional radial imaging. The readout gradients of the sampling density-weighted apodization sequence are designed such that the locally averaged sampling density matches a Hamming filter function. This technique is compared with density-adapted projection reconstruction with nonfiltered and postfiltered image reconstruction. Sampling density-weighted apodization theoretically allows for a 1.28-fold higher signal-to-noise ratio compared with postfiltered density-adapted projection reconstruction sequences, if T(2)* decay is negligible compared with the readout duration T(RO). Simulations of the point-spread functions are performed for monoexponential and biexponential decay to investigate the effects of T(2)* decay on the performance of the different sequences. Postfiltered density-adapted projection reconstruction performs better than sampling density-weighted apodization for large T(RO)/T(2)* ratios [>1.36 (monoexponential decay); >0.35 (biexponential decay with T(2s)*/T(2f)* = 10)], if the signal-to-noise ratio of point-like objects is considered. In conclusion, which of the two sequences is more suitable depends on the readout parameters, the T(2)* relaxation times, and the dimensions of the subject. Copyright © 2012 Wiley Periodicals, Inc.
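    The SNR argument can be made concrete with a short calculation. The sketch below is a simplified 1D analogy with unit-variance noise and an invented sample count: it compares apodizing a uniformly sampled readout with a Hamming window after acquisition against plain averaging, showing that post-filtering discards SNR. It does not reproduce the paper's 3D-radial factor of 1.28, which depends on the actual sampling geometry.

```python
import math

def hamming_window(n):
    """Standard Hamming weights over n readout samples."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def snr_post_filtered(weights):
    # Apodizing after a uniform-density acquisition: the signal scales with
    # sum(w) while white noise scales with sqrt(sum(w^2)).
    return sum(weights) / math.sqrt(sum(w * w for w in weights))

def snr_unfiltered(n):
    # Plain averaging of n unit-variance samples.
    return math.sqrt(n)

n = 256
penalty = snr_unfiltered(n) / snr_post_filtered(hamming_window(n))
# `penalty` > 1: the post-filter pays an SNR cost for its apodization,
# which acquisition-side density weighting aims to avoid by building the
# Hamming shape into the sampling density itself.
```

    In this 1D toy case the penalty comes out near 1.17; the 3D radial geometry analyzed in the paper yields the larger 1.28 factor.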

  3. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
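    As a toy illustration of sparsity-regularized reconstruction from few projections (not the paper's actual gradient-based Douglas-Rachford splitting with wavelet-packet shrinkage), the proximal-gradient sketch below solves min 0.5*||Ax - b||^2 + lam*||x||_1 for a tiny invented system. In the real setting, A would be the tomographic projection operator and the shrinkage step would act on wavelet-packet coefficients rather than directly on pixels.

```python
import math

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def mat_t_vec(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: the shrinkage/denoising step."""
    return [math.copysign(max(abs(u) - t, 0.0), u) for u in v]

def ista(A, b, lam=1e-3, step=0.2, iters=1000):
    """Iterative shrinkage-thresholding for min 0.5||Ax-b||^2 + lam||x||_1.

    `step` must be below 1/L, where L is the largest eigenvalue of A^T A.
    """
    x = [0.0] * len(A[0])
    for _ in range(iters):
        residual = [r - bi for r, bi in zip(matvec(A, x), b)]
        grad = mat_t_vec(A, residual)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, grad)],
                           step * lam)
    return x

# Tiny invented "projection" system whose least-squares solution is [1, 0].
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 0.0, 1.0]
x = ista(A, b)
```

    The same two-step structure (data-fidelity gradient step, then sparsity-promoting shrinkage) underlies the class of algorithms the paper builds on; the splitting scheme and the sparsifying transform are what differ.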

  4. Topography improvements in MEMS DMs for high-contrast, high-resolution imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project will develop and demonstrate an innovative microfabrication process to substantially improve the surface quality achievable in high-resolution...

  5. MR-guided PET motion correction in LOR space using generic projection data for image reconstruction with PRESTO

    Energy Technology Data Exchange (ETDEWEB)

    Scheins, J., E-mail: j.scheins@fz-juelich.de [Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Leo-Brandt-Str., 52425 Jülich (Germany); Ullisch, M.; Tellmann, L.; Weirich, C.; Rota Kops, E.; Herzog, H.; Shah, N.J. [Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Leo-Brandt-Str., 52425 Jülich (Germany)

    2013-02-21

    The BrainPET scanner from Siemens, designed as a hybrid MR/PET system for simultaneous acquisition of both modalities, provides high-resolution PET images with an optimum resolution of 3 mm. However, significant head motion often compromises the achievable image quality, e.g. in neuroreceptor studies of the human brain. This limitation can be overcome by tracking the head motion and accurately correcting the measured Lines-of-Response (LORs). For this purpose, we present a novel method which advantageously combines MR-guided motion tracking with the capabilities of the reconstruction software PRESTO (PET Reconstruction Software Toolkit) to convert motion-corrected LORs into highly accurate generic projection data. In this way, the high-resolution PET images achievable with PRESTO can also be obtained in the presence of severe head motion.

  6. SU-E-J-126: Generation of Fluoroscopic 3D Images Using Single X-Ray Projections on Realistic Modified XCAT Phantom Data.

    Science.gov (United States)

    Mishra, P; Li, R; St James, S; Yue, Y; Mak, R; Berbeco, R; Lewis, J

    2012-06-01

    To simulate the process of generating fluoroscopic 3D treatment images from 4DCT and measured 2D x-ray projections using a realistic modified XCAT phantom based on measured patient 3D tumor trajectories. First, the existing XCAT phantom is adapted to incorporate measured patient lung tumor trajectories. Realistic diaphragm and chest wall motion are automatically generated based on input tumor motion and position, producing synchronized, realistic motion in the phantom. Based on 4DCT generated with the XCAT phantom, we derive patient-specific motion models that are used to generate 3D fluoroscopic images. Patient-specific models are created in two steps: first, the displacement vector fields (DVFs) are obtained through deformable image registration of each phase of 4DCT with respect to a reference image (typically peak-exhale). Each phase is registered to the reference image to obtain (n-1) DVFs. Second, the most salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). Since PCA is a linear decomposition method, all the DVFs can be represented as linear combinations of eigenvectors. Fluoroscopic 3D images are obtained using the projection image to determine optimal weights for the eigenvectors. These weights are determined through iterative optimization of a cost function relating the projection image to the 3D image via the PCA lung motion model and a projection operator. Constructing fluoroscopic 3D images is thus reduced to finding optimal weights for the eigenvectors. Fluoroscopic 3D treatment images were generated using the modified XCAT phantom. The average relative error of the reconstructed image over 30 sec is 0.0457 HU and the standard deviation is 0.0063. The XCAT phantom was modified to produce realistic images by incorporating patient tumor trajectories. The modified XCAT phantom can be used to simulate the process of generating fluoroscopic 3D treatment images from 4DCT and 2D x
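    The weight-fitting step described above (eigenvector weights found by iterative optimization against a measured 2D projection) can be sketched in miniature. Everything below is invented toy data: a 3-component "DVF", two PCA eigenvectors, and a 2-pixel projection operator, with plain gradient descent standing in for the paper's cost-function optimization.

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def fit_weights(P, E, mean_dvf, y, step=0.1, iters=500):
    """Gradient descent on w for min ||P(mean + E w) - y||^2.

    P: projection operator, E: PCA eigenvectors as columns, y: measured
    projection. Returns the fitted eigenvector weights w.
    """
    A = matmul(P, E)               # projection of each eigenvector
    offset = matvec(P, mean_dvf)   # projection of the mean DVF
    At = transpose(A)
    w = [0.0] * len(A[0])
    for _ in range(iters):
        residual = [a + o - yi for a, o, yi in zip(matvec(A, w), offset, y)]
        w = [wi - step * g for wi, g in zip(w, matvec(At, residual))]
    return w

# Invented toy dimensions: 3-component DVF, 2 eigenvectors, 2-pixel projection.
P = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]    # stand-in projection operator
E = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # stand-in PCA eigenvectors (columns)
mean_dvf = [0.1, 0.2, 0.0]
w_true = [0.5, -0.3]
dvf_true = [m + sum(e * wt for e, wt in zip(row, w_true))
            for m, row in zip(mean_dvf, E)]
y = matvec(P, dvf_true)                   # the "measured" 2D projection
w = fit_weights(P, E, mean_dvf, y)
```

    Once w is recovered, the fluoroscopic 3D image follows by applying the reconstructed DVF (mean + E w) to the reference 4DCT phase.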

  7. Final report on LDRD project : single-photon-sensitive imaging detector arrays at 1600 nm.

    Energy Technology Data Exchange (ETDEWEB)

    Childs, Kenton David; Serkland, Darwin Keith; Geib, Kent Martin; Hawkins, Samuel D.; Carroll, Malcolm S.; Klem, John Frederick; Sheng, Josephine Juin-Jye; Patel, Rupal K.; Bolles, Desta; Bauer, Tom M.; Koudelka, Robert

    2006-11-01

    The key need that this project has addressed is a short-wave infrared detector for light detection and ranging (LIDAR) imaging at temperatures greater than 100 K, as desired by nonproliferation and work for other customers. Several novel device structures were fabricated to achieve the desired avalanche photodiode (APD) performance. A primary challenge to achieving high-sensitivity APDs at 1550 nm is that the small-band-gap materials (e.g., InGaAs or Ge) necessary to detect low-energy photons exhibit higher dark counts and higher multiplication noise compared to materials like silicon. To overcome these historical problems, APDs were designed and fabricated using separate absorption and multiplication (SAM) regions. The absorption regions used InGaAs or Ge to leverage these materials' sensitivity at 1550 nm. Geiger-mode detection was chosen to circumvent gain noise issues in the III-V and Ge multiplication regions, while a novel Ge/Si device was built to examine the utility of transferring photoelectrons into a silicon multiplication region. Silicon is known to have very good analog and Geiger-mode multiplication properties. The proposed devices represented a high-risk, high-reward approach. Therefore, one primary goal of this work was to experimentally resolve uncertainty about the novel APD structures. This work specifically examined three different designs. An InGaAs/InAlAs Geiger-mode (GM) structure was proposed for the superior multiplication properties of InAlAs. The hypothesis to be tested in this structure was whether InAlAs really presented an advantage in GM operation. A Ge/Si SAM was proposed, representing the best possible multiplication material (i.e., silicon); however, significant uncertainty existed about both the Ge material quality and the ability to transfer photoelectrons across the Ge/Si interface. Finally, a third pure-germanium GM structure was proposed because bulk germanium has been reported to have better dark count properties. However, significant

  8. Evaluation of dose reduction and image quality in CT colonography: Comparison of low-dose CT with iterative reconstruction and routine-dose CT with filtered back projection

    Energy Technology Data Exchange (ETDEWEB)

    Nagata, Koichi [Kameda Medical Center, Department of Radiology, Kamogawa, Chiba (Japan); Jichi Medical University, Department of Radiology, Tochigi (Japan); National Cancer Center, Cancer Screening Technology Division, Research Center for Cancer Prevention and Screening, Tokyo (Japan); Fujiwara, Masanori; Mogi, Tomohiro; Iida, Nao [Kameda Medical Center Makuhari, Department of Radiology, Chiba (Japan); Kanazawa, Hidenori; Sugimoto, Hideharu [Jichi Medical University, Department of Radiology, Tochigi (Japan); Mitsushima, Toru [Kameda Medical Center Makuhari, Department of Gastroenterology, Chiba (Japan); Lefor, Alan T. [Jichi Medical University, Department of Surgery, Tochigi (Japan)

    2015-01-15

    To prospectively evaluate the radiation dose and image quality comparing low-dose CT colonography (CTC) reconstructed using different levels of iterative reconstruction techniques with routine-dose CTC reconstructed with filtered back projection. Following institutional ethics clearance and informed consent procedures, 210 patients underwent screening CTC using automatic tube current modulation for dual positions. Examinations were performed in the supine position with a routine-dose protocol and in the prone position, randomly applying four different low-dose protocols. Supine images were reconstructed with filtered back projection and prone images with iterative reconstruction. Two blinded observers assessed the image quality of endoluminal images. Image noise was quantitatively assessed by region-of-interest measurements. The mean effective dose in the supine series was 1.88 mSv using routine-dose CTC, compared to 0.92, 0.69, 0.57, and 0.46 mSv at four different low doses in the prone series (p < 0.01). Overall image quality and noise of low-dose CTC with iterative reconstruction were significantly improved compared to routine-dose CTC using filtered back projection. The lowest dose group had image quality comparable to routine-dose images. Low-dose CTC with iterative reconstruction reduces the radiation dose by 48.5 to 75.1 % without image quality degradation compared to routine-dose CTC with filtered back projection. (orig.)

  9. The ANIMAGE project: a multimodal imaging platform for small animal research

    Science.gov (United States)

    Sappey-Marinier, D.; Beuf, O.; Billotey, C.; Chereul, E.; Dupuy, J.; Jeandey, M.; Grenier, D.; Hasserodt, J.; Langlois, J. B.; Lartizien, C.; Mai, W.; Odet, C.; Samarut, J.; Vray, D.; Zimmer, L.; Janier, M.

    2004-07-01

    The advent of the molecular era has generated a revolution in the development of new in vivo imaging techniques to examine the integrative functions of molecules, cells, organ systems and whole organisms. Molecular imaging constitutes a new tool allowing the biologist to characterize, repeatedly and non-invasively, a large number of experimental models developed in rodents. In order to monitor biological processes such as gene expression, normal development, metabolic alterations or medical treatment effects, several methodological and technological challenges must be addressed. Developments are needed in chemistry, to create new radiotracers and contrast agents, and in physics, to adapt medical imaging techniques to the constraints of small-animal investigations. ANIMAGE is a multimodal imaging platform for imaging the structure and function of living systems, using and developing technologies such as autoradiography, ultrasound, positron emission tomography (PET), X-ray computed tomography (CT), and magnetic resonance imaging and spectroscopy (MRI/MRS). The first biological applications and results are presented.

  10. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    Science.gov (United States)

    Rizzo, G.; Batignani, G.; Benkechkache, M. A.; Bettarini, S.; Casarosa, G.; Comotti, D.; Dalla Betta, G.-F.; Fabris, L.; Forti, F.; Grassi, M.; Lodola, L.; Malcovati, P.; Manghisoni, M.; Mendicino, R.; Morsani, F.; Paladino, A.; Pancheri, L.; Paoloni, E.; Ratti, L.; Re, V.; Traversi, G.; Vacchi, C.; Verzellesi, G.; Xu, H.

    2016-07-01

    The INFN PixFEL project is developing the fundamental building blocks for a large-area X-ray imaging camera to be deployed at next generation free electron laser (FEL) facilities with unprecedented intensity. Improvements in performance beyond the state of the art in imaging instrumentation will be explored by adopting advanced technologies such as active edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large-area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine-pitch active edge thick sensor is being optimized to cope with very high intensity photon fluxes, up to 10⁴ photons per pixel, in the range from 1 to 10 keV. A low noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with low power 10 bit analog-to-digital conversion at up to 5 MHz, has been realized at a 110 μm pitch in a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high density memories. In the long run, the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation both in burst mode, as at the European X-FEL, and in continuous mode at the high frame rates anticipated for future FEL facilities.

  11. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, G., E-mail: giuliana.rizzo@pi.infn.it [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Batignani, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Benkechkache, M.A. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); University Constantine 1, Department of Electronics in the Science and Technology Faculty, I-25017, Constantine (Algeria); Bettarini, S.; Casarosa, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Comotti, D. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Dalla Betta, G.-F. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); TIFPA INFN, I-38123 Trento (Italy); Fabris, L. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); Forti, F. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Grassi, M.; Lodola, L.; Malcovati, P. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Manghisoni, M. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); and others

    2016-07-11

    The INFN PixFEL project is developing the fundamental building blocks for a large-area X-ray imaging camera to be deployed at next generation free electron laser (FEL) facilities with unprecedented intensity. Improvements in performance beyond the state of the art in imaging instrumentation will be explored by adopting advanced technologies such as active edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large-area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine-pitch active edge thick sensor is being optimized to cope with very high intensity photon fluxes, up to 10⁴ photons per pixel, in the range from 1 to 10 keV. A low noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with low power 10 bit analog-to-digital conversion at up to 5 MHz, has been realized at a 110 μm pitch in a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high density memories. In the long run, the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation both in burst mode, as at the European X-FEL, and in continuous mode at the high frame rates anticipated for future FEL facilities.

  12. Novel near-to-mid IR imaging sensors without cooling Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Boston Applied Technologies, Inc (BATi), together with Kent State University (KSU), proposes to develop a high sensitivity infrared (IR) imaging sensor without...

  13. Conjugate Etalon Spectral Imager (CESI) & Scanning Etalon Methane Mapper (SEMM) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Conjugate Etalon Spectral Imaging (CESI) concept enables the development of miniature instruments with high spectral resolution, suitable for LEO missions aboard...

  14. Processor for Real-Time Atmospheric Compensation in Long-Range Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Long-range imaging is a critical component to many NASA applications including range surveillance, launch tracking, and astronomical observation. However,...

  15. Image-domain sampling properties of the Hotelling Observer in CT using filtered back-projection

    Science.gov (United States)

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2015-03-01

    The Hotelling Observer (HO) [1], along with its channelized variants [2], has been proposed for image quality evaluation in x-ray CT [3,4]. In this work, we investigate HO performance for a detection task in parallel-beam FBP as a function of two image-domain sampling parameters, namely pixel size and field-of-view. These two parameters are of central importance in adapting HO methods to use in CT, since the large number of pixels in a single image makes direct computation of HO performance for a full image infeasible in most cases. Reduction of the number of image pixels and/or restriction of the image to a region-of-interest (ROI) has the potential to make direct computation of HO statistics feasible in CT, provided that the signal and noise properties lead to redundant information in some regions of the image. For small signals, we hypothesize that reduction of image pixel size and enlargement of the image field-of-view are approximately equivalent means of gaining additional information relevant to a detection task. The rationale for this hypothesis is that the backprojection operation in FBP introduces long range correlations so that, for small signals, the reconstructed signal outside of a small ROI is not linearly independent of the signal within the ROI. In this work, we perform a preliminary investigation of this hypothesis by sweeping these two sampling parameters and computing HO performance for a signal detection task.
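
    The underlying figure of merit, not spelled out in the abstract, is the standard HO detectability SNR² = Δs̄ᵀ K⁻¹ Δs̄, where Δs̄ is the mean signal-difference image and K the image covariance matrix; computing K⁻¹ scales cubically with the pixel count, which is why pixel size and ROI restriction matter. A minimal numerical sketch with hypothetical toy data (not the authors' CT simulation):

    ```python
    import numpy as np

    def hotelling_snr(delta_s: np.ndarray, cov: np.ndarray) -> float:
        """Hotelling Observer detectability: sqrt(delta_s^T K^{-1} delta_s)."""
        w = np.linalg.solve(cov, delta_s)   # HO template K^{-1} delta_s
        return float(np.sqrt(delta_s @ w))

    # Toy example: a 2-pixel "image" with uncorrelated unit-variance noise.
    delta_s = np.array([3.0, 4.0])
    cov = np.eye(2)
    print(hotelling_snr(delta_s, cov))  # 5.0
    ```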

  16. Traumatic Brain Injury Diffusion Magnetic Resonance Imaging Research Roadmap Development Project

    Science.gov (United States)

    2011-10-01

    [Fragmentary abstract; only scattered fragments are recoverable: neuroinformatics tools for inspecting and modifying DICOM files (ImageJ, an NIH image processing tool; Neuroinformatics Research Group, Washington University School of Medicine), and author affiliations including the Brain Research Imaging Center and Department of Radiology, The University of Chicago, Chicago, IL, and the Department of Radiology, Xuhui Central ...]

  17. Color intensity projections provides a fast, simple and robust method of summarizing the grayscale images from a renal scan in a single color image

    CERN Document Server

    Cover, Keith S

    2008-01-01

    To assess its usefulness, the peak version of color intensity projections (CIPs) was used to display a summary of the grayscale images composing a renogram as a single color image. Methods: For each pixel in a renogram, the time point with the maximum intensity was used to set the hue of the corresponding pixel in the CIPs image. The hue ranged over red-yellow-green-light blue-blue, with red representing the earliest time. Results: For subjects with normal-appearing kidneys, the injection site appears in red, the kidneys in red-yellow, and the bladder in green-blue. A late-filling kidney typically appeared greener or bluer than a normal kidney, indicating that it reached its peak intensity later than normal. Conclusions: Summarizing the time and intensity information in a single image promises to speed the initial assessment of patients by less experienced interpreters and should also provide a valuable training tool.
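
    The per-pixel mapping described here can be sketched as follows (a minimal illustration assuming a simple linear time-to-hue mapping from red at the first frame toward blue at the last, with brightness set by the peak intensity; the exact hue scale used in the paper may differ):

    ```python
    import colorsys
    import numpy as np

    def cip_peak_image(stack: np.ndarray) -> np.ndarray:
        """Collapse a (T, H, W) grayscale time series into one (H, W, 3) RGB image.

        Hue encodes the time of peak intensity (early = red, late = blue);
        brightness encodes the peak intensity itself.
        """
        t_peak = stack.argmax(axis=0)                  # (H, W) time of maximum
        peak = stack.max(axis=0)                       # (H, W) maximum intensity
        val = peak / peak.max() if peak.max() > 0 else peak
        n_frames = stack.shape[0]
        hue = (t_peak / max(n_frames - 1, 1)) * (240.0 / 360.0)  # red -> blue
        rgb = np.empty(stack.shape[1:] + (3,))
        for i in range(stack.shape[1]):
            for j in range(stack.shape[2]):
                rgb[i, j] = colorsys.hsv_to_rgb(hue[i, j], 1.0, val[i, j])
        return rgb

    # A 3-frame, 1x2 toy renogram: left pixel peaks early, right pixel peaks late.
    stack = np.array([[[9.0, 1.0]], [[2.0, 2.0]], [[1.0, 9.0]]])
    img = cip_peak_image(stack)
    print(img[0, 0], img[0, 1])  # left pixel ~red, right pixel ~blue
    ```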

  18. Deployment of a Prototype Plant GFP Imager at the Arthur Clarke Mars Greenhouse of the Haughton Mars Project.

    Science.gov (United States)

    Paul, Anna-Lisa; Bamsey, Matthew; Berinstain, Alain; Braham, Stephen; Neron, Philip; Murdoch, Trevor; Graham, Thomas; Ferl, Robert J

    2008-04-18

    The use of engineered plants as biosensors has made elegant strides in the past decades, providing keen insights into the health of plants in general and particularly in the nature and cellular location of stress responses. However, most of the analytical procedures involve laboratory examination of the biosensor plants. With the advent of the green fluorescence protein (GFP) as a biosensor molecule, it became at least theoretically possible for analyses of gene expression to occur telemetrically, with the gene expression information of the plant delivered to the investigator over large distances simply as properly processed fluorescence images. Spaceflight and other extraterrestrial environments provide unique challenges to plant life, challenges that often require changes at the gene expression level to accommodate adaptation and survival. Having previously deployed transgenic plant biosensors to evaluate responses to orbital spaceflight, we wished to develop the plants and especially the imaging devices required to conduct such experiments robotically, without operator intervention, within extraterrestrial environments. This requires the development of an autonomous and remotely operated plant GFP imaging system and concomitant development of the communications infrastructure to manage dataflow from the imaging device. Here we report the results of deploying a prototype GFP imaging system within the Arthur Clarke Mars Greenhouse (ACMG), an autonomously operated greenhouse located within the Haughton Mars Project in the Canadian High Arctic. The results both demonstrate the applicability of the fundamental GFP biosensor technology and highlight the difficulties in collecting and managing telemetric data from challenging deployment environments.

  19. Deployment of a Prototype Plant GFP Imager at the Arthur Clarke Mars Greenhouse of the Haughton Mars Project

    Directory of Open Access Journals (Sweden)

    Robert J. Ferl

    2008-04-01

    Full Text Available The use of engineered plants as biosensors has made elegant strides in the past decades, providing keen insights into the health of plants in general and particularly in the nature and cellular location of stress responses. However, most of the analytical procedures involve laboratory examination of the biosensor plants. With the advent of the green fluorescence protein (GFP) as a biosensor molecule, it became at least theoretically possible for analyses of gene expression to occur telemetrically, with the gene expression information of the plant delivered to the investigator over large distances simply as properly processed fluorescence images. Spaceflight and other extraterrestrial environments provide unique challenges to plant life, challenges that often require changes at the gene expression level to accommodate adaptation and survival. Having previously deployed transgenic plant biosensors to evaluate responses to orbital spaceflight, we wished to develop the plants and especially the imaging devices required to conduct such experiments robotically, without operator intervention, within extraterrestrial environments. This requires the development of an autonomous and remotely operated plant GFP imaging system and concomitant development of the communications infrastructure to manage dataflow from the imaging device. Here we report the results of deploying a prototype GFP imaging system within the Arthur Clarke Mars Greenhouse (ACMG), an autonomously operated greenhouse located within the Haughton Mars Project in the Canadian High Arctic. The results both demonstrate the applicability of the fundamental GFP biosensor technology and highlight the difficulties in collecting and managing telemetric data from challenging deployment environments.

  20. Postprocessing of interframe coded images based on convex projection and regularization

    Science.gov (United States)

    Joung, Shichang; Kim, Sungjin; Paik, Joon-Ki

    2000-04-01

    In order to reduce blocking artifacts in inter-frame coded images, we propose a new image restoration algorithm which directly processes differential images before reconstruction. We note that blocking artifacts in inter-frame coded images are caused by both the 8 X 8 DCT and 16 X 16 macroblock-based motion compensation, while those of intra-coded images are caused by the 8 X 8 DCT only. Based on this observation, we propose a new degradation model for differential images and a corresponding restoration algorithm that utilizes additional constraints and convex sets for discontinuities inside blocks. The proposed restoration algorithm is a modified version of standard regularization that incorporates spatially adaptive lowpass filtering, taking edge directions into account by utilizing a subset of the DCT coefficients. Most video coding standards adopt a hybrid structure of block-based motion compensation and the block discrete cosine transform (BDCT). For this reason, blocking artifacts occur both on block boundaries and in block interiors. To remove both kinds of blocking artifacts more completely, the restored differential image must satisfy two constraints, namely directional discontinuities on block boundaries and in block interiors. These constraints have been used for defining the convex sets for restoring differential images.
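
    The convex-set machinery referred to here is the classic projections-onto-convex-sets (POCS) idea: each constraint defines a convex set, and alternating projections converge to a point in their intersection. A generic toy illustration with two convex sets in R² (not the paper's specific constraint sets):

    ```python
    import numpy as np

    def project_line(x: np.ndarray) -> np.ndarray:
        """Project onto the convex set {(a, b): a == b} (the line y = x)."""
        m = (x[0] + x[1]) / 2.0
        return np.array([m, m])

    def project_box(x: np.ndarray) -> np.ndarray:
        """Project onto the convex set [0, 1] x [0, 1]."""
        return np.clip(x, 0.0, 1.0)

    def pocs(x: np.ndarray, iters: int = 50) -> np.ndarray:
        """Alternate projections; converges to a point in the intersection."""
        for _ in range(iters):
            x = project_box(project_line(x))
        return x

    print(pocs(np.array([3.0, -2.0])))  # -> [0.5 0.5]
    ```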

  1. Shape and motion reconstruction from 3D-to-1D orthographically projected data via object-image relations.

    Science.gov (United States)

    Ferrara, Matthew; Arnold, Gregory; Stuff, Mark

    2009-10-01

    This paper describes an invariant-based shape- and motion reconstruction algorithm for 3D-to-1D orthographically projected range data taken from unknown viewpoints. The algorithm exploits the object-image relation that arises in echo-based range data and represents a simplification and unification of previous work in the literature. Unlike one proposed approach, this method does not require uniqueness constraints, which makes its algorithmic form independent of the translation removal process (centroid removal, range alignment, etc.). The new algorithm, which simultaneously incorporates every projection and does not use an initialization in the optimization process, requires fewer calculations and is more straightforward than the previous approach. Additionally, the new algorithm is shown to be the natural extension of the approach developed by Tomasi and Kanade for 3D-to-2D orthographically projected data and is applied to a realistic inverse synthetic aperture radar imaging scenario, as well as experiments with varying amounts of aperture diversity and noise.
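
    For the 3D-to-2D case referenced above, the Tomasi-Kanade result is that the centered measurement matrix of orthographic projections has rank at most 3, so shape and motion factor out of an SVD (up to an affine ambiguity). A minimal sketch of that rank property with synthetic data (an illustration of the 3D-to-2D baseline, not the authors' 3D-to-1D range algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    P, F = 20, 8                      # number of points and frames
    shape = rng.normal(size=(3, P))   # 3D points in object coordinates

    # Stack 2D orthographic projections from random rotations into W (2F x P).
    rows = []
    for _ in range(F):
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        rows.append(q[:2] @ shape)    # two rows of a rotation: orthographic camera
    W = np.vstack(rows)
    W -= W.mean(axis=1, keepdims=True)  # remove per-frame centroids (translation)

    # Rank-3 factorization: W ~ M S with motion M (2F x 3) and shape S (3 x P).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * s[:3]
    S = Vt[:3]
    print(np.linalg.norm(W - M @ S))  # ~0: centered W is (numerically) rank 3
    ```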

  2. Virtual autopsy using imaging: bridging radiologic and forensic sciences. A review of the Virtopsy and similar projects.

    Science.gov (United States)

    Bolliger, Stephan A; Thali, Michael J; Ross, Steffen; Buck, Ursula; Naether, Silvio; Vock, Peter

    2008-02-01

    The transdisciplinary research project Virtopsy is dedicated to implementing modern imaging techniques into forensic medicine and pathology in order to augment current examination techniques or even to offer alternative methods. Our project relies on three pillars: three-dimensional (3D) surface scanning for the documentation of body surfaces, and both multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) to visualise the internal body. Three-dimensional surface scanning has delivered remarkable results in the past in the 3D documentation of patterned injuries and of objects of forensic interest as well as whole crime scenes. Imaging of the interior of corpses is performed using MSCT and/or MRI. MRI, in addition, is also well suited to the examination of surviving victims of assault, especially choking, and helps visualise internal injuries not seen at external examination of the victim. Apart from the accuracy and three-dimensionality that conventional documentations lack, these techniques allow for the re-examination of the corpse and the crime scene even decades later, after burial of the corpse and liberation of the crime scene. We believe that this virtual, non-invasive or minimally invasive approach will improve forensic medicine in the near future.

  3. A java viewer to publish digital imaging and communications in medicine (DICOM) radiologic images on the world wide web

    OpenAIRE

    Setti, E.; R. Musumeci

    2001-01-01

    The World Wide Web is an exciting service that allows one to publish electronic documents made of text and images on the Internet. Client software called a web browser can access these documents, display them, and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic ...

  4. Images

    Data.gov (United States)

    National Aeronautics and Space Administration — Images for the website main pages and all configurations. The upload and access points for the other images are: Website Template RSW images BSCW Images HIRENASD...

  5. Study on the Markup Ratio of Price-cost of Bamboo%竹材的价格—成本加成率研究

    Institute of Scientific and Technical Information of China (English)

    张敏新; 肖平

    2001-01-01

    The price-cost markup ratio is often used as an indicator of the performance of a company or an industry, and to some extent reflects its market competitiveness. The authors conduct an empirical study of the price-cost markup ratio of bamboo production, based on historical forest-product cost survey data from Jiangxi Province. They find that the markup ratio is rather high, and that rural labor prices and economic growth have a major impact on its level and variation.
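
    As a point of reference, the price-cost markup ratio can be computed as below. Definitions vary across studies (some divide by price rather than cost); the convention and the numbers here are illustrative assumptions, not taken from the study.

    ```python
    def markup_ratio(price: float, cost: float) -> float:
        """Price-cost markup ratio as (price - cost) / cost.

        Assumption: cost is used as the denominator; Lerner-style indices
        instead use (price - cost) / price.
        """
        return (price - cost) / cost

    # Hypothetical numbers for illustration only:
    print(markup_ratio(15.0, 10.0))  # 0.5, i.e. a 50% markup over cost
    ```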

  6. Applying full polarization A-Projection to very wide field of view instruments: An imager for LOFAR

    Science.gov (United States)

    Tasse, C.; van der Tol, S.; van Zwieten, J.; van Diepen, G.; Bhatnagar, S.

    2013-05-01

    The required high sensitivities and large fields of view of the new generation of radio interferometers impose high dynamic ranges, e.g., ~1:10⁶ to 1:10⁸ for the Square Kilometre Array (SKA). The main problem in achieving these high ranges is the calibration and correction of direction-dependent effects (DDE) that can affect the electromagnetic field (antenna beams, ionosphere, Faraday rotation, etc.). It has already been shown that A-Projection is a fast and accurate algorithm that can potentially correct for any given DDE in the imaging step. With its very wide field of view, low operating frequency (~30-250 MHz), long baselines, and complex station-dependent beam patterns, the LOw Frequency ARray (LOFAR) is certainly the most complex SKA pathfinder instrument. In this paper we present implementations of A-Projection in LOFAR that can deal with non-diagonal Mueller matrices. The algorithm is designed to correct for all DDE, including individual antennas, projection of the dipoles on the sky, beam forming, and ionospheric effects. We describe a few important algorithmic optimizations related to LOFAR's architecture that allowed us to build a fast imager. Based on simulated datasets we show that A-Projection can dramatically improve the dynamic range for both phased array beams and ionospheric effects. However, certain problems associated with the calibration of DDE remain (especially ionospheric effects), and the effect of the algorithm on real LOFAR survey data still needs to be demonstrated. We will be able to use this algorithm to construct the deepest extragalactic surveys, comprising hundreds of days of integration.
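
    The essence of the direction-dependent correction can be seen in a toy 2x2 Jones-matrix example: each station's DDE multiplies the true signal, corrupting the measured coherency, and the imaging step must apply the inverse factors per direction. This is a deliberately simplified sketch with hypothetical matrices (no gridding, FFT, or w-term, which are where A-Projection's real work happens):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # True 2x2 sky coherency (full polarization) in one direction.
    B_true = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

    # Direction-dependent Jones matrices for the two stations of a baseline.
    Jp = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Jq = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

    # Measurement equation: V = Jp B Jq^H (the DDE corrupts the visibility).
    V = Jp @ B_true @ Jq.conj().T

    # A-Projection-style correction: undo the Jones factors for this direction.
    B_rec = np.linalg.inv(Jp) @ V @ np.linalg.inv(Jq.conj().T)
    print(np.allclose(B_rec, B_true))  # True
    ```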

  7. True Color Orthorectified Image for Saugus Ironworks National Historical Site Vegetation Mapping Project

    Data.gov (United States)

    National Park Service, Department of the Interior — Orthorectified true color image of Saugus Ironworks National Historical Site. Sanborn Colorado L.L.C. of Colorado Springs, CO, flew the photography in April 2005....

  8. Democratizing an electroluminescence imaging apparatus and analytics project for widespread data acquisition in photovoltaic materials.

    Science.gov (United States)

    Fada, Justin S; Wheeler, Nicholas R; Zabiyaka, Davis; Goel, Nikhil; Peshek, Timothy J; French, Roger H

    2016-08-01

    We present a description of an electroluminescence (EL) apparatus, easily sourced from commercially available components, with a quantitative image processing platform that demonstrates feasibility for the widespread utility of EL imaging as a characterization tool. We validated our system using a Gage R&R analysis to find a variance contribution by the measurement system of 80.56%, which is typically unacceptable, but through quantitative image processing and development of correction factors a variance contribution by the measurement system of 2.41% was obtained. We further validated the system by quantifying the signal-to-noise ratio (SNR) and found values consistent with other systems published in the literature, at SNR values of 10-100, albeit at exposure times of greater than 1 s compared to 10 ms for other systems. This SNR value range is acceptable for image feature recognition, providing the opportunity for widespread data acquisition and large scale data analytics of photovoltaics.
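
    An SNR figure of the kind quoted can be estimated in the usual way from a signal frame and a dark frame; a minimal sketch with synthetic arrays (one common convention, not the authors' pipeline):

    ```python
    import numpy as np

    def el_snr(signal_frame: np.ndarray, dark_frame: np.ndarray) -> float:
        """Estimate SNR as (mean signal - mean dark) / std of the dark frame."""
        noise = dark_frame.std()
        return float((signal_frame.mean() - dark_frame.mean()) / noise)

    # Synthetic example: uniform EL signal of 11 counts over a dark level of 1 +/- 1.
    dark = np.array([0.0, 2.0, 0.0, 2.0])   # mean 1, std 1
    signal = np.full(4, 11.0)
    print(el_snr(signal, dark))  # 10.0
    ```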

  9. The First JFET-based Silicon Carbide Active Pixel Sensor UV Imager Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Solar-blind ultraviolet (UV) imaging is critically important in the fields of space astronomy, national defense, and bio-chemistry. United Silicon Carbide, Inc....

  10. Bandwidth Controllable Tunable Filter for Hyper-/Multi-Spectral Imager Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This SBIR Phase I proposal introduces a fast speed bandwidth controllable tunable filter for hyper-/multi-spectral (HS/MS) imagers. It dynamically passes a variable...

  11. In-Situ / In-Flight Detection of Fluorescent Proteins Using Imaging Spectroscopy Sensors Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposal addresses technologies relevant to NASA's new Vision for Space Explorations in the areas of robotics, teleoperations, and macro and micro imaging...

  12. High Spectral Resolution, High Cadence, Imaging X-ray Microcalorimeters for Solar Physics - Phase 2 Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Microcalorimeter x-ray instruments are non-dispersive, high spectral resolution, broad-band, high cadence imaging spectrometers. We have been developing these...

  13. Airborne Thematic Thermal InfraRed and Electro-Optical Imaging System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is an advanced Airborne Thematic Thermal InfraRed and Electro-Optical Imaging System (ATTIREOIS). ATTIREOIS sensor payload consists of two sets of...

  14. Wide Field-of-View (FOV) Soft X-Ray Imager Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Wide Field-of-View (FOV) Soft X-Ray Imager proposes to be a state-of-art instrument with applications for numerous heliospheric and planetary...

  15. High-Sensitivity Semiconductor Photocathodes for Space-Born UV Photon-Counting and Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Many UV photon-counting and imaging applications, including space-borne astronomy, missile tracking and guidance, UV spectroscopy for chemical/biological...

  16. The First JFET-Based Silicon Carbide Active Pixel Sensor UV Imager Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Solar-blind ultraviolet (UV) imaging is needed in the fields of astronomy, national defense, and bio-chemistry. United Silicon Carbide, Inc. proposes to develop a...

  17. Democratizing an electroluminescence imaging apparatus and analytics project for widespread data acquisition in photovoltaic materials

    Science.gov (United States)

    Fada, Justin S.; Wheeler, Nicholas R.; Zabiyaka, Davis; Goel, Nikhil; Peshek, Timothy J.; French, Roger H.

    2016-08-01

    We present a description of an electroluminescence (EL) apparatus, easily sourced from commercially available components, with a quantitative image processing platform that demonstrates feasibility for the widespread utility of EL imaging as a characterization tool. We validated our system using a Gage R&R analysis to find a variance contribution by the measurement system of 80.56%, which is typically unacceptable, but through quantitative image processing and development of correction factors a variance contribution by the measurement system of 2.41% was obtained. We further validated the system by quantifying the signal-to-noise ratio (SNR) and found values consistent with other systems published in the literature, at SNR values of 10-100, albeit at exposure times of greater than 1 s compared to 10 ms for other systems. This SNR value range is acceptable for image feature recognition, providing the opportunity for widespread data acquisition and large scale data analytics of photovoltaics.

  18. Precise automatic image coregistration tools to enable pixel-level change detection Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Automated detection of land cover changes between multitemporal images has long been a goal of the remote sensing discipline. Most research in this area has focused...

  19. On-Chip hyperspetral imaging system for portable IR spectroscopy applications Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Hyperspectral middlewave infrared and longwave infrared (MWIR/LWIR) imaging systems capable of obtaining hundreds of narrow band (10-15 nm) spectral information of...

  20. US Participation in the Solar Orbiter Multi Element Telescope for Imaging and Spectroscopy (METIS) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Multi Element Telescope for Imaging and Spectroscopy, METIS, investigation has been conceived to perform off-limb and near-Sun coronagraphy and is motivated by...