WorldWideScience

Sample records for image markup project

  1. The caBIG annotation and image Markup project.

    Science.gov (United States)

    Channin, David S; Mongkolwat, Pattanasak; Kleper, Vladimir; Sepukar, Kastubh; Rubin, Daniel L

    2010-04-01

Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of metadata about who acquired the image, where, and how, it says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with either. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.
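AIM artifacts are serialized as structured documents (XML or DICOM SR). As a loose, hypothetical sketch of the idea, one explanatory annotation plus the graphical markup that depicts it, kept together in a single machine-readable document; the element names below are invented for illustration and are not the actual AIM schema:

```python
import xml.etree.ElementTree as ET

def build_annotation(observation, shape, points):
    """Serialize a minimal, AIM-style annotation: a textual observation
    plus a graphical markup placed over image pixel data.
    Element names are illustrative only, not the real AIM schema."""
    root = ET.Element("ImageAnnotation")
    obs = ET.SubElement(root, "Observation")
    obs.text = observation
    markup = ET.SubElement(root, "Markup", {"shape": shape})
    for x, y in points:
        ET.SubElement(markup, "Point", {"x": str(x), "y": str(y)})
    return ET.tostring(root, encoding="unicode")

def read_observation(xml_doc):
    """Recover the observation text from a serialized annotation."""
    return ET.fromstring(xml_doc).findtext("Observation")

doc = build_annotation("suspicious mass, lobulated margin",
                       "polyline", [(10, 12), (40, 12), (25, 30)])
```

Round-tripping the document (serialize, then parse the observation back out) is the kind of computation the abstract notes is hard when annotations live in free text or proprietary overlays.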

  2. Managing and Querying Image Annotation and Markup in XML

    Science.gov (United States)

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standard based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through native extension of XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid. PMID:21218167
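The paper extends XQuery natively; as a rough stand-in using only Python's standard library, the same style of query (find annotations carrying a coded finding) can be sketched with ElementTree's limited XPath subset. The document structure here is invented for illustration, not the AIM model:

```python
import xml.etree.ElementTree as ET

# Three toy annotation documents (invented structure, not the AIM schema).
docs = [
    '<Annotation><Finding code="mass"/><Markup shape="circle"/></Annotation>',
    '<Annotation><Finding code="nodule"/><Markup shape="polyline"/></Annotation>',
    '<Annotation><Finding code="mass"/><Markup shape="polyline"/></Annotation>',
]

def find_by_code(xml_docs, code):
    """Return indices of documents whose Finding carries the given code,
    using ElementTree's XPath subset (a stand-in for a real XQuery engine)."""
    hits = []
    for i, doc in enumerate(xml_docs):
        root = ET.fromstring(doc)
        if root.find(f".//Finding[@code='{code}']") is not None:
            hits.append(i)
    return hits
```

A native XML database would index such predicates rather than scanning every document, but the query shape is the same.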

  4. Application of whole slide image markup and annotation for pathologist knowledge capture.

    Science.gov (United States)

    Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H

    2013-01-01

The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μm to less than 4 μm in the x-axis and from 17 μm to 6 μm in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
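Trilateration recovers a point's coordinates from its distances to fixed reference landmarks. A minimal planar sketch with invented coordinates (the paper works on whole slide images within a vendor platform; this is only the geometric core):

```python
def trilaterate(p1, p2, p3, r1, r2, r3):
    """Locate a point from its distances to three fixed reference points.
    Subtracting the circle equations pairwise cancels the quadratic terms
    and leaves a 2x2 linear system."""
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

x, y = trilaterate((0, 0), (6, 0), (0, 8), 5.0, 5.0, 5.0)
```

With reference points (0, 0), (6, 0), (0, 8) and all three distances equal to 5, the recovered point is (3, 4).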

  5. Informatics in radiology: An open-source and open-access cancer biomedical informatics grid annotation and image markup template builder.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Channin, David S; Kleper, Vladimir; Rubin, Daniel L

    2012-01-01

    In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and image markup (AIM), a project supported by the National Cancer Institute's cancer biomedical informatics grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.

  6. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel

    2014-12-01

Knowledge contained within in vivo imaging, whether annotated by human experts or computer programs, is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institutes of Health's (NIH) National Cancer Institute's (NCI) cancer Biomedical Informatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.

  7. iPad: Semantic annotation and markup of radiological images.

    Science.gov (United States)

    Rubin, Daniel L; Rodriguez, Cesar; Shah, Priyanka; Beaulieu, Chris

    2008-11-06

Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.

  8. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    Science.gov (United States)

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
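The core design, spectra in a binary file and metadata in XML linked by offsets and a shared UUID, can be sketched in a few lines. This is an illustrative layout only; real imzML files follow the mzML-derived schema and controlled vocabulary:

```python
import struct
import uuid
import xml.etree.ElementTree as ET

def write_pair(spectra):
    """Write an imzML-like pair: spectra packed into one binary blob, and
    an XML metadata document holding per-spectrum offsets plus a shared
    UUID. (Invented layout, not the real imzML schema.)"""
    uid = str(uuid.uuid4())
    blob = bytearray()
    root = ET.Element("imagingRun", {"uuid": uid})
    for (x, y), values in spectra:
        offset = len(blob)
        blob += struct.pack(f"<{len(values)}f", *values)  # little-endian float32
        ET.SubElement(root, "spectrum", {
            "x": str(x), "y": str(y),
            "offset": str(offset), "length": str(len(values)),
        })
    return ET.tostring(root, encoding="unicode"), bytes(blob), uid

def read_spectrum(xml_doc, blob, index):
    """Use the offset stored in the XML part to pull one spectrum back
    out of the binary part."""
    spec = ET.fromstring(xml_doc).findall("spectrum")[index]
    off, n = int(spec.get("offset")), int(spec.get("length"))
    return list(struct.unpack_from(f"<{n}f", blob, off))

xml_doc, blob, uid = write_pair([((0, 0), [1.0, 2.0]),
                                 ((0, 1), [3.0, 4.0, 5.0])])
```

Keeping the bulky spectral arrays out of the XML is what makes the metadata file small enough to handle flexibly, as the abstract describes.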

  9. Markups and Exporting Behavior

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic Michel Patrick

    2012-01-01

In this paper, we develop a method to estimate markups using plant-level production data. Our approach relies on cost-minimizing producers and the existence of at least one variable input of production. The suggested empirical framework relies on the estimation of a production function and provides estimates of plant-level markups without specifying how firms compete in the product market. We rely on our method to explore the relationship between markups and export behavior. We find that markups are estimated significantly higher when controlling for unobserved productivity; that exporters charge, on average, higher markups; and that markups increase upon export entry.
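The key identity behind this approach (stated informally here, not as the paper's full econometric procedure) is that for a cost-minimizing producer, the markup equals a variable input's output elasticity divided by that input's share of revenue. A toy computation with invented numbers; in practice the elasticity comes from an estimated production function:

```python
def markup(output_elasticity, input_expenditure, revenue):
    """Markup of price over marginal cost for a cost-minimizing producer
    with a variable input: mu = theta / alpha, where theta is the input's
    output elasticity and alpha its share of revenue."""
    alpha = input_expenditure / revenue  # revenue share of the variable input
    return output_elasticity / alpha

mu = markup(output_elasticity=0.6, input_expenditure=40.0, revenue=100.0)
```

With an elasticity of 0.6 and an input revenue share of 0.4, the implied markup is 1.5, i.e. price 50% above marginal cost.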

  10. A Leaner, Meaner Markup Language.

    Science.gov (United States)

    Online & CD-ROM Review, 1997

    1997-01-01

In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of Standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…

  11. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    Science.gov (United States)

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.

  12. GOATS Image Projection Component

    Science.gov (United States)

    Haber, Benjamin M.; Green, Joseph J.

    2011-01-01

When doing mission analysis and design of an imaging system in orbit around the Earth, answering the fundamental question of imaging performance requires an understanding of the image products that will be produced by the imaging system. The GOATS software is a series of MATLAB functions that provide geometric image projections. Unique features of the software include function modularity, a standard MATLAB interface, easy-to-understand first-principles-based analysis, and the ability to perform geometric image projections of framing-type imaging systems. The software modules are created for maximum analysis utility, and can all be used independently for many varied analysis tasks, or used in conjunction with other orbit analysis tools.
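A geometric image projection at its simplest is a perspective (pinhole) mapping from a 3D scene point, expressed in the camera frame, onto the image plane. The sketch below is a generic toy version in Python, not the actual GOATS MATLAB interface:

```python
def project_point(point, focal_length):
    """Perspective (pinhole) projection of a 3D point in the camera frame
    (z is the distance along the boresight) onto the image plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal_length * x / z, focal_length * y / z)

# A ground feature 100 m right and 50 m up of the boresight at 1000 m range,
# seen through a camera with a 2 m effective focal length (invented numbers).
u, v = project_point((100.0, 50.0, 1000.0), focal_length=2.0)
```

A framing-type imager applies this mapping to a whole grid of scene points at one instant, which is what produces the projected image footprint used in mission analysis.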

  13. The geometry description markup language

    International Nuclear Information System (INIS)

    Chytracek, R.

    2001-01-01

Currently, a lot of effort is being put into designing complex detectors. A number of simulation and reconstruction frameworks and applications have been developed with the aim to make this job easier. A very important role in this activity is played by the geometry description of the detector apparatus layout and its working environment. However, no real common approach to represent geometry data is available and such data can be found in various forms starting from custom semi-structured text files, source code (C/C++/FORTRAN), to XML and database solutions. The XML (Extensible Markup Language) has proven to provide an interesting approach for describing detector geometries, with several different but incompatible XML-based solutions existing. Therefore, interoperability and geometry data exchange among different frameworks is not possible at present. The author introduces a markup language for geometry descriptions. Its aim is to define a common approach for sharing and exchanging of geometry description data. Its requirements and design have been driven by experience and user feedback from existing projects which have their geometry description in XML.

  14. Treatment of Markup in Statistical Machine Translation

    OpenAIRE

    Müller, Mathias

    2017-01-01

    We present work on handling XML markup in Statistical Machine Translation (SMT). The methods we propose can be used to effectively preserve markup (for instance inline formatting or structure) and to place markup correctly in a machine-translated segment. We evaluate our approaches with parallel data that naturally contains markup or where markup was inserted to create synthetic examples. In our experiments, hybrid reinsertion has proven the most accurate method to handle markup, while alignm...
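Alignment-based reinsertion, one of the strategies the abstract compares, can be caricatured in a few lines: strip inline tags before translation, remember which source tokens carried them, then re-wrap the aligned target tokens. The tag syntax and the identity alignment below are invented for illustration:

```python
import re

def strip_markup(tagged_tokens):
    """Split tokens like '<b>Haus</b>' into plain tokens plus a record of
    which token positions carried which tag."""
    plain, tags = [], {}
    for i, tok in enumerate(tagged_tokens):
        m = re.fullmatch(r"<(\w+)>(.*)</\1>", tok)
        if m:
            tags[i] = m.group(1)
            plain.append(m.group(2))
        else:
            plain.append(tok)
    return plain, tags

def reinsert(target_tokens, tags, alignment):
    """Re-wrap each remembered tag around the target token that the
    source-to-target word alignment points at."""
    out = list(target_tokens)
    for src_pos, tag in tags.items():
        tgt_pos = alignment[src_pos]
        out[tgt_pos] = f"<{tag}>{out[tgt_pos]}</{tag}>"
    return out

plain, tags = strip_markup(["das", "<b>Haus</b>", "ist", "rot"])
result = reinsert(["the", "house", "is", "red"], tags,
                  {0: 0, 1: 1, 2: 2, 3: 3})
```

Real systems must also handle tags spanning several tokens and noisy alignments, which is where the hybrid method the abstract favors comes in.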

  15. Projecting Images on a Sphere

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A system for projecting images on an object with a reflective surface. A plurality of image projectors are spaced around the object and synchronized such that each...

  16. Percentage Retail Mark-Ups

    OpenAIRE

    Thomas von Ungern-Sternberg

    1999-01-01

A common assumption in the literature on the double marginalization problem is that the retailer can set his mark-up only in the second stage of the game after the producer has moved. To the extent that the sequence of moves is designed to reflect the relative bargaining power of the two parties it is just as plausible to let the retailer move first. Furthermore, retailers frequently calculate their selling prices by adding a percentage mark-up to their wholesale prices. This allows a retailer…

  17. TEI Standoff Markup - A work in progress

    NARCIS (Netherlands)

    Spadini, E.; Turska, Magdalena; Broughton, Misha

    2015-01-01

Markup is said to be standoff, or external, when "the markup data is placed outside of the text it is meant to tag". One of the most widely recognized limitations of inline XML markup is its inability to cope with element overlap; standoff has been considered as a possible solution to…

  18. Changes in latent fingerprint examiners' markup between analysis and comparison.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2015-02-01

    After the initial analysis of a latent print, an examiner will sometimes revise the assessment during comparison with an exemplar. Changes between analysis and comparison may indicate that the initial analysis of the latent was inadequate, or that confirmation bias may have affected the comparison. 170 volunteer latent print examiners, each randomly assigned 22 pairs of prints from a pool of 320 total pairs, provided detailed markup documenting their interpretations of the prints and the bases for their comparison conclusions. We describe changes in value assessments and markup of features and clarity. When examiners individualized, they almost always added or deleted minutiae (90.3% of individualizations); every examiner revised at least some markups. For inconclusive and exclusion determinations, changes were less common, and features were added more frequently when the image pair was mated (same source). Even when individualizations were based on eight or fewer corresponding minutiae, in most cases some of those minutiae had been added during comparison. One erroneous individualization was observed: the markup changes were notably extreme, and almost all of the corresponding minutiae had been added during comparison. Latents assessed to be of value for exclusion only (VEO) during analysis were often individualized when compared to a mated exemplar (26%); in our previous work, where examiners were not required to provide markup of features, VEO individualizations were much less common (1.8%). Published by Elsevier Ireland Ltd.

  19. The Behavior Markup Language: Recent Developments and Challenges

    NARCIS (Netherlands)

    Vilhjalmsson, Hannes; Cantelmo, Nathan; Cassell, Justine; Chafai, Nicholas E.; Kipp, Michael; Kopp, Stefan; Mancini, Maurizio; Marsella, Stacy; Marshall, Andrew N.; Pelachaud, Catherine; Ruttkay, Z.M.; Thorisson, Kristinn R.; van Welbergen, H.; van der Werf, Rick J.; Pelachaud, Catherine; Martin, Jean-Claude; Andre, Elisabeth; Collet, Gerard; Karpouzis, Kostas; Pele, Danielle

    2007-01-01

    Since the beginning of the SAIBA effort to unify key interfaces in the multi-modal behavior generation process, the Behavior Markup Language (BML) has both gained ground as an important component in many projects worldwide, and continues to undergo further refinement. This paper reports on the

  20. Astronomical Instrumentation System Markup Language

    Science.gov (United States)

    Goldbaum, Jesse M.

    2016-05-01

    The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed followed by the reasons why XML was chosen as the format. Next it's shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments as well as one for a sample AIS are provided. The files demonstrate how AISML can be utilized for various tasks from web page generation and programming interface to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.

  1. Endogenous Markups, Firm Productivity and International Trade:

    DEFF Research Database (Denmark)

    Bellone, Flora; Musso, Patrick; Nesta, Lionel

In this paper, we test key micro-level theoretical predictions of Melitz and Ottaviano (MO) (2008), a model of international trade with heterogeneous firms and endogenous markups. At the firm level, the MO model predicts that: 1) firm markups are negatively related to domestic market size; 2) markups are positively related to firm productivity; 3) markups are negatively related to import penetration; 4) markups are positively related to firm export intensity, and markups are higher on the export market than on the domestic one in the presence of trade barriers and/or if competitors on the export market are less efficient than competitors on the domestic market. We estimate micro-level price-cost margins (PCMs) using firm-level data, extending the techniques developed by Hall (1986, 1988) and extended by Domowitz et al. (1988) and Roeger (1995), for the French manufacturing industry from…

  2. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

We derive an estimating equation to estimate markups using the insight of Hall (1986) and the control function approach of Olley and Pakes (1996). We rely on our method to explore the relationship between markups and export behavior using plant-level data. We find significantly higher markups when we control for unobserved productivity shocks. Furthermore, we find significantly higher markups for exporting firms and present new evidence on markup-export status dynamics. More specifically, we find that firms' markups significantly increase (decrease) after entering (exiting) export markets. We see these results as a first step in opening up the productivity-export black box, and provide a potential explanation for the big measured productivity premia for firms entering export markets.

  3. Wine Price Markup in California Restaurants

    OpenAIRE

    Amspacher, William

    2011-01-01

    The study quantifies the relationship between retail wine price and restaurant mark-up. Ordinary Least Squares regressions were run to estimate how restaurant mark-up responded to retail price. Separate regressions were run for white wine, red wine, and both red and white combined. Both slope and intercept coefficients for each of these regressions were highly significant and indicated the expected inverse relationship between retail price and mark-up.
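The estimation itself is a one-regressor OLS fit. A self-contained sketch with invented data (not the study's data) reproduces the kind of inverse relationship the abstract reports, with percentage mark-up falling as retail price rises:

```python
def ols(xs, ys):
    """Ordinary least squares for y = a + b*x (single regressor).
    Returns the intercept a and slope b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Invented data: percentage mark-up declining as retail price rises.
retail = [10.0, 20.0, 30.0, 40.0]
markup_pct = [220.0, 180.0, 140.0, 100.0]
a, b = ols(retail, markup_pct)
```

The negative slope is the inverse relationship; in the study, separate fits were run for red, white, and combined wines.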

  4. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

    and export behavior using plant-level data. We find that i) markups are estimated significantly higher when controlling for unobserved productivity, ii) exporters charge on average higher markups and iii) firms' markups increase (decrease) upon export entry (exit).We see these findings as a first step...... in opening up the productivity-export black box, and provide a potential explanation for the big measured productivity premia for firms entering export markets....

  5. LOG2MARKUP: State module to transform a Stata text log into a markup document

    DEFF Research Database (Denmark)

    2016-01-01

log2markup extracts parts of the text version of the Stata log and transforms the log file into a markup-based document with the same name, but with extension markup (or as otherwise specified in the extension option) instead of log. The author usually uses markdown for writing documents. However…
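A drastically simplified version of the idea, turning a mixed log of commands, output, and comments into a markup document, might look like this. The "* " comment convention here is invented; the real log2markup module defines its own markers and options:

```python
def log_to_markdown(log_text):
    """Toy conversion of a Stata-style log: comment lines beginning with
    '* ' become markdown prose; all other lines (commands and their
    output) become indented markdown code blocks."""
    out = []
    for line in log_text.splitlines():
        if line.startswith("* "):
            out.append(line[2:])        # prose
        else:
            out.append("    " + line)   # code (4-space markdown indent)
    return "\n".join(out)

md = log_to_markdown("* Summary statistics\n. summarize price")
```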

  6. Answer Markup Algorithms for Southeast Asian Languages.

    Science.gov (United States)

    Henry, George M.

    1991-01-01

    Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…

  7. How many Enrons? Mark-ups in the stated capital cost of independent power producers' (IPPs') power projects in developing countries

    International Nuclear Information System (INIS)

    Phadke, Amol

    2009-01-01

I analyze the determinants of the stated capital cost of IPPs' power projects which significantly influences their price of power. I show that IPPs face a strong incentive to overstate their capital cost and argue that effective competition or regulatory scrutiny will limit the extent of such overstatement. I analyze the stated capital costs of combined cycle gas turbine (CCGT) IPP projects in eight developing countries which became operational during 1990-2006 and find that the stated capital cost of projects selected without competitive bidding is 44-56% higher than those selected with competitive bidding, even after controlling for the effect of cost differences among projects. The extent to which the stated capital costs of projects selected without competitive bidding are higher compared to those selected with competitive bidding is a lower bound on the extent to which they are overstated. My results indicate the drawbacks associated with a policy of promoting private sector participation without an adequate focus on improving competition or regulation. (author)

  8. The Accelerator Markup Language and the Universal Accelerator Parser

    International Nuclear Information System (INIS)

    Sagan, D.; Forster, M.; Cornell U., LNS; Bates, D.A.; LBL, Berkeley; Wolski, A.; Liverpool U.; Cockcroft Inst. Accel. Sci. Tech.; Schmidt, F.; CERN; Walker, N.J.; DESY; Larrieu, T.; Roblin, Y.; Jefferson Lab; Pelaia, T.; Oak Ridge; Tenenbaum, P.; Woodley, M.; SLAC; Reiche, S.; UCLA

    2006-01-01

A major obstacle to collaboration on accelerator projects has been the sharing of lattice description files between modeling codes. To address this problem, a lattice description format called Accelerator Markup Language (AML) has been created. AML is based upon the standard eXtensible Markup Language (XML) format; this provides the flexibility for AML to be easily extended to satisfy changing requirements. In conjunction with AML, a software library, called the Universal Accelerator Parser (UAP), is being developed to speed the integration of AML into any program. The UAP is structured to make it relatively straightforward (by giving appropriate specifications) to read and write lattice files in any format. This will allow programs that use the UAP code to read a variety of different file formats. Additionally, this will greatly simplify conversion of files from one format to another. Currently, besides AML, the UAP supports the MAD lattice format.

  9. Markup cyclicality, employment adjustment, and financial constraints

    OpenAIRE

    Askildsen, Jan Erik; Nilsen, Øivind Anti

    2001-01-01

We investigate the existence of markups and their cyclical behaviour. Markup is not directly observed. Instead, it is given as a price-cost relation that is estimated from a dynamic model of the firm. The model incorporates potentially costly employment adjustments and takes into consideration that firms may be financially constrained. When considering the size of the future labour stock, financially constrained firms may behave as if they have a higher discount factor, which may affect the realise…

  10. Instrument Remote Control via the Astronomical Instrument Markup Language

    Science.gov (United States)

    Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard

    1998-01-01

    The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture that combines the platform-independent processing capabilities of Java with the power of Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent, as well as human readable manner, has been developed. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (API) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control applies to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.

  11. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...
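A minimal kernel PCA with an RBF kernel (a generic sketch, not the authors' implementation, which also covers kernel MAF): form the Gram matrix, double-center it in feature space, eigendecompose, and keep the leading components:

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: build the Gram matrix, double-center
    it in feature space, eigendecompose, and return the projections of the
    training samples onto the leading nonlinear components."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    K = np.exp(-gamma * d2)                          # RBF Gram matrix
    n = len(K)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one       # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)                  # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]      # pick the largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

rng = np.random.default_rng(0)
Z = rbf_kernel_pca(rng.normal(size=(20, 3)), n_components=2)
```

Because the projection is defined through the kernel, it can capture structure that a linear projection of the pixel spectra would miss, which is the motivation the abstract gives.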

  12. XML/TEI Stand-off Markup. One step beyond.

    NARCIS (Netherlands)

    Spadini, E.; Turska, Magdalena

    2018-01-01

    Stand-off markup is widely considered as a possible solution for overcoming the limitation of inline XML markup, primarily dealing with multiple overlapping hierarchies. Considering previous contributions on the subject and implementations of stand-off markup, we propose a new TEI-based model for
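The basic stand-off idea: annotations live outside the text as (start, end, tag) offsets, so the stored spans may overlap freely even though inline XML could not nest them. A toy renderer for the non-overlapping case (the annotation scheme here is invented, not the proposed TEI model):

```python
def apply_standoff(text, annotations):
    """Render stand-off annotations, given as (start, end, tag) character
    offsets, as inline tags. The source text itself is never modified;
    only this inline *rendering* requires spans not to overlap."""
    out, pos = [], 0
    for start, end, tag in sorted(annotations):
        out.append(text[pos:start])
        out.append(f"<{tag}>{text[start:end]}</{tag}>")
        pos = end
    out.append(text[pos:])
    return "".join(out)

text = "standoff markup"
inline = apply_standoff(text, [(0, 8, "term"), (9, 15, "term")])
```

Overlapping spans can still be stored side by side in the annotation list; it is only a single inline XML serialization that cannot represent them, which is the limitation stand-off approaches target.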

  13. Monopoly, Pareto and Ramsey mark-ups

    NARCIS (Netherlands)

    Ten Raa, T.

    2009-01-01

    Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear

  14. TumorML: Concept and requirements of an in silico cancer modelling markup language.

    Science.gov (United States)

    Johnson, David; Cooper, Jonathan; McKeever, Steve

    2011-01-01

    This paper describes the initial groundwork carried out as part of the European Commission funded Transatlantic Tumor Model Repositories project, to develop a new markup language for computational cancer modelling, TumorML. In this paper we describe the motivations for such a language, arguing that current state-of-the-art biomodelling languages are not suited to the cancer modelling domain. We go on to describe the work that needs to be done to develop TumorML, the conceptual design, and a description of what existing markup languages will be used to compose the language specification.

  15. Hospital markup and operation outcomes in the United States.

    Science.gov (United States)

    Gani, Faiz; Ejaz, Aslam; Makary, Martin A; Pawlik, Timothy M

    2016-07-01

    Although the price hospitals charge for operations has broad financial implications, hospital pricing is not subject to regulation. We sought to characterize national variation in hospital price markup for major cardiothoracic and gastrointestinal operations and to evaluate perioperative outcomes of hospitals relative to hospital price markup. All hospitals in which a patient underwent a cardiothoracic or gastrointestinal procedure were identified using the Nationwide Inpatient Sample for 2012. Markup ratios (ratio of charges to costs) for the total cost of hospitalization were compared across hospitals. Risk-adjusted morbidity, failure-to-rescue, and mortality were calculated using multivariable, hierarchical logistic regression. Among the 3,498 hospitals identified, markup ratios ranged from 0.5-12.2, with a median markup ratio of 2.8 (interquartile range 2.7-3.9). For the 888 hospitals with extreme markup (greatest markup ratio quartile: markup ratio >3.9), the median markup ratio was 4.9 (interquartile range 4.3-6.0), with 10% of these hospitals billing more than 7 times the Medicare-allowable costs (markup ratio ≥7.25). Extreme markup hospitals were more often large (46.3% vs 33.8%, P markup ratio compared with 19.3% (n = 452) and 6.8% (n = 35) of nonprofit and government hospitals, respectively. Perioperative morbidity (32.7% vs 26.4%, P markup hospitals. There is wide variation in hospital markup for cardiothoracic and gastrointestinal procedures, with approximately a quarter of hospital charges being 4 times greater than the actual cost of hospitalization. Hospitals with an extreme markup had greater perioperative morbidity. Copyright © 2016 Elsevier Inc. All rights reserved.
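
    The markup ratio reported in the study is simply total charges divided by costs for the hospitalization; a minimal sketch of the calculation, on synthetic figures rather than the Nationwide Inpatient Sample data:

```python
import numpy as np

# Synthetic charge/cost pairs (illustrative numbers only)
charges = np.array([28_000, 54_000, 98_000, 35_000])
costs   = np.array([10_000, 18_000, 20_000, 14_000])

# Markup ratio = charges / costs, then summarized as median and IQR
ratios = charges / costs
median = np.median(ratios)
q1, q3 = np.percentile(ratios, [25, 75])
print(ratios.round(2), median)
```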

  16. Modularization and Structured Markup for Learning Content in an Academic Environment

    Science.gov (United States)

    Schluep, Samuel; Bettoni, Marco; Schar, Sissel Guttormsen

    2006-01-01

    This article aims to present a flexible component model for modular, web-based learning content, and a simple structured markup schema for the separation of content and presentation. The article will also contain an overview of the dynamic Learning Content Management System (dLCMS) project, which implements these concepts. Content authors are a…

  17. Definition of an XML markup language for clinical laboratory procedures and comparison with generic XML markup.

    Science.gov (United States)

    Saadawi, Gilan M; Harrison, James H

    2006-10-01

    Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.
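
    The retrieval advantage claimed for laboratory-specific markup can be illustrated with a hypothetical CLP-ML-style document; the tag names below are invented to mirror the kind of structure described, not the actual 124-tag CLP-ML vocabulary.

```python
import xml.etree.ElementTree as ET

# Hypothetical CLP-ML-like procedure (illustrative tag names only)
doc = """
<procedure id="GLU-001" section="chemistry">
  <title>Serum Glucose</title>
  <reagents>
    <reagent name="glucose oxidase" storage="2-8C"/>
    <reagent name="buffer" storage="room temperature"/>
  </reagents>
  <steps>
    <step order="1">Calibrate the analyzer.</step>
    <step order="2">Run controls before patient samples.</step>
  </steps>
</procedure>
"""

root = ET.fromstring(doc)
# Structured markup makes targeted queries trivial, unlike word processor
# files that must be searched as flat text: find refrigerated reagents.
cold_chain = [r.get("name") for r in root.iter("reagent")
              if "C" in r.get("storage")]
print(cold_chain)  # ['glucose oxidase']
```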

  18. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
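
    The random-projection backbone of such hashing schemes is easy to sketch: project the feature vector through a random matrix and binarize each coordinate. The sketch below omits the paper's contributions (Fisher-criterion row selection and the bimodal Gaussian mixture quantizer) and shows only the generic baseline.

```python
import numpy as np

rng = np.random.default_rng(42)

def face_hash(feature_vec, projection, threshold=0.0):
    """Project the feature vector and binarize each coordinate.

    Generic random-projection hashing only; user-dependent projection
    selection and GMM quantization are not reproduced here.
    """
    projected = projection @ feature_vec
    return (projected > threshold).astype(int)

features = rng.normal(size=128)    # stand-in for a face feature vector
P = rng.normal(size=(32, 128))     # random projection matrix (32-bit hash)
h = face_hash(features, P)
print(h.shape)  # (32,)
```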

  19. Monopoly, Pareto and Ramsey mark-ups

    OpenAIRE

    Ten Raa, T.

    2009-01-01

    Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear demand or, even within the constant elasticity framework, dependence is introduced. The analysis provides a single Generalized Inverse Elasticity Rule for the problems of monopoly, Pareto and Ramsey.
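
    The inverse elasticity rule behind these mark-ups can be checked with a short worked example: for constant-elasticity demand q = A * p**(-eps), profit maximization gives the Lerner index (p - c)/p = 1/eps, hence p = c * eps/(eps - 1).

```python
def monopoly_price(cost, elasticity):
    """Monopoly price under constant-elasticity demand q = A * p**(-elasticity)."""
    assert elasticity > 1, "a finite monopoly price requires elastic demand"
    return cost * elasticity / (elasticity - 1)

# Worked check for marginal cost 10 and elasticity 2
p = monopoly_price(10.0, 2.0)
lerner = (p - 10.0) / p
print(p, lerner)  # 20.0 0.5, i.e. the Lerner index equals 1/elasticity
```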

  20. Semantic Web Services with Web Ontology Language (OWL-S) - Specification of Agent-Services for DARPA Agent Markup Language (DAML)

    National Research Council Canada - National Science Library

    Sycara, Katia P

    2006-01-01

    CMU did research and development on semantic web services using OWL-S, the semantic web service language under the Defense Advanced Research Projects Agency- DARPA Agent Markup Language (DARPA-DAML) program...

  1. Chemical Markup, XML and the World-Wide Web. 8. Polymer Markup Language.

    Science.gov (United States)

    Adams, Nico; Winter, Jerry; Murray-Rust, Peter; Rzepa, Henry S

    2008-11-01

    Polymers are among the most important classes of materials but are only inadequately supported by modern informatics. The paper discusses the reasons why polymer informatics is considerably more challenging than small molecule informatics and develops a vision for the computer-aided design of polymers, based on modern semantic web technologies. The paper then discusses the development of Polymer Markup Language (PML). PML is an extensible language, designed to support the (structural) representation of polymers and polymer-related information. PML closely interoperates with Chemical Markup Language (CML) and overcomes a number of the previously identified challenges.

  2. Markup heterogeneity, export status and the establishment of the euro

    OpenAIRE

    Guillou , Sarah; Nesta , Lionel

    2015-01-01

    We investigate the effects of the establishment of the euro on the markups of French manufacturing firms. Merging firm-level census data with customs data, we estimate time-varying firm-specific markups and distinguish between eurozone exporters from other firms between 1995 and 2007. We find that the establishment of the euro has had a pronounced pro-competitive impact by reducing firm markups by 14 percentage points. By reducing export costs, the euro represented an opp...

  3. Medical imaging projects meet at CERN

    CERN Multimedia

    CERN Bulletin

    2013-01-01

    ENTERVISION, the Research Training Network in 3D Digital Imaging for Cancer Radiation Therapy, successfully passed its mid-term review held at CERN on 11 January. This multidisciplinary project aims at qualifying experts in medical imaging techniques for improved hadron therapy.   ENTERVISION provides training in physics, medicine, electronics, informatics, radiobiology and engineering, as well as a wide range of soft skills, to 16 researchers of different backgrounds and nationalities. The network is funded by the European Commission within the Marie Curie Initial Training Network, and relies on the EU-funded research project ENVISION to provide a training platform for the Marie Curie researchers. The two projects hold their annual meetings jointly, allowing the young researchers to meet senior scientists and to have a full picture of the latest developments in the field beyond their individual research project. ENVISION and ENTERVISION are both co-ordinated by CERN, and the Laboratory hosts t...

  4. Projection x-space magnetic particle imaging.

    Science.gov (United States)

    Goodwill, Patrick W; Konkle, Justin J; Zheng, Bo; Saritas, Emine U; Conolly, Steven M

    2012-05-01

    Projection magnetic particle imaging (MPI) can improve imaging speed more than 100-fold over traditional 3-D MPI. In this work, we derive the 2-D x-space signal equation and 2-D image equation, and introduce the concept of signal fading and resolution loss for a projection MPI imager. We then describe the design and construction of an x-space projection MPI scanner with a field gradient of 2.35 T/m across a 10 cm magnet free bore. The system has an expected resolution of 3.5 × 8.0 mm using Resovist tracer, and an experimental resolution of 3.8 × 8.4 mm. The system images 2.5 cm × 5.0 cm partial fields of view (FOVs) at 10 frames/s, and acquires a full field of view of 10 cm × 5.0 cm in 4 s. We conclude by imaging a resolution phantom, a complex "Cal" phantom, and mice injected with Resovist tracer, and experimentally confirm the theoretically predicted x-space spatial resolution.

  5. Intended and unintended consequences of China's zero markup drug policy.

    Science.gov (United States)

    Yi, Hongmei; Miller, Grant; Zhang, Linxiu; Li, Shaoping; Rozelle, Scott

    2015-08-01

    Since economic liberalization in the late 1970s, China's health care providers have grown heavily reliant on revenue from drugs, which they both prescribe and sell. To curb abuse and to promote the availability, safety, and appropriate use of essential drugs, China introduced its national essential drug list in 2009 and implemented a zero markup policy designed to decouple provider compensation from drug prescription and sales. We collected and analyzed representative data from China's township health centers and their catchment-area populations both before and after the reform. We found large reductions in drug revenue, as intended by policy makers. However, we also found a doubling of inpatient care that appeared to be driven by supply, instead of demand. Thus, the reform had an important unintended consequence: China's health care providers have sought new, potentially inappropriate, forms of revenue. Project HOPE—The People-to-People Health Foundation, Inc.

  6. Semantic Markup for Literary Scholars: How Descriptive Markup Affects the Study and Teaching of Literature.

    Science.gov (United States)

    Campbell, D. Grant

    2002-01-01

    Describes a qualitative study which investigated the attitudes of literary scholars towards the features of semantic markup for primary texts in XML format. Suggests that layout is a vital part of the reading process which implies that the standardization of DTDs (Document Type Definitions) should extend to styling as well. (Author/LRW)

  7. PENDEKATAN MODEL MATEMATIS UNTUK MENENTUKAN PERSENTASE MARKUP HARGA JUAL PRODUK

    Directory of Open Access Journals (Sweden)

    Oviliani Yenty Yuliana

    2002-01-01

    Full Text Available. The purpose of this research is to design a mathematical model that can determine selling volume, as an alternative way to set the markup percentage of a product's selling price. The model was designed with multiple regression statistics. Selling volume is a function of the markup, market condition, and substitute condition variables. The designed model passed tests of: error assumptions, model accuracy, model validation, and multicollinearity. The model was applied in an application program with the expectation that the program can give: (1) an alternative for the user in deciding the markup percentage, (2) an illustration of the estimated gross profit for each selected markup percentage, (3) an illustration of the estimated percentage of units sold for each selected markup percentage, and (4) an illustration of the total net income before tax obtainable in the period concerned. Keywords: mathematical model, application program, selling volume, markup, gross profit.
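
    The regression model described, selling volume as a linear function of markup, market condition, and substitute condition, can be sketched with an ordinary least-squares fit on synthetic data (the coefficients below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Synthetic explanatory variables (illustrative assumptions)
markup     = rng.uniform(0.1, 0.6, n)
market     = rng.uniform(0.0, 1.0, n)
substitute = rng.uniform(0.0, 1.0, n)

# Assumed "true" relationship plus noise, for the sketch only
volume = 1000 - 800 * markup + 300 * market - 200 * substitute \
         + rng.normal(0, 10, n)

# Multiple regression via least squares: volume ~ markup + market + substitute
X = np.column_stack([np.ones(n), markup, market, substitute])
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
print(beta.round(1))  # roughly [1000, -800, 300, -200]
```

With the fitted coefficients, the program can then tabulate predicted volume and gross profit for each candidate markup percentage, which is the decision support the abstract describes.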

  8. An Introduction to the Extensible Markup Language (XML).

    Science.gov (United States)

    Bryan, Martin

    1998-01-01

    Describes Extensible Markup Language (XML), a subset of the Standard Generalized Markup Language (SGML) that is designed to make it easy to interchange structured documents over the Internet. Topics include Document Type Definition (DTD), components of XML, the use of XML, text and non-text elements, and uses for XML-coded files. (LRW)

  9. Information and image integration: project spectrum

    Science.gov (United States)

    Blaine, G. James; Jost, R. Gilbert; Martin, Lori; Weiss, David A.; Lehmann, Ron; Fritz, Kevin

    1998-07-01

    The BJC Health System (BJC) and the Washington University School of Medicine (WUSM) formed a technology alliance with industry collaborators to develop and implement an integrated, advanced clinical information system. The industry collaborators include IBM, Kodak, SBC and Motorola. The activity, called Project Spectrum, provides an integrated clinical repository for the multiple hospital facilities of the BJC. The BJC System consists of 12 acute care hospitals serving over one million patients in Missouri and Illinois. An interface engine manages transactions from each of the hospital information systems, lab systems and radiology information systems. Data is normalized to provide a consistent view for the primary care physician. Access to the clinical repository is supported by web-based server/browser technology which delivers patient data to the physician's desktop. An HL7 based messaging system coordinates the acquisition and management of radiological image data and sends image keys to the clinical data repository. Access to the clinical chart browser currently provides radiology reports, laboratory data, vital signs and transcribed medical reports. A chart metaphor provides tabs for the selection of the clinical record for review. Activation of the radiology tab facilitates a standardized view of radiology reports and provides an icon used to initiate retrieval of available radiology images. The selection of the image icon spawns an image browser plug-in and utilizes the image key from the clinical repository to access the image server for the requested image data. The Spectrum system is collecting clinical data from five hospital systems and imaging data from two hospitals. Domain specific radiology imaging systems support the acquisition and primary interpretation of radiology exams. The spectrum clinical workstations are deployed to over 200 sites utilizing local area networks and ISDN connectivity.

  10. Balancing medicine prices and business sustainability: analyses of pharmacy costs, revenues and profit shed light on retail medicine mark-ups in rural Kyrgyzstan

    Directory of Open Access Journals (Sweden)

    Maddix Jason

    2010-07-01

    Full Text Available. Background: Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. Methods: We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Results: Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Conclusion: Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals.

  11. Balancing medicine prices and business sustainability: analyses of pharmacy costs, revenues and profit shed light on retail medicine mark-ups in rural Kyrgyzstan.

    Science.gov (United States)

    Waning, Brenda; Maddix, Jason; Soucy, Lyne

    2010-07-13

    Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals. Health systems researchers must document the positive and negative

  12. Invisibility cloak with image projection capability.

    Science.gov (United States)

    Banerjee, Debasish; Ji, Chengang; Iizuka, Hideo

    2016-12-13

    Investigations of invisibility cloaks have been led by rigorous theories and such cloak structures, in general, require extreme material parameters. Consequently, it is challenging to realize them, particularly in the full visible region. Due to the insensitivity of human eyes to the polarization and phase of light, cloaking a large object in the full visible region has been recently realized by a simplified theory. Here, we experimentally demonstrate a device concept where a large object can be concealed in a cloak structure and at the same time any images can be projected through it by utilizing a distinctively different approach; the cloaking via one polarization and the image projection via the other orthogonal polarization. Our device structure consists of commercially available optical components such as polarizers and mirrors, and therefore, provides a significant further step towards practical application scenarios such as transparent devices and see-through displays.

  13. Genomic Sequence Variation Markup Language (GSVML).

    Science.gov (United States)

    Nakaya, Jun; Kimura, Michio; Hiroi, Kaei; Ido, Keisuke; Yang, Woosung; Tanaka, Hiroshi

    2010-02-01

    With the aim of making good use of internationally accumulated genomic sequence variation data, which is increasing rapidly due to the explosive amount of genomic research at present, the development of an interoperable data exchange format and its international standardization are necessary. Genomic Sequence Variation Markup Language (GSVML) will focus on genomic sequence variation data and human health applications, such as gene based medicine or pharmacogenomics. We developed GSVML through eight steps, based on case analysis and domain investigations. By focusing on the design scope to human health applications and genomic sequence variation, we attempted to eliminate ambiguity and to ensure practicability. We intended to satisfy the requirements derived from the use case analysis of human-based clinical genomic applications. Based on database investigations, we attempted to minimize the redundancy of the data format, while maximizing the data covering range. We also attempted to ensure communication and interface ability with other Markup Languages, for exchange of omics data among various omics researchers or facilities. The interface ability with developing clinical standards, such as the Health Level Seven Genotype Information model, was analyzed. We developed the human health-oriented GSVML comprising variation data, direct annotation, and indirect annotation categories; the variation data category is required, while the direct and indirect annotation categories are optional. The annotation categories contain omics and clinical information, and have internal relationships. For designing, we examined 6 cases for three criteria as human health application and 15 data elements for three criteria as data formats for genomic sequence variation data exchange. The data format of five international SNP databases and six Markup Languages and the interface ability to the Health Level Seven Genotype Model in terms of 317 items were investigated. GSVML was developed as

  14. Descriptive markup languages and the development of digital humanities

    Directory of Open Access Journals (Sweden)

    Boris Bosančić

    2012-11-01

    Full Text Available. The paper discusses the role of descriptive markup languages in the development of digital humanities, a new research discipline within the social sciences and humanities that focuses on the use of computers in research. A chronological review of the development of digital humanities, and then of descriptive markup languages, is presented through several developmental stages. It is shown that the development of digital humanities since the mid-1980s is inseparable from the development of markup languages, beginning with the appearance of SGML, the markup language on which TEI (the key standard for the encoding and exchange of humanities texts in the digital environment) was founded. Special attention is dedicated to the development of the Text Encoding Initiative (TEI), the organization that developed that standard, from both organizational and markup perspectives. To date, the TEI standard has been published in five versions, and during the 2000s SGML was replaced by the XML markup language. Key words: markup languages, digital humanities, text encoding, TEI, SGML, XML

  15. Image-projection ion-beam lithography

    International Nuclear Information System (INIS)

    Miller, P.A.

    1989-01-01

    Image-projection ion-beam lithography is an attractive alternative for submicron patterning because it may provide high throughput; it uses demagnification to gain advantages in reticle fabrication, inspection, and lifetime; and it enjoys the precise deposition characteristics of ions which cause essentially no collateral damage. This lithographic option involves extracting low-mass ions (e.g., He + ) from a plasma source, transmitting the ions at low voltage through a stencil reticle, and then accelerating and focusing the ions electrostatically onto a resist-coated wafer. While the advantages of this technology have been demonstrated experimentally by the work of IMS (Austria), many difficulties still impede extension of the technology to the high-volume production of microelectronic devices. We report a computational study of a lithography system designed to address problem areas in field size, telecentricity, and chromatic and geometric aberration. We present a novel ion-column-design approach and conceptual ion-source and column designs which address these issues. We find that image-projection ion-beam technology should in principle meet high-volume-production requirements. The technical success of our present relatively compact-column design requires that a glow-discharge-based ion source (or equivalent cold source) be developed and that moderate further improvement in geometric aberration levels be obtained. Our system requires that image predistortion be employed during reticle fabrication to overcome distortion due to residual image nonlinearity and space-charge forces. This constitutes a software data preparation step, as do correcting for distortions in electron lithography columns and performing proximity-effect corrections. Areas needing further fundamental work are identified

  16. CytometryML: a markup language for analytical cytology

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.; Leif, Suzanne B.

    2003-06-01

    Cytometry Markup Language, CytometryML, is a proposed new analytical cytology data standard. CytometryML is a set of XML schemas for encoding both flow cytometry and digital microscopy text based data types. CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. These schemas provide representations for the keywords in FCS 3.0 and will soon include DICOM microscopic image data. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. A preliminary version of a list mode binary data type, which does not presently exist in DICOM, has been designed. This binary type is required to enhance the storage and transmission of flow cytometry and digital microscopy data. Index files based on Waveform indices will be used to rapidly locate the cells present in individual subsets. DICOM has the advantage of employing standard file types, TIF and JPEG, for Digital Microscopy. Using an XML schema based representation means that standard commercial software packages such as Excel and MathCad can be used to analyze, display, and store analytical cytometry data. Furthermore, by providing one standard for both DICOM data and analytical cytology data, it eliminates the need to create and maintain special purpose interfaces for analytical cytology data thereby integrating the data into the larger DICOM and other clinical communities. A draft version of CytometryML is available at www.newportinstruments.com.

  17. Ultrasonic imaging of projected components of PFBR

    Energy Technology Data Exchange (ETDEWEB)

    Sylvia, J.I., E-mail: sylvia@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102, Tamil Nadu (India); Jeyan, M.R.; Anbucheliyan, M.; Asokane, C.; Babu, V. Rajan; Babu, B.; Rajan, K.K.; Velusamy, K.; Jayakumar, T. [Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102, Tamil Nadu (India)

    2013-05-15

    Highlights: ► The under-sodium ultrasonic scanner in PFBR is for detecting protruding objects. ► Feasibility study for detecting absorber rods and their drive mechanisms. ► In-house developed PC-based ultrasonic imaging system. ► Different case studies were carried out on simulated ARDMs. ► Experimental results applied to PFBR. -- Abstract: The 500 MWe, sodium-cooled Prototype Fast Breeder Reactor (PFBR) is at an advanced stage of construction at Kalpakkam in India. The opacity of sodium prevents visual inspection of immersed components by optical means. Ultrasonic waves, however, pass through sodium, so ultrasonic techniques using under-sodium ultrasonic scanners have been developed to obtain under-sodium images. The main objective of such an Under Sodium Ultrasonic Scanner (USUSS) for PFBR is to detect and ensure that no core Sub Assembly (SA), Absorber Rod, or Absorber Rod Drive Mechanism protrudes into the above-core plenum before fuel handling begins; if an object does protrude, it must be detected and located. To study the feasibility of detecting the absorber rods and their drive mechanisms using direct ultrasonic imaging, experiments were carried out for different orientations and profiles of the projected components in a 5 m diameter water tank. The in-house developed PC-based ultrasonic scanning system is used for acquisition and analysis of data. The pseudo three-dimensional color images obtained are discussed, and the results are applicable to PFBR. This paper gives details of the absorber rods and their drive mechanisms, their orientation in the reactor core, the experimental setup, the PC-based ultrasonic scanning system, the ultrasonic images, and a discussion of the results.

  18. A quality assessment tool for markup-based clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a tool for quality assessment of procedural and declarative knowledge, developed for evaluating the specification of markup-based clinical GLs. Using this graphical tool, the expert physician and knowledge engineer collaborate to score, on a predefined scoring scale, each of the knowledge roles of the mark-ups against a gold standard. The tool enables different users at different sites to score the mark-ups simultaneously.
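    A minimal sketch of this kind of role-by-role scoring follows; the role names, scoring scale, and matching rule here are hypothetical illustrations, not the authors' tool.

```python
# Illustrative sketch (not the authors' tool): score each knowledge role of a
# markup against a gold standard on a predefined ordinal scale.

GOLD = {"eligibility": "age > 18 and BMI > 30",
        "action":      "start metformin 500 mg",
        "abort":       "creatinine > 1.5"}

MARKUP = {"eligibility": "age > 18 and BMI > 30",
          "action":      "start metformin",
          "abort":       None}  # role not marked up by this expert

# Hypothetical scale: 2 = exact match, 1 = partial overlap, 0 = missing/wrong.
def score_role(marked, gold):
    if marked is None:
        return 0
    if marked == gold:
        return 2
    return 1 if set(marked.split()) & set(gold.split()) else 0

scores = {role: score_role(MARKUP.get(role), gold) for role, gold in GOLD.items()}
print(scores)  # {'eligibility': 2, 'action': 1, 'abort': 0}
```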

  19. A Mathematical Model Approach to Determining the Markup Percentage of Product Selling Prices

    OpenAIRE

    Oviliani Yenty Yuliana; Yohan Wahyudi; Siana Halim

    2002-01-01

    The purpose of this research is to design a mathematical model that can determine selling volume, as an alternative means of improving the markup percentage. The mathematical model was designed with multiple-regression statistics: selling volume is a function of markup, market-condition, and substitute-condition variables. The designed model passed tests of error assumptions, model accuracy, model validation, and multicollinearity. The mathematical model has been applied i...
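    The regression described above can be sketched as follows; the data, coefficients, and index variables are invented for illustration, not taken from the paper.

```python
# Illustrative sketch: estimate selling volume as a linear function of markup,
# market condition, and substitute condition via ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 200
markup     = rng.uniform(0.05, 0.40, n)  # markup percentage
market     = rng.uniform(0.0, 1.0, n)    # market-condition index (hypothetical)
substitute = rng.uniform(0.0, 1.0, n)    # substitute-condition index (hypothetical)

# "True" model used only to generate demo data: volume falls as markup rises.
volume = 1000 - 800 * markup + 300 * market - 200 * substitute + rng.normal(0, 20, n)

# Design matrix with an intercept column; solve the least-squares problem.
X = np.column_stack([np.ones(n), markup, market, substitute])
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
print(beta)  # ≈ [1000, -800, 300, -200]
```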

  20. STMML. A markup language for scientific, technical and medical publishing

    Directory of Open Access Journals (Sweden)

    Peter Murray-Rust

    2006-01-01

    STMML is an XML-based markup language covering many generic aspects of scientific information. It has been developed as a re-usable core for more specific markup languages. It supports data structures, data types, metadata, scientific units and some basic components of scientific narrative. The central means of adding semantic information is through dictionaries. The specification is through an XML Schema which can be used to validate STMML documents or fragments. Many examples of the language are given.
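    The flavor of such a document can be sketched in a few lines. The element and attribute names below are patterned on STMML's scalar/dictionary/units idea but are illustrative, not schema-validated.

```python
# Sketch of an STMML-style data item (names illustrative): a scalar quantity
# whose semantics and units are supplied by dictionary references.
import xml.etree.ElementTree as ET

entry = ET.Element("scalar", {
    "dataType": "float",
    "dictRef":  "chem:meltingPoint",  # semantics come from a dictionary entry
    "units":    "si:kelvin",          # units come from a unit dictionary
})
entry.text = "273.15"

xml_text = ET.tostring(entry, encoding="unicode")
print(xml_text)
# <scalar dataType="float" dictRef="chem:meltingPoint" units="si:kelvin">273.15</scalar>
```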

  1. On the Power of Fuzzy Markup Language

    CERN Document Server

    Loia, Vincenzo; Lee, Chang-Shing; Wang, Mei-Hui

    2013-01-01

    One of the most successful methodologies to arise from the worldwide diffusion of Fuzzy Logic is Fuzzy Control. After the first attempts, dating to the seventies, this methodology has been widely exploited for controlling many industrial components and systems. At the same time, and quite independently of Fuzzy Logic or Fuzzy Control, the birth of the Web has impacted almost all aspects of the computing discipline. The evolution of the Web, Web 2.0 and Web 3.0, has made scenarios of ubiquitous computing much more feasible; consequently, information technology has been thoroughly integrated into everyday objects and activities. What happens when Fuzzy Logic meets Web technology? Interesting results might come out, as you will discover in this book. Fuzzy Mark-up Language is an offspring of this synergistic view, in which some technological issues of the Web are re-interpreted taking into account the transparent notion of Fuzzy Control, as discussed here.  The concept of a Fuzzy Control that is conceived and modeled in terms...

  2. AllerML: markup language for allergens.

    Science.gov (United States)

    Ivanciuc, Ovidiu; Gendel, Steven M; Power, Trevor D; Schein, Catherine H; Braun, Werner

    2011-06-01

    Many concerns have been raised about the potential allergenicity of novel recombinant proteins introduced into food crops. Guidelines proposed by WHO/FAO and EFSA include the use of bioinformatics screening to assess the risk of potential allergenicity or cross-reactivity of all introduced proteins, for example those intended to improve nutritional value or promote crop resistance. However, there are no universally accepted standards for encoding data on the biology of allergens to facilitate using data from multiple databases in this screening. We therefore developed AllerML, a markup language for allergens, to assist in the automated exchange of information between databases and in the integration of the bioinformatics tools that are used to investigate allergenicity and cross-reactivity. As proof of concept, AllerML was implemented using the Structural Database of Allergenic Proteins (SDAP; http://fermi.utmb.edu/SDAP/). General implementation of AllerML will promote automatic flow of validated data that will aid in allergy research and regulatory analysis. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. PIML: the Pathogen Information Markup Language.

    Science.gov (United States)

    He, Yongqun; Vines, Richard R; Wattam, Alice R; Abramochkin, Georgiy V; Dickerman, Allan W; Eckart, J Dana; Sobral, Bruno W S

    2005-01-01

    A vast amount of information about human, animal and plant pathogens has been acquired, stored and displayed in varied formats through different resources, both electronically and otherwise. However, there is no community standard format for organizing this information, nor agreement on machine-readable format(s) for data exchange, which hampers interoperation across information systems harboring such infectious disease data. The Pathogen Information Markup Language (PIML) is a free, open, XML-based format for representing pathogen information. XSLT-based visual presentations of valid PIML documents were developed and can be accessed through the PathInfo website or as part of the interoperable web services federation known as ToolBus/PathPort. Currently, detailed PIML documents are available for 21 pathogens deemed of high priority with regard to public health and national biological defense. A dynamic query system allows simple queries as well as comparisons among these pathogens. Continuing efforts are being made to bring other groups into supporting PIML and to develop more PIML documents. All PIML-related information is accessible from http://www.vbi.vt.edu/pathport/pathinfo/

  4. Tiny Devices Project Sharp, Colorful Images

    Science.gov (United States)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  5. Nuclear Fuel Assembly Assessment Project and Image Categorization

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, C.S.; Lindblad, T.; Waldemark, K. [Royal Inst. of Tech., Stockholm (Sweden); Hildingsson, Lars [Swedish Nuclear Power Inspectorate, Stockholm (Sweden)

    1998-07-01

    A project has been underway to add digital imaging and processing to the inspection of nuclear fuel by the International Atomic Energy Agency. The ultimate goal is to provide the inspector not only with the advantages of CCD imaging, such as high sensitivity and digital image enhancement, but also with an intelligent agent that can analyze the images and provide useful information about the fuel assemblies in real time. The project is still in its early stages and has inspired several interesting sub-projects. Here we first review the work on fuel assembly image analysis and then give a brief status report on one of these sub-projects, which concerns automatic categorization of fuel assembly images. The technique could benefit the general challenge of image categorization.

  6. XML schemas and mark-up practices of taxonomic literature.

    Science.gov (United States)

    Penev, Lyubomir; Lyal, Christopher Hc; Weitzman, Anna; Morse, David R; King, David; Sautter, Guido; Georgiev, Teodor; Morris, Robert A; Catapano, Terry; Agosti, Donat

    2011-01-01

    We review the three most widely used XML schemas used to mark-up taxonomic texts, TaxonX, TaxPub and taXMLit. These are described from the viewpoint of their development history, current status, implementation, and use cases. The concept of "taxon treatment" from the viewpoint of taxonomy mark-up into XML is discussed. TaxonX and taXMLit are primarily designed for legacy literature, the former being more lightweight and with a focus on recovery of taxon treatments, the latter providing a much more detailed set of tags to facilitate data extraction and analysis. TaxPub is an extension of the National Library of Medicine Document Type Definition (NLM DTD) for taxonomy focussed on layout and recovery and, as such, is best suited for mark-up of new publications and their archiving in PubMedCentral. All three schemas have their advantages and shortcomings and can be used for different purposes.

  7. Interexaminer variation of minutia markup on latent fingerprints.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2016-07-01

    Latent print examiners often differ in the number of minutiae they mark during analysis of a latent, and also during comparison of a latent with an exemplar. Differences in minutia counts understate interexaminer variability: examiners' markups may have similar minutia counts but differ greatly in which specific minutiae were marked. We assessed variability in minutia markup among 170 volunteer latent print examiners. Each provided detailed markup documenting their examinations of 22 latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. An average of 12 examiners marked each latent. The primary factors associated with minutia reproducibility were clarity, which regions of the prints examiners chose to mark, and agreement on value or comparison determinations. In clear areas (where the examiner was "certain of the location, presence, and absence of all minutiae"), median reproducibility was 82%; in unclear areas, median reproducibility was 46%. Differing interpretations regarding which regions should be marked (e.g., when there is ambiguity in the continuity of a print) contributed to variability in minutia markup: especially in unclear areas, marked minutiae were often far from the nearest minutia marked by a majority of examiners. Low reproducibility was also associated with differences in value or comparison determinations. Lack of standardization in minutia markup and unfamiliarity with test procedures presumably contribute to the variability we observed. We have identified factors accounting for interexaminer variability; implementing standards for detailed markup as part of documentation and focusing future training efforts on these factors may help to facilitate transparency and reduce subjectivity in the examination process. Published by Elsevier Ireland Ltd.
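    A simplified version of the reproducibility computation behind these percentages can be sketched as follows; the tolerance, coordinates, and matching rule are hypothetical simplifications, not the study's exact protocol.

```python
# Simplified sketch: a minutia marked by one examiner counts as "reproduced"
# by another examiner if that examiner marked a minutia within a tolerance
# distance; reproducibility is the fraction of examiners who reproduced it.
import math

TOL = 10.0  # tolerance in image units (hypothetical)

def reproduced(point, markup):
    # True if any minutia in this examiner's markup lies within TOL of `point`.
    return any(math.dist(point, q) <= TOL for q in markup)

def reproducibility(point, markups):
    hits = sum(reproduced(point, m) for m in markups)
    return hits / len(markups)

# Three examiners' markups of the same latent (toy coordinates).
markups = [
    [(100, 100), (150, 120), (200, 90)],
    [(103, 98),  (152, 119)],
    [(100, 101), (210, 95)],
]
print(reproducibility((100, 100), markups))  # marked by all three -> 1.0
```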

  8. Trade reforms, mark-ups and bargaining power of workers: the case ...

    African Journals Online (AJOL)

    Ethiopian Journal of Economics ... workers between 1996 and 2007, a model of mark-up with labor bargaining power was estimated using random effects and LDPDM. ... Keywords: Trade reform, mark-up, bargaining power, rent, trade unions ...

  9. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one-dimensional form and then matches and identifies through one-dimensional correlation; moreover, because normalization has been done, it still matches correctly when the image brightness or signal amplitude increases in proportion. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
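    The idea can be sketched with numpy; the data, sizes, and embedding location are illustrative, not the paper's experiments.

```python
# Sketch of projection-based matching: collapse the 2D image and template to
# 1D projections (row and column sums), then slide the template projections
# along the image projections with normalized correlation, which makes the
# match invariant to proportional brightness changes.
import numpy as np

def norm_corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def match_1d(signal, template):
    # Offset at which `template` best matches a window of `signal`.
    scores = [norm_corr(signal[i:i + len(template)], template)
              for i in range(len(signal) - len(template) + 1)]
    return int(np.argmax(scores))

def locate(image, template):
    row = match_1d(image.sum(axis=1), template.sum(axis=1))  # vertical offset
    col = match_1d(image.sum(axis=0), template.sum(axis=0))  # horizontal offset
    return row, col

rng = np.random.default_rng(1)
template = rng.random((8, 8))
image = np.zeros((64, 64))
image[20:28, 35:43] = 1.5 * template  # embedded at (20, 35), 1.5x brighter
print(locate(image, template))  # (20, 35)
```

    Because the correlation is normalized, scaling the whole image (e.g. doubling its brightness) does not change the located position.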

  10. SGML-Based Markup for Literary Texts: Two Problems and Some Solutions.

    Science.gov (United States)

    Barnard, David; And Others

    1988-01-01

    Identifies the Standard Generalized Markup Language (SGML) as the best basis for a markup standard for encoding literary texts. Outlines solutions to problems using SGML and discusses the problem of maintaining multiple views of a document. Examines several ways of reducing the burden of markups. (GEA)

  11. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal physiology and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This yields an estimate that is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  12. The Petri Net Markup Language : concepts, technology, and tools

    NARCIS (Netherlands)

    Billington, J.; Christensen, S.; Hee, van K.M.; Kindler, E.; Kummer, O.; Petrucci, L.; Post, R.D.J.; Stehno, C.; Weber, M.; Aalst, van der W.M.P.; Best, E.

    2003-01-01

    The Petri Net Markup Language (PNML) is an XML-based interchange format for Petri nets. In order to support different versions of Petri nets and, in particular, future versions of Petri nets, PNML allows the definition of Petri net types. Due to this flexibility, PNML is a starting point for a
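    A minimal place/transition net in the spirit of PNML can be assembled as follows; the type URI follows the published PNML grammar, but the fragment is illustrative and not schema-validated.

```python
# Sketch of a tiny PNML document: one place with an initial token, one
# transition, and one arc between them.
import xml.etree.ElementTree as ET

pnml = ET.Element("pnml", xmlns="http://www.pnml.org/version-2009/grammar/pnml")
net = ET.SubElement(pnml, "net", id="n1",
                    type="http://www.pnml.org/version-2009/grammar/ptnet")
page = ET.SubElement(net, "page", id="page1")

place = ET.SubElement(page, "place", id="p1")
marking = ET.SubElement(place, "initialMarking")
ET.SubElement(marking, "text").text = "1"  # one initial token

ET.SubElement(page, "transition", id="t1")
ET.SubElement(page, "arc", id="a1", source="p1", target="t1")

print(ET.tostring(pnml, encoding="unicode"))
```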

  13. Wanda ML - a markup language for digital annotation

    NARCIS (Netherlands)

    Franke, K.Y.; Guyon, I.; Schomaker, L.R.B.; Vuurpijl, L.G.

    2004-01-01

    WANDAML is an XML-based markup language for the annotation and filter journaling of digital documents. It addresses in particular the needs of forensic handwriting data examination, by allowing experts to enter information about writer, material (pen, paper), script and content, and to record chains

  14. The WANDAML Markup Language for Digital Document Annotation

    NARCIS (Netherlands)

    Franke, K.; Guyon, I.; Schomaker, L.; Vuurpijl, L.

    2004-01-01

    WANDAML is an XML-based markup language for the annotation and filter journaling of digital documents. It addresses in particular the needs of forensic handwriting data examination, by allowing experts to enter information about writer, material (pen, paper), script and content, and to record chains

  15. [Managing digital medical imaging projects in healthcare services: lessons learned].

    Science.gov (United States)

    Rojas de la Escalera, D

    2013-01-01

    Medical imaging is one of the most important diagnostic instruments in clinical practice. The technological development of digital medical imaging has enabled healthcare services to undertake large scale projects that require the participation and collaboration of many professionals of varied backgrounds and interests as well as substantial investments in infrastructures. Rather than focusing on systems for dealing with digital medical images, this article deals with the management of projects for implementing these systems, reviewing various organizational, technological, and human factors that are critical to ensure the success of these projects and to guarantee the compatibility and integration of digital medical imaging systems with other health information systems. To this end, the author relates several lessons learned from a review of the literature and the author's own experience in the technical coordination of digital medical imaging projects. Copyright © 2012 SERAM. Published by Elsevier Espana. All rights reserved.

  16. Improving Interoperability by Incorporating UnitsML Into Markup Languages.

    Science.gov (United States)

    Celebi, Ismet; Dragoset, Robert A; Olsen, Karen J; Schaefer, Reinhold; Kramer, Gary W

    2010-01-01

    Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this "scientific meta-data" and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language, or AnIML, a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-useable tags. Recording the units associated with the analytical data and metadata is an essential issue for any data representation scheme and must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units, to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units, which we call UnitsML. This presentation will describe how UnitsML is used and how it is being incorporated into AnIML.

  17. The Long-Run Relationship Between Inflation and the Markup in the U.S.

    OpenAIRE

    Sandeep Mazumder

    2011-01-01

    This paper examines the long-run relationship between inflation and a new measure of the price-marginal cost markup. This new markup index is derived while accounting for labor adjustment costs, which a large number of the papers that estimate the markup have ignored. We then examine the long-run relationship between this markup measure, which is estimated using U.S. manufacturing data, and inflation. We find that decreases in the markup that are associated with a percentage point increase in...

  18. Photoacoustic projection imaging using an all-optical detector array

    Science.gov (United States)

    Bauer-Marschallinger, J.; Felbermayer, K.; Berer, T.

    2018-02-01

    We present a prototype for all-optical photoacoustic projection imaging. By generating projection images, photoacoustic information of large volumes can be retrieved with less effort compared to common photoacoustic computed tomography where many detectors and/or multiple measurements are required. In our approach, an array of 60 integrating line detectors is used to acquire photoacoustic waves. The line detector array consists of fiber-optic Mach-Zehnder interferometers, distributed on a cylindrical surface. From the measured variation of the optical path lengths of the interferometers, induced by photoacoustic waves, a photoacoustic projection image can be reconstructed. The resulting images represent the projection of the three-dimensional spatial light absorbance within the imaged object onto a two-dimensional plane, perpendicular to the line detector array. The fiber-optic detectors achieve a noise-equivalent pressure of 24 Pascal at a 10 MHz bandwidth. We present the operational principle, the structure of the array, and resulting images. The system can acquire high-resolution projection images of large volumes within a short period of time. Imaging large volumes at high frame rates facilitates monitoring of dynamic processes.

  19. Scanned Image Projection System Employing Intermediate Image Plane

    Science.gov (United States)

    DeJong, Christian Dean (Inventor); Hudman, Joshua M. (Inventor)

    2014-01-01

    In an imaging system, a spatial light modulator is configured to produce images by scanning a plurality of light beams. A first optical element is configured to cause the plurality of light beams to converge along an optical path defined between the first optical element and the spatial light modulator. A second optical element is disposed between the spatial light modulator and a waveguide. The first optical element and the spatial light modulator are arranged such that an image plane is created between the spatial light modulator and the second optical element. The second optical element is configured to collect the diverging light from the image plane and collimate it. The second optical element then delivers the collimated light to a pupil at an input of the waveguide.

  20. Image reconstruction technique using projection data from neutron tomography system

    Directory of Open Access Journals (Sweden)

    Waleed Abd el Bar

    2015-12-01

    Neutron tomography is a very powerful technique for nondestructive evaluation of heavy industrial components as well as of soft hydrogenous materials enclosed in heavy metals, which are usually difficult to image using X-rays. Due to the properties of the image acquisition system, the projection images are distorted by several artifacts that reduce the quality of the reconstruction; to eliminate these harmful effects, the projection images should be corrected before reconstruction. This paper describes a filtered back-projection (FBP) technique used for reconstruction of projected data obtained from transmission measurements by a neutron tomography system. We demonstrate the use of the spatial Discrete Fourier Transform (DFT) and the 2D inverse DFT in the formulation of the method, and outline the theory of reconstructing a 2D neutron image from a sequence of 1D projections taken at different angles between 0 and π in a MATLAB environment. Projections are generated by applying the Radon transform to the original image at different angles.
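    The pipeline described above (projection, ramp filtering via the DFT, back-projection) can be sketched in numpy rather than MATLAB; the phantom, geometry, and the nearest-neighbour projector are illustrative simplifications.

```python
# Sketch of FBP: forward-project a phantom (Radon transform by binning pixels
# into rays), apply a ramp (Ram-Lak) filter per projection via the DFT, and
# back-project the filtered views.
import numpy as np

def radon(image, angles):
    n = image.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    xs, ys = xs - c, ys - c
    sino = np.zeros((len(angles), n))
    for a, th in enumerate(angles):
        t = xs * np.cos(th) + ys * np.sin(th) + c       # detector coordinate
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        np.add.at(sino[a], idx.ravel(), image.ravel())  # bin pixels into rays
    return sino

def fbp(sino, angles):
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                    # |f| filter in the DFT domain
    filt = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    xs, ys = xs - c, ys - c
    recon = np.zeros((n, n))
    for a, th in enumerate(angles):
        t = xs * np.cos(th) + ys * np.sin(th) + c
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        recon += filt[a][idx]                           # smear each filtered view back
    return recon * np.pi / (2 * len(angles))

n = 64
yy, xx = np.mgrid[0:n, 0:n]
phantom = ((xx - 30) ** 2 + (yy - 40) ** 2 <= 25).astype(float)  # small disc
angles = np.linspace(0, np.pi, 90, endpoint=False)
recon = fbp(radon(phantom, angles), angles)
# The bright region of the reconstruction coincides with the disc.
```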

  1. Total Variation and Tomographic Imaging from Projections

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jørgensen, Jakob Heide

    2011-01-01

    or 3D reconstruction from noisy projections. We demonstrate that for a small signal-to-noise ratio, this new approach allows us to compute better (i.e., more reliable) reconstructions than those obtained by classical methods. This is possible due to the use of the TV reconstruction model, which...

  2. Cross Cultural Images: The ETSU/NAU Special Photography Project.

    Science.gov (United States)

    Montgomery, Donna; Sluss, Dorothy; Lewis, Jamie; Vervelde, Peggy; Prater, Greg; Minner, Sam

    Recreation is a significant part of a full and rich life but is frequently overlooked in relation to handicapped children. A project called Cross-Cultural Images aimed to improve the quality of life for handicapped children by teaching them avocational photography skills. The project involved mildly handicapped children aged 7-11 in Appalachia, on…

  3. Biological Dynamics Markup Language (BDML): an open format for representing quantitative biological dynamics data.

    Science.gov (United States)

    Kyoda, Koji; Tohsato, Yukako; Ho, Kenneth H L; Onami, Shuichi

    2015-04-01

    Recent progress in live-cell imaging and modeling techniques has resulted in generation of a large amount of quantitative data (from experimental measurements and computer simulations) on spatiotemporal dynamics of biological objects such as molecules, cells and organisms. Although many research groups have independently dedicated their efforts to developing software tools for visualizing and analyzing these data, these tools are often not compatible with each other because of different data formats. We developed an open unified format, Biological Dynamics Markup Language (BDML; current version: 0.2), which provides a basic framework for representing quantitative biological dynamics data for objects ranging from molecules to cells to organisms. BDML is based on Extensible Markup Language (XML). Its advantages are machine and human readability and extensibility. BDML will improve the efficiency of development and evaluation of software tools for data visualization and analysis. A specification and a schema file for BDML are freely available online at http://ssbd.qbic.riken.jp/bdml/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  4. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 2 Core

    Directory of Open Access Journals (Sweden)

    Hucka Michael

    2018-03-01

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 2 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. The design of Version 2 differs from Version 1 principally in allowing new MathML constructs, making more child elements optional, and adding identifiers to all SBML elements instead of only selected elements. Other materials and software are available from the SBML project website at http://sbml.org/.
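    A skeletal SBML Level 3 document can be assembled as follows. This is abridged for illustration: a schema-valid SBML file requires further mandatory attributes (e.g. the `constant` flags on species and compartments) defined in the specification.

```python
# Sketch of an SBML Level 3 model skeleton (abridged, not schema-validated):
# one compartment, two species, and a reaction converting S1 to S2.
import xml.etree.ElementTree as ET

sbml = ET.Element("sbml", {
    "xmlns": "http://www.sbml.org/sbml/level3/version2/core",
    "level": "3", "version": "2",
})
model = ET.SubElement(sbml, "model", id="simple_decay")

compartments = ET.SubElement(model, "listOfCompartments")
ET.SubElement(compartments, "compartment", id="cell")

species = ET.SubElement(model, "listOfSpecies")
ET.SubElement(species, "species", id="S1", compartment="cell")
ET.SubElement(species, "species", id="S2", compartment="cell")

reactions = ET.SubElement(model, "listOfReactions")
rxn = ET.SubElement(reactions, "reaction", id="decay")
ET.SubElement(ET.SubElement(rxn, "listOfReactants"),
              "speciesReference", species="S1")
ET.SubElement(ET.SubElement(rxn, "listOfProducts"),
              "speciesReference", species="S2")

print(ET.tostring(sbml, encoding="unicode"))
```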

  5. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions

    Directory of Open Access Journals (Sweden)

    Hucka Michael

    2015-06-01

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.

  6. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.

  7. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 2 Core.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2018-03-09

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 2 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. The design of Version 2 differs from Version 1 principally in allowing new MathML constructs, making more child elements optional, and adding identifiers to all SBML elements instead of only selected elements. Other materials and software are available from the SBML project website at http://sbml.org/.

  8. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.

  9. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core

    Directory of Open Access Journals (Sweden)

    Hucka Michael

    2018-04-01

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Release 2 of Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. No design changes have been made to the description of models between Release 1 and Release 2; changes are restricted to the format of annotations, the correction of errata and the addition of clarifications. Other materials and software are available from the SBML project website at http://sbml.org/.

  10. Semantic markup of nouns and adjectives for the Electronic corpus of texts in Tuvan language

    Directory of Open Access Journals (Sweden)

    Bajlak Ch. Oorzhak

    2016-12-01

    The article examines the progress of semantic markup of the Electronic corpus of texts in Tuvan language (ECTTL), which is another stage of adding Tuvan texts to the database and marking up the corpus. ECTTL is a collaborative project by researchers from Tuvan State University (Research and Education Center of Turkic Studies and Department of Information Technologies). Semantic markup of Tuvan lexis will come as a search engine and reference system which will help users find text snippets containing words with desired meanings in ECTTL. The first stage of this process is setting up databases of basic lexemes of the Tuvan language. All meaningful lexemes were classified into the following semantic groups: humans, animals, objects, natural objects and phenomena, and abstract concepts. All Tuvan object nouns, as well as both descriptive and relative adjectives, were assigned to one of these lexico-semantic classes. Each class, sub-class and descriptor is tagged in Tuvan, Russian and English; these tags, in turn, will help automate searching. The databases of meaningful lexemes of the Tuvan language will also outline their lexical combinations. The automated system will contain information on semantic combinations of adjectives with nouns, adverbs with verbs, and nouns with verbs, as well as on combinations which are semantically incompatible.

  11. Extreme Markup: The Fifty US Hospitals With The Highest Charge-To-Cost Ratios.

    Science.gov (United States)

    Bai, Ge; Anderson, Gerard F

    2015-06-01

    Using Medicare cost reports, we examined the fifty US hospitals with the highest charge-to-cost ratios in 2012. These hospitals have markups (ratios of charges over Medicare-allowable costs) of approximately ten, compared to a national average of 3.4 and a mode of 2.4. Analysis of the fifty hospitals showed that forty-nine are for profit (98 percent), forty-six are owned by for-profit hospital systems (92 percent), and twenty (40 percent) operate in Florida. One for-profit hospital system owns half of these fifty hospitals. While most public and private health insurers do not use hospital charges to set their payment rates, uninsured patients are commonly asked to pay the full charges, and out-of-network patients and casualty and workers' compensation insurers are often expected to pay a large portion of the full charges. Because it is difficult for patients to compare prices, market forces fail to constrain hospital charges. Federal and state governments may want to consider limitations on the charge-to-cost ratio, some form of all-payer rate setting, or mandated price disclosure to regulate hospital markups. Project HOPE—The People-to-People Health Foundation, Inc.
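The charge-to-cost ratio studied here is simple arithmetic: gross charges divided by Medicare-allowable costs. A minimal sketch, with hypothetical dollar figures (the 3.4 national average comes from the abstract; the hospital numbers are invented):

```python
def charge_to_cost_ratio(gross_charges: float, allowable_costs: float) -> float:
    """Markup as defined in the study: charges over Medicare-allowable costs."""
    return gross_charges / allowable_costs

# A hypothetical hospital charging $10.2M against $1.0M of allowable costs
# has a markup of about ten -- the level seen in the study's top fifty --
# versus the reported national average of 3.4.
ratio = charge_to_cost_ratio(10_200_000, 1_000_000)
print(round(ratio, 1))  # -> 10.2
```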

  12. Dual scan CT image recovery from truncated projections

    Science.gov (United States)

    Sarkar, Shubhabrata; Wahi, Pankaj; Munshi, Prabhat

    2017-12-01

    There are computerized tomography (CT) scanners available commercially for imaging small objects and they are often categorized as mini-CT X-ray machines. One major limitation of these machines is their inability to scan large objects with good image quality because of the truncation of projection data. An algorithm is proposed in this work which enables such machines to scan large objects while maintaining the quality of the recovered image.

  13. Quantitative imaging studies with PET VI. Project II

    International Nuclear Information System (INIS)

    Copper, M.; Chen, C.T.; Yasillo, N.; Gatley, J.; Ortega, C.; DeJesus, O.; Friedman, A.

    1985-01-01

    This project is focused upon the development of hardware and software to improve PET image analysis and upon clinical applications of PET. In this report the laboratory's progress on various attenuation correction methods for brain imaging is described. The use of time-of-flight information for image reconstruction is evaluated. The location of dopamine D1 and D2 receptors in the brain was found to be largely in the basal ganglia. 1 tab. (DT)

  14. Reconstruction of CT images by the Bayes- back projection method

    CERN Document Server

    Haruyama, M; Takase, M; Tobita, H

    2002-01-01

    In the course of research on quantitative assay for non-destructive measurement of radioactive waste, the authors have developed a unique program based on Bayesian theory for reconstruction of transmission computed tomography (TCT) images. Reconstruction of cross-section images in CT usually employs the Filtered Back Projection method. The new image reconstruction program reported here is based on the Bayesian Back Projection method, and it iteratively improves the image at every step of measurement. That is, the method can promptly display a cross-section image corresponding to each angled projection data set as it is measured, so an improved cross-section view can be observed in almost real time as each projection is incorporated. From the basic theory of the Bayesian Back Projection method, it can be applied not only to CT of the 1st, 2nd, and 3rd generations. This report deals with a reconstruction program of cross-section images in the CT of ...
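The key idea in this abstract, refining the reconstructed image each time a new projection is measured, can be illustrated with a generic algebraic reconstruction (ART/Kaczmarz) sketch. This is a stand-in for illustration, not the authors' Bayesian method: the "image" is a flattened 2x2 grid and each "projection" is a row or column sum.

```python
# Generic ART update: distribute the residual of one measured ray sum
# evenly over the pixels that the ray crosses.
def art_update(image, ray, measured, relaxation=1.0):
    current = sum(image[i] for i in ray)
    correction = relaxation * (measured - current) / len(ray)
    for i in ray:
        image[i] += correction

# True 2x2 image (flattened) and its four ray sums: two rows, two columns.
true_image = [1.0, 2.0, 3.0, 4.0]
rays = [(0, 1), (2, 3), (0, 2), (1, 3)]
measurements = [sum(true_image[i] for i in ray) for ray in rays]

image = [0.0, 0.0, 0.0, 0.0]
for _ in range(5):                       # sweep the projections repeatedly
    for ray, m in zip(rays, measurements):
        art_update(image, ray, m)        # image improves after every ray

print(image)  # -> [1.0, 2.0, 3.0, 4.0]
```

On this tiny consistent system the iteration recovers the true image exactly; the point is only that each incoming projection immediately updates the current estimate, which is what makes "almost real time" display possible.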

  15. Field Data and the Gas Hydrate Markup Language

    Directory of Open Access Journals (Sweden)

    Ralf Löwner

    2007-06-01

    Data and information exchange are crucial for any kind of scientific research activity and are becoming more and more important. The comparison between different data sets and different disciplines creates new data, adds value, and finally accumulates knowledge. The distribution and accessibility of research results is also an important factor for international work. The gas hydrate research community is dispersed across the globe and therefore a common technical communication language or format is strongly demanded. The CODATA Gas Hydrate Data Task Group is creating the Gas Hydrate Markup Language (GHML), a standard based on the Extensible Markup Language (XML), to enable the transport, modeling, and storage of all manner of objects related to gas hydrate research. Because GHML encodes information as text rather than binary data, its content is easily readable. The result of these investigations is a custom-designed application schema, which describes the features, elements, and their properties, defining all aspects of gas hydrates. One of the components of GHML is the "Field Data" module, which is used for all data and information coming from the field. It considers international standards, particularly the standards defined by the W3C (World Wide Web Consortium) and the OGC (Open Geospatial Consortium). Various related standards were analyzed and compared with our requirements (in particular the Geography Markup Language (GML, ISO 19136) and the whole ISO 19000 series). However, the requirements demanded a quick solution and an XML application schema readable by any scientist without a background in information technology. Therefore, ideas, concepts and definitions from these standards have been used to build up the modules of GHML without importing any of these markup languages. This enables a comprehensive schema and simple use.

  16. Are the determinants of markup size industry-specific? The case of Slovenian manufacturing firms

    Directory of Open Access Journals (Sweden)

    Ponikvar Nina

    2011-01-01

    The aim of this paper is to identify factors that affect the pricing policy in Slovenian manufacturing firms in terms of the markup size and, most of all, to explicitly account for the possibility of differences in pricing procedures among manufacturing industries. Accordingly, the analysis of the dynamic panel is carried out on an industry-by-industry basis, allowing the coefficients on the markup determinants to vary across industries. We find that the oligopoly theory of markup determination for the most part holds for the manufacturing sector as a whole, although large variability in markup determinants exists across industries within the Slovenian manufacturing. Our main conclusion is that each industry should be investigated separately in detail in order to assess the precise role of markup factors in the markup-determination process.

  17. The Commercial Office Market and the Markup for Full Service Leases

    OpenAIRE

    Jonathan A. Wiley; Yu Liu; Dongshin Kim; Tom Springer

    2014-01-01

    Because landlords assume all of the operating expense risk, rents for gross leases exceed those for net leases. The markup, or spread, for gross leases varies between properties and across markets. Specifically, the markup is expected to increase with the cost of real estate services at the property, and to be influenced by market conditions. A matching procedure is applied to measure the services markup as the percentage difference between the actual rent on a gross lease relative to the act...

  18. The Price-Marginal Cost Markup and its Determinants in U.S. Manufacturing

    OpenAIRE

    Mazumder, Sandeep

    2009-01-01

    This paper estimates the price-marginal cost markup for US manufacturing using a new methodology. Most existing techniques for estimating the markup are variants of Hall's (1988) framework involving the manipulation of the Solow Residual. However, this paper argues that this notion is based on the unreasonable assumption that labor can be costlessly adjusted at a fixed wage rate. By relaxing this assumption, we are able to derive a generalized markup index, which when estimated using manufactu...

  19. The Van Sant AVHRR image projected onto a rhombicosidodecahedron

    Science.gov (United States)

    Baron, Michael; Morain, Stan

    1996-03-01

    IDEATION, a design and development corporation, Santa Fe, New Mexico, has modeled Tom Van Sant's ``The Earth From Space'' image to a rhombicosidodecahedron. ``The Earth from Space'' image, produced by the Geosphere® Project in Santa Monica, California, was developed from hundreds of AVHRR pictures and published as a Mercator projection. IDEATION, utilizing a digitized Robinson Projection, fitted the image to foldable paper components which, when interconnected by means of a unique tabular system, result in a rhombicosidodecahedron representation of the Earth exposing 30 square, 20 triangular, and 12 pentagonal faces. Because the resulting model is not spherical, the borders of the represented features were rectified to match the intersecting planes of the model's faces. The resulting product will be licensed and commercially produced for use by elementary and secondary students. Market research indicates the model will be used in both the demonstration of geometric principles and the teaching of fundamental spatial relations of the Earth's lands and oceans.

  20. Projection Operators and Moment Invariants to Image Blurring

    Czech Academy of Sciences Publication Activity Database

    Flusser, Jan; Suk, Tomáš; Boldyš, Jiří; Zitová, Barbara

    2015-01-01

    Roč. 37, č. 4 (2015), s. 786-802 ISSN 0162-8828 R&D Projects: GA ČR GA13-29225S; GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords : Blurred image * N-fold rotation symmetry * projection operators * image moments * moment invariants * blur invariants * object recognition Subject RIV: JD - Computer Applications, Robotics Impact factor: 6.077, year: 2015 http://library.utia.cas.cz/separaty/2014/ZOI/flusser-0434521.pdf

  1. The gel electrophoresis markup language (GelML) from the Proteomics Standards Initiative.

    Science.gov (United States)

    Gibson, Frank; Hoogland, Christine; Martinez-Bartolomé, Salvador; Medina-Aunon, J Alberto; Albar, Juan Pablo; Babnigg, Gyorgy; Wipat, Anil; Hermjakob, Henning; Almeida, Jonas S; Stanislaus, Romesh; Paton, Norman W; Jones, Andrew R

    2010-09-01

    The Human Proteome Organisation's Proteomics Standards Initiative has developed the GelML (gel electrophoresis markup language) data exchange format for representing gel electrophoresis experiments performed in proteomics investigations. The format closely follows the reporting guidelines for gel electrophoresis, which are part of the Minimum Information About a Proteomics Experiment (MIAPE) set of modules. GelML supports the capture of metadata (such as experimental protocols) and data (such as gel images) resulting from gel electrophoresis so that laboratories can be compliant with the MIAPE Gel Electrophoresis guidelines, while allowing such data sets to be exchanged or downloaded from public repositories. The format is sufficiently flexible to capture data from a broad range of experimental processes, and complements other PSI formats for MS data and the results of protein and peptide identifications to capture entire gel-based proteome workflows. GelML has resulted from the open standardisation process of PSI consisting of both public consultation and anonymous review of the specifications.

  2. Discriminating Projections for Estimating Face Age in Wild Images

    Energy Technology Data Exchange (ETDEWEB)

    Tokola, Ryan A [ORNL; Bolme, David S [ORNL; Ricanek, Karl [ORNL; Barstow, Del R [ORNL; Boehnen, Chris Bensing [ORNL

    2014-01-01

    We introduce a novel approach to estimating the age of a human from a single uncontrolled image. Current face age estimation algorithms work well in highly controlled images, and some are robust to changes in illumination, but it is usually assumed that images are close to frontal. This bias is clearly seen in the datasets that are commonly used to evaluate age estimation, which either entirely or mostly consist of frontal images. Using pose-specific projections, our algorithm maps image features into a pose-insensitive latent space that is discriminative with respect to age. Age estimation is then performed using a multi-class SVM. We show that our approach outperforms other published results on the Images of Groups dataset, which is the only age-related dataset with a non-trivial number of off-axis face images, and that we are competitive with recent age estimation algorithms on the mostly-frontal FG-NET dataset. We also experimentally demonstrate that our feature projections introduce insensitivity to pose.

  3. Integrated variable projection approach (IVAPA) for parallel magnetic resonance imaging.

    Science.gov (United States)

    Zhang, Qiao; Sheng, Jinhua

    2012-10-01

    Parallel magnetic resonance imaging (pMRI) is a fast method which requires algorithms for reconstructing an image from a small number of measured k-space lines. The accurate estimation of the coil sensitivity functions is still a challenging problem in parallel imaging. The joint estimation of the coil sensitivity functions and the desired image has recently been proposed to improve the situation by iteratively optimizing both the coil sensitivity functions and the image reconstruction. It regards both the coil sensitivities and the desired image as unknowns to be solved for jointly. In this paper, we propose an integrated variable projection approach (IVAPA) for pMRI, which integrates the two individual processing steps (coil sensitivity estimation and image reconstruction) into a single processing step to improve the accuracy of the coil sensitivity estimation using the variable projection approach. The method is demonstrated to give an optimal solution with considerably reduced artifacts for high reduction factors and a low number of auto-calibration signal (ACS) lines, and our implementation has a fast convergence rate. The performance of the proposed method is evaluated using a set of in vivo experiment data. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Object-Image Correspondence for Algebraic Curves under Projections

    Directory of Open Access Journals (Sweden)

    Joseph M. Burdis

    2013-03-01

    We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of a number of real parameters that need to be eliminated in order to establish existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of signature construction that has been used to solve the equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

  5. Field Markup Language: biological field representation in XML.

    Science.gov (United States)

    Chang, David; Lovell, Nigel H; Dokos, Socrates

    2007-01-01

    With an ever increasing number of biological models available on the internet, a standardized modeling framework is required to allow information to be accessed or visualized. Based on the Physiome Modeling Framework, the Field Markup Language (FML) is being developed to describe and exchange field information for biological models. In this paper, we describe the basic features of FML, its supporting application framework and its ability to incorporate CellML models to construct tissue-scale biological models. As a typical application example, we present a spatially-heterogeneous cardiac pacemaker model which utilizes both FML and CellML to describe and solve the underlying equations of electrical activation and propagation.

  6. Experimental Applications of Automatic Test Markup Language (ATML)

    Science.gov (United States)

    Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris

    2012-01-01

    The authors describe challenging use-cases for Automatic Test Markup Language (ATML), and evaluate solutions. The first case uses ATML Test Results to deliver active features to support test procedure development and test flow, and bridging mixed software development environments. The second case examines adding attributes to Systems Modelling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.

  7. Color image quality in projection displays: a case study

    Science.gov (United States)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjovik University College was tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflected or other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11; for another, from 13 to 6. Blue colors have the largest variations among the projection displays and make them

  8. Geospatial Visualization of Scientific Data Through Keyhole Markup Language

    Science.gov (United States)

    Wernecke, J.; Bailey, J. E.

    2008-12-01

    The development of virtual globes has provided a fun and innovative tool for exploring the surface of the Earth. However, it has been the paralleling maturation of Keyhole Markup Language (KML) that has created a new medium and perspective through which to visualize scientific datasets. Originally created by Keyhole Inc., and then acquired by Google in 2004, in 2007 KML was given over to the Open Geospatial Consortium (OGC). It became an OGC international standard on 14 April 2008, and has subsequently been adopted by all major geobrowser developers (e.g., Google, Microsoft, ESRI, NASA) and many smaller ones (e.g., Earthbrowser). By making KML a standard at a relatively young stage in its evolution, developers of the language are seeking to avoid the issues that plagued the early World Wide Web and development of Hypertext Markup Language (HTML). The popularity and utility of Google Earth, in particular, has been enhanced by KML features such as the Smithsonian volcano layer and the dynamic weather layers. Through KML, users can view real-time earthquake locations (USGS), view animations of polar sea-ice coverage (NSIDC), or read about the daily activities of chimpanzees (Jane Goodall Institute). Perhaps even more powerful is the fact that any users can create, edit, and share their own KML, with no or relatively little knowledge of manipulating computer code. We present an overview of the best current scientific uses of KML and a guide to how scientists can learn to use KML themselves.
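To make concrete how little markup a working KML file needs, the sketch below builds a one-Placemark document with Python's standard library. The element names and the 2.2 namespace follow the OGC KML standard; the placemark itself (a point near Kilauea, echoing the volcano-layer example above) is invented for illustration.

```python
import xml.etree.ElementTree as ET

# OGC KML 2.2 namespace.
KML_NS = "http://www.opengis.net/kml/2.2"

kml = ET.Element("kml", xmlns=KML_NS)
placemark = ET.SubElement(kml, "Placemark")
ET.SubElement(placemark, "name").text = "Example volcano"
point = ET.SubElement(placemark, "Point")
# KML coordinates are written "longitude,latitude[,altitude]".
ET.SubElement(point, "coordinates").text = "-155.29,19.41,0"

doc = ET.tostring(kml, encoding="unicode")
print(doc)
```

Saved with a .kml extension, a document like this opens directly in any of the geobrowsers mentioned above, which is why scientists with little programming background can still publish layers.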

  9. A study of images of Projective Angles of pulmonary veins

    Energy Technology Data Exchange (ETDEWEB)

    Wang Jue [Beijing Anzhen Hospital, Beijing (China); Zhaoqi, Zhang [Beijing Anzhen Hospital, Beijing (China)], E-mail: zhaoqi5000@vip.sohu.com; Yu Wei; Miao Cuilian; Yan Zixu; Zhao Yike [Beijing Anzhen Hospital, Beijing (China)

    2009-09-15

    Aims: In magnetic resonance and computed tomography (CT) images there are visible angles between the pulmonary veins and the coronal, transverse or sagittal sections of the body. In this study these angles are measured and defined as Projective Angles of the pulmonary veins. Several possible influential factors and the characteristics of their distribution are studied and analyzed for a better understanding of this imaging anatomic character of the pulmonary veins. It could serve as the anatomic basis for correctly adjusting the angle of the central X-ray in angiography of the pulmonary veins during catheter ablation of atrial fibrillation (AF). Method: Images of contrast-enhanced magnetic resonance angiography (CEMRA) and contrast-enhanced computed tomography (CECT) of the left atrium and pulmonary veins of 137 healthy subjects and patients with atrial fibrillation (AF) were post-processed, and Projective Angles to the coronal and transverse sections were measured and analyzed statistically. Result: The Projective Angles of the pulmonary veins are a real and stable imaging anatomic characteristic of the pulmonary veins. The statistical distribution of the variables is relatively concentrated, and is fairly well represented by the average value. It is possible to improve the angle of the central X-ray according to this average value in selective angiography of the pulmonary veins during catheter ablation of AF.

  10. Parametric image reconstruction using spectral analysis of PET projection data

    International Nuclear Information System (INIS)

    Meikle, Steven R.; Matthews, Julian C.; Cunningham, Vincent J.; Bailey, Dale L.; Livieratos, Lefteris; Jones, Terry; Price, Pat

    1998-01-01

    Spectral analysis is a general modelling approach that enables calculation of parametric images from reconstructed tracer kinetic data independent of an assumed compartmental structure. We investigated the validity of applying spectral analysis directly to projection data, motivated by the advantages that: (i) the number of reconstructions is reduced by an order of magnitude and (ii) iterative reconstruction becomes practical, which may improve signal-to-noise ratio (SNR). A dynamic software phantom with typical 2-[11C]thymidine kinetics was used to compare projection-based and image-based methods and to assess bias-variance trade-offs using iterative expectation maximization (EM) reconstruction. We found that the two approaches are not exactly equivalent due to properties of the non-negative least-squares algorithm. However, the differences are small (for K1 and, to a lesser extent, VD). The optimal number of EM iterations was 15-30, with up to a two-fold improvement in SNR over filtered back projection. We conclude that projection-based spectral analysis with EM reconstruction yields accurate parametric images with high SNR and has potential application to a wide range of positron emission tomography ligands. (author)

  11. Planned growth as a determinant of the markup: the case of Slovenian manufacturing

    Directory of Open Access Journals (Sweden)

    Maks Tajnikar

    2009-11-01

    The paper follows the idea of heterodox economists that a cost-plus price is above all a reproductive price and growth price. The authors apply a firm-level model of markup determination which, in line with theory and empirical evidence, contains proposed firm-specific determinants of the markup, including the firm’s planned growth. The positive firm-level relationship between growth and markup that is found in data for Slovenian manufacturing firms implies that retained profits gathered via the markup are an important source of growth financing and that the investment decisions of Slovenian manufacturing firms affect their pricing policy and decisions on the markup size as proposed by Post-Keynesian theory. The authors thus conclude that at least a partial trade-off between a firm’s growth and competitive outcome exists in Slovenian manufacturing.

  12. Comparison of power spectra for tomosynthesis projections and reconstructed images

    International Nuclear Information System (INIS)

    Engstrom, Emma; Reiser, Ingrid; Nishikawa, Robert

    2009-01-01

    Burgess et al. [Med. Phys. 28, 419-437 (2001)] showed that the power spectrum of mammographic breast background follows a power law and that lesion detectability is affected by the power-law exponent β which measures the amount of structure in the background. Following the study of Burgess et al., the authors measured and compared the power-law exponent of mammographic backgrounds in tomosynthesis projections and reconstructed slices to investigate the effect of tomosynthesis imaging on background structure. Our data set consisted of 55 patient cases. For each case, regions of interest (ROIs) were extracted from both projection images and reconstructed slices. The periodogram of each ROI was computed by taking the squared modulus of the Fourier transform of the ROI. The power-law exponent was determined for each periodogram and averaged across all ROIs extracted from all projections or reconstructed slices for each patient data set. For the projections, the mean β averaged across the 55 cases was 3.06 (standard deviation of 0.21), while it was 2.87 (0.24) for the corresponding reconstructions. The difference in β for a given patient between the projection ROIs and the reconstructed ROIs averaged across the 55 cases was 0.194, which was statistically significant (p<0.001). The 95% CI for the difference between the mean value of β for the projections and reconstructions was [0.170, 0.218]. The results are consistent with the observation that the amount of breast structure in the tomosynthesis slice is reduced compared to projection mammography and that this may lead to improved lesion detectability.
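The β measurement this abstract relies on is, in practice, a straight-line fit in log-log space: if P(f) ~ 1/f^β, then log P is linear in log f with slope -β. A sketch of that fitting step, applied to a synthetic periodogram with a known exponent of 3 (close to the ~3.06 reported for the projections); the estimator is generic, and the data are made up:

```python
import math

def fit_power_law_exponent(freqs, power):
    """Least-squares slope of log(power) vs log(freq); beta = -slope."""
    logf = [math.log(f) for f in freqs]
    logp = [math.log(p) for p in power]
    n = len(logf)
    mean_f = sum(logf) / n
    mean_p = sum(logp) / n
    slope = (sum((lf - mean_f) * (lp - mean_p) for lf, lp in zip(logf, logp))
             / sum((lf - mean_f) ** 2 for lf in logf))
    return -slope

freqs = [0.01 * k for k in range(1, 101)]   # spatial frequencies (arbitrary units)
power = [f ** -3.0 for f in freqs]          # exact power law, beta = 3
print(round(fit_power_law_exponent(freqs, power), 2))  # -> 3.0
```

On real periodograms the fit is noisy, so as in the study one averages β over many regions of interest before comparing projections against reconstructions.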

  13. Root system markup language: toward a unified root architecture description language.

    Science.gov (United States)

    Lobet, Guillaume; Pound, Michael P; Diener, Julien; Pradal, Christophe; Draye, Xavier; Godin, Christophe; Javaux, Mathieu; Leitner, Daniel; Meunier, Félicien; Nacry, Philippe; Pridmore, Tony P; Schnepf, Andrea

    2015-03-01

    The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflows. © 2015 American Society of Plant Biologists. All Rights Reserved.
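A minimal sketch of what an RSML-style file looks like and how a consumer tool can recover root geometry from it. The element layout (scene/plant/root/geometry/polyline/point) follows the description above, but the ids, labels, and coordinates are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Build a minimal RSML-like document: one plant with one primary root
# stored as a polyline of (x, y) points.
rsml = ET.Element("rsml")
scene = ET.SubElement(rsml, "scene")
plant = ET.SubElement(scene, "plant", id="1")
root = ET.SubElement(plant, "root", id="1.1", label="primary")
polyline = ET.SubElement(ET.SubElement(root, "geometry"), "polyline")
for x, y in [(10.0, 5.0), (10.4, 12.2), (11.1, 20.7)]:
    ET.SubElement(polyline, "point", x=str(x), y=str(y))

# Round-trip through serialized XML, as another tool in a pipeline would,
# then recover every root path with a simple XPath-style query.
reparsed = ET.fromstring(ET.tostring(rsml))
points = [(float(p.get("x")), float(p.get("y")))
          for p in reparsed.findall(".//root/geometry/polyline/point")]
print(len(points))  # -> 3
```

The round-trip is the point: any tool that can parse XML can consume the geometry, which is what makes a common format portable between analysis packages.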

  14. Measuring Brand Image Effects of Flagship Projects for Place Brands

    DEFF Research Database (Denmark)

    Zenker, Sebastian; Beckmann, Suzanne C.

    2013-01-01

    Cities invest large sums of money in ‘flagship projects’, with the aim of not only developing the city as such, but also changing the perceptions of the city brand towards a desired image. The city of Hamburg, Germany, is currently investing €575 million in order to build a new symphony hall...... (Elbphilharmonie), €400 million to develop the ‘International Architectural Fair’ and it is also considering candidature again for the ‘Olympic Games’ in 2024/2028. As assessing the image effects of such projects is rather difficult, this article introduces an improved version of the Brand Concept Map approach......, which was originally developed for product brands. An experimental design was used to first measure the Hamburg brand as such and then the changes in the brand perceptions after priming the participants (N=209) for one of the three different flagship projects. The findings reveal several important...

  15. Projection model for flame chemiluminescence tomography based on lens imaging

    Science.gov (United States)

    Wan, Minggang; Zhuang, Jihui

    2018-04-01

    For flame chemiluminescence tomography (FCT) based on lens imaging, the projection model is essential because it formulates the mathematical relation between the flame projections captured by cameras and the chemiluminescence field, and, through this relation, the field is reconstructed. This work proposes the blurry-spot (BS) model, which rests on more general assumptions and achieves higher accuracy than the widely applied line-of-sight model. By combining the geometrical camera model and the thin-lens equation, the BS model takes into account the perspective effect of the camera lens; by combining ray-tracing techniques and Monte Carlo simulation, it also considers the inhomogeneous distribution of captured radiance on the image plane. The performance of these two models in FCT was numerically compared, and the results showed that using the BS model could lead to better reconstruction quality over wider application ranges.

  16. Earth Science Markup Language: Transitioning From Design to Application

    Science.gov (United States)

    Moe, Karen; Graves, Sara; Ramachandran, Rahul

    2002-01-01

    The primary objective of the proposed Earth Science Markup Language (ESML) research is to transition from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1. Refinement of the ESML schema and software libraries in cooperation with the user community. 2. Application of the ESML schema and software libraries to a variety of Earth science data sets and analysis tools. 3. Development of supporting prototype software for enhanced ease of use. 4. Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate. 5. Widespread publication of the ESML approach, schema, and software.

  17. Pathology data integration with eXtensible Markup Language.

    Science.gov (United States)

    Berman, Jules J

    2005-02-01

    It is impossible to overstate the importance of XML (eXtensible Markup Language) as a data organization tool. With XML, pathologists can annotate all of their data (clinical and anatomic) in a format that can transform every pathology report into a database, without compromising narrative structure. The purpose of this manuscript is to provide an overview of XML for pathologists. Examples will demonstrate how pathologists can use XML to annotate individual data elements and to structure reports in a common format that can be merged with other XML files or queried using standard XML tools. This manuscript gives pathologists a glimpse into how XML allows pathology data to be linked to other types of biomedical data and reduces our dependence on centralized proprietary databases.
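The idea of annotating report elements so that a narrative report can also be queried like a database can be sketched with standard XML tools. The element names and diagnosis code below are invented for illustration and are not drawn from any official pathology schema:

```python
import xml.etree.ElementTree as ET

# A hypothetical XML-annotated pathology report fragment: structured
# data elements sit alongside the preserved narrative text.
report = ET.fromstring("""
<pathology_report>
  <specimen site="breast" procedure="core biopsy"/>
  <diagnosis code="8500/3">infiltrating duct carcinoma</diagnosis>
  <narrative>Sections show infiltrating duct carcinoma, grade 2.</narrative>
</pathology_report>""")

# Annotated elements can be queried directly with standard XML tools.
dx = report.find("diagnosis")
print(dx.get("code"), dx.text)  # -> 8500/3 infiltrating duct carcinoma
```

Because every report shares the same tag vocabulary, many such files can be merged or queried together without a centralized proprietary database, which is the point the paper makes.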

  18. Variation in markup of general surgical procedures by hospital market concentration.

    Science.gov (United States)

    Cerullo, Marcelo; Chen, Sophia Y; Dillhoff, Mary; Schmidt, Carl R; Canner, Joseph K; Pawlik, Timothy M

    2018-04-01

    Increasing hospital market concentration (with concomitantly decreasing hospital market competition) may be associated with rising hospital prices. Hospital markup - the relative increase in price over costs - has been associated with greater hospital market concentration. Patients undergoing a cardiothoracic or gastrointestinal procedure in the 2008-2011 Nationwide Inpatient Sample (NIS) were identified and linked to Hospital Market Structure Files. The association between market concentration, hospital markup and hospital for-profit status was assessed using mixed-effects log-linear models. A weighted total of 1,181,936 patients were identified. In highly concentrated markets, private for-profit status was associated with an 80.8% higher markup compared to public/private not-for-profit status (95% CI: +69.5% to +96.9%; p < 0.001), and with a higher markup compared to public/private not-for-profit status in unconcentrated markets as well (95% CI: +45.4% to +81.1%; p < 0.001). Government and private not-for-profit hospitals employed lower markups in more concentrated markets, whereas private for-profit hospitals employed higher markups in more concentrated markets. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Data on the interexaminer variation of minutia markup on latent fingerprints.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2016-09-01

    The data in this article supports the research paper entitled "Interexaminer variation of minutia markup on latent fingerprints" [1]. The data in this article describes the variability in minutia markup during both analysis of the latents and comparison between latents and exemplars. The data was collected in the "White Box Latent Print Examiner Study," in which each of 170 volunteer latent print examiners provided detailed markup documenting their examinations of latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. Each examiner examined 22 latent-exemplar pairs; an average of 12 examiners marked each latent.

  20. A standard MIGS/MIMS compliant XML Schema: toward the development of the Genomic Contextual Data Markup Language (GCDML).

    Science.gov (United States)

    Kottmann, Renzo; Gray, Tanya; Murphy, Sean; Kagan, Leonid; Kravitz, Saul; Lombardot, Thierry; Field, Dawn; Glöckner, Frank Oliver

    2008-06-01

    The Genomic Contextual Data Markup Language (GCDML) is a core project of the Genomic Standards Consortium (GSC) that implements the "Minimum Information about a Genome Sequence" (MIGS) specification and its extension, the "Minimum Information about a Metagenome Sequence" (MIMS). GCDML is an XML Schema for generating MIGS/MIMS compliant reports for data entry, exchange, and storage. When mature, this sample-centric, strongly-typed schema will provide a diverse set of descriptors for describing the exact origin and processing of a biological sample, from sampling to sequencing, and subsequent analysis. Here we describe the need for such a project, outline design principles required to support the project, and make an open call for participation in defining the future content of GCDML. GCDML is freely available, and can be downloaded, along with documentation, from the GSC Web site (http://gensc.org).

  1. Qualitative and quantitative analysis of reconstructed images using projections with noise

    International Nuclear Information System (INIS)

    Lopes, R.T.; Assis, J.T. de

    1988-01-01

    The reconstruction of a two-dimensional image from one-dimensional projections using an analytic ''convolution method'' algorithm is simulated on a microcomputer. This work analyses the effects on the reconstructed image of the number of projections and of the noise level added to the projection data. Qualitative and quantitative (distortion and image noise) comparisons were made between the original image and the reconstructed images. (author) [pt

  2. Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm

    KAUST Repository

    Desmal, Abdulla

    2017-04-03

    An efficient electromagnetic inversion scheme for imaging sparse 3-D domains is proposed. The scheme achieves its efficiency and accuracy by integrating two concepts. First, the nonlinear optimization problem is constrained using L₀ or L₁-norm of the solution as the penalty term to alleviate the ill-posedness of the inverse problem. The resulting Tikhonov minimization problem is solved using nonlinear Landweber iterations (NLW). Second, the efficiency of the NLW is significantly increased using a steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without sacrificing the convergence of the algorithm. Numerical results demonstrate the efficiency and accuracy of the proposed imaging scheme in reconstructing sparse 3-D dielectric profiles.
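The thresholded iteration described above, a gradient (Landweber) step followed by a projection that enforces sparsity, can be sketched on a toy linear problem. The matrix, threshold, and step size below are invented for illustration and are far simpler than the 3-D electromagnetic operator of the paper:

```python
def sparse_landweber(A, b, tau, step, iters):
    """Thresholded Landweber iterations for min ||Ax - b||^2 with a
    sparsity constraint: after each gradient step, entries with
    magnitude below tau are set to zero (hard thresholding)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(m)]                            # residual Ax - b
        g = [sum(A[i][j] * r[i] for i in range(m))
             for j in range(n)]                            # gradient A^T r
        x = [xi - step * gi for xi, gi in zip(x, g)]       # descent step
        x = [xi if abs(xi) >= tau else 0.0 for xi in x]    # sparsity projection
    return x

# Toy problem: the true profile (2, 0, 0) is sparse.
A = [[1.0, 0.2, 0.1],
     [0.2, 1.0, 0.3],
     [0.1, 0.3, 1.0]]
b = [2.0, 0.4, 0.2]   # A applied to (2, 0, 0)
x = sparse_landweber(A, b, tau=0.1, step=0.5, iters=200)
print([round(v, 3) for v in x])  # -> [2.0, 0.0, 0.0]
```

As in the paper, the threshold level and step size must be chosen together: too large a threshold kills true components, too large a step breaks convergence.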

  3. Image reconstruction from multiple fan-beam projections

    International Nuclear Information System (INIS)

    Jelinek, J.; Overton, T.R.

    1984-01-01

    Special-purpose third-generation fan-beam CT systems can be greatly simplified by limiting the number of detectors, but this requires a different mode of data collection to provide a set of projections appropriate to the required spatial resolution in the reconstructed image. Repeated rotation of the source-detector fan, combined with a shift of the detector array and perhaps an offset of the source with respect to the fan's axis after each 360° rotation (cycle), provides a fairly general pattern of projection space filling. The authors investigated the problem of optimal data-collection geometry for a multiple-rotation fan-beam scanner and of the corresponding reconstruction algorithm.

  4. IMAGE CONSTRUCTION TO AUTOMATION OF PROJECTIVE TECHNIQUES FOR PSYCHOPHYSIOLOGICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Natalia Pavlova

    2018-04-01

    This article presents a solution for automating the assessment, in psychological analysis, of drawings that a person composes from an available set of templates. Such automation makes it possible to reveal disorders of a person's mentality more effectively. In particular, the solution can be used in work with children, who possess well-developed figurative thinking but are not yet capable of stating their thoughts and experiences precisely. To automate testing with a projective technique, we construct an interactive environment for visualizing compositions of several images, which are then analysed

  5. FuGEFlow: data model and markup language for flow cytometry

    Directory of Open Access Journals (Sweden)

    Manion Frank J

    2009-06-01

    Additional project documentation, including reusable design patterns and a guide for setting up a development environment, was contributed back to the FuGE project. Conclusion: We have shown that an extension of FuGE can be used to transform minimum information requirements in natural language to markup language in XML. Extending FuGE required significant effort, but in our experience the benefits outweighed the costs. The FuGEFlow is expected to play a central role in describing flow cytometry experiments and ultimately facilitating data exchange including public flow cytometry repositories currently under development.

  6. FuGEFlow: data model and markup language for flow cytometry.

    Science.gov (United States)

    Qian, Yu; Tchuvatkina, Olga; Spidlen, Josef; Wilkinson, Peter; Gasparetto, Maura; Jones, Andrew R; Manion, Frank J; Scheuermann, Richard H; Sekaly, Rafick-Pierre; Brinkman, Ryan R

    2009-06-16

    Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts being taken to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high throughput experiments across different biological technologies. We have extended FuGE object model to accommodate flow cytometry data and metadata. We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt compliant experiment description. The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets. Additional project documentation, including

  7. Final Project Report: Imaging Fault Zones Using a Novel Elastic Reverse-Time Migration Imaging Technique

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chen, Ting [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tan, Sirui [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lin, Youzuo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gao, Kai [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-10

    Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply-dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field obtained using our seismic inversion and migration imaging algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results of 3D surface seismic data from the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada, Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.

  8. QUESTION ANSWERING SYSTEM BASED ON ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE AS AN INFORMATION MEDIUM

    Directory of Open Access Journals (Sweden)

    Fajrin Azwary

    2016-04-01

    Artificial intelligence technology can nowadays be deployed in a variety of forms, such as chatbots, and with a variety of methods, one of them being the Artificial Intelligence Markup Language (AIML). AIML uses template matching, comparing specific patterns against a database. The AIML template design process begins with determining the necessary information, which is then formed into questions adapted to the AIML pattern format. The results of the study show that a Question-Answering System built as a chatbot using the Artificial Intelligence Markup Language is able to communicate and deliver information. Keywords: Artificial Intelligence, Template Matching, Artificial Intelligence Markup Language, AIML
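The template-matching behaviour of AIML categories can be sketched in a few lines of Python. The patterns and replies below are invented, and a real AIML engine works on XML category files rather than a dict; this only illustrates the matching idea:

```python
import re

# Toy AIML-style category set: a pattern (with "*" as wildcard)
# mapped to a response template. Contents are illustrative.
categories = {
    "WHAT IS *": "Let me look up information about {0}.",
    "HELLO":     "Hello! How can I help you?",
}

def respond(user_input):
    """Return the first template whose pattern matches the input,
    substituting wildcard captures into the template."""
    text = user_input.upper().strip()
    for pattern, template in categories.items():
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text)
        if m:
            return template.format(*m.groups())
    return "I do not understand."

print(respond("hello"))         # -> Hello! How can I help you?
print(respond("What is AIML"))  # -> Let me look up information about AIML.
```

Upper-casing the input mirrors AIML's case-insensitive pattern convention; unmatched inputs fall through to a default reply.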

  9. Nonnegative least-squares image deblurring: improved gradient projection approaches

    Science.gov (United States)

    Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.

    2010-02-01

    The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Even if they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of the gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears definitely the most efficient one.
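The projected Landweber (PL) iteration named above, a plain gradient step followed by projection onto the nonnegative orthant, can be sketched on a toy 1-D deblurring problem. The blur matrix, step size, and iteration count are illustrative:

```python
def projected_landweber(A, b, step, iters):
    """Projected Landweber for min ||Ax - b||^2 subject to x >= 0:
    gradient step, then projection by clipping negatives to zero."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(m)]                              # residual
        g = [sum(A[i][j] * r[i] for i in range(m))
             for j in range(n)]                              # gradient A^T r
        x = [max(0.0, xi - step * gi) for xi, gi in zip(x, g)]
    return x

# 1-D "deblurring" toy: a 3-tap blur applied to the signal (0, 1, 0).
A = [[0.50, 0.25, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.25, 0.50]]
b = [0.25, 0.5, 0.25]            # blurred version of (0, 1, 0)
x = projected_landweber(A, b, step=1.0, iters=500)
print([round(v, 2) for v in x])  # -> [0.0, 1.0, 0.0]
```

The slow convergence visible here (hundreds of iterations for a 3-pixel problem) is exactly the drawback that the step-length selection and line-search accelerations discussed in the paper are meant to remove.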

  10. Remote Imaging Projects In Research And Astrophotography With Starpals

    Science.gov (United States)

    Fischer, Audrey; Kingan, J.

    2008-05-01

    StarPals is a nascent non-profit organization with the goal of providing opportunities for international collaboration between students of all ages within space science research. We believe that by encouraging an interest in the cosmos, the one thing that is truly Universal, from a young age, students will not only further their knowledge of and interest in science but will learn valuable teamwork and life skills. The goal is to foster respect, understanding and appreciation of cultural diversity among all StarPals participants, whether students, teachers, or mentors. StarPals aims to inspire students by providing opportunities in which, more than simply visualizing themselves as research scientists, they can actually become one. The technologies of robotic telescopes, videoconferencing, and online classrooms are expanding the possibilities like never before. In honor of IYA2009, StarPals would like to encourage 400 schools to participate on a global scale in astronomy/cosmology research on various concurrent projects. We will offer in-person or online workshops and training sessions to teach the teachers. We will be seeking publication in scientific journals for some student research. For our current project, the Double Stars Challenge, students use the robotic telescopes to take a series of four images of one of 30 double stars from a list furnished by the US Naval Observatory and then use MPO Canopus software to take distance and position angle measurements. StarPals provides students with hands-on training, telescope time, and software to complete the imaging and measuring. A paper will be drafted from our research data and submitted to the Journal of Double Star Observations. The kids who participate in this project may potentially be the youngest contributors to an article in a vetted scientific journal. Kids rapidly adapt and improve their computer skills operating these telescopes and discover for themselves that science is COOL!

  11. Image applications for coastal resource planning: Elkhorn Slough Pilot Project

    Science.gov (United States)

    Kvitek, Rikk G.; Sharp, Gary D.; VanCoops, Jonathan; Fitzgerald, Michael

    1995-01-01

    The purpose of this project has been to evaluate the utility of digital spectral imagery at two levels of resolution for large scale, accurate, auto-classification of land cover along the Central California Coast. Although remote sensing technology offers obvious advantages over on-the-ground mapping, there are substantial trade-offs that must be made between resolving power and costs. Higher resolution images can theoretically be used to identify smaller habitat patches, but they usually require more scenes to cover a given area and processing these images is computationally intense requiring much more computer time and memory. Lower resolution images can cover much larger areas, are less costly to store, process, and manipulate, but due to their larger pixel size can lack the resolving power of the denser images. This lack of resolving power can be critical in regions such as the Central California Coast where important habitat change often occurs on a scale of 10 meters. Our approach has been to compare vegetation and habitat classification results from two aircraft-based spectral scenes covering the same study area but at different levels of resolution with a previously produced ground-truthed land cover base map of the area. Both of the spectral images used for this project were of significantly higher resolution than the satellite-based LandSat scenes used in the C-CAP program. The lower reaches of the Elkhorn Slough watershed was chosen as an ideal study site because it encompasses a suite of important vegetation types and habitat loss processes characteristic of the central coast region. Dramatic habitat alterations have and are occurring within the Elkhorn Slough drainage area, including erosion and sedimentation, land use conversion, wetland loss, and incremental loss due to development and encroachment by agriculture. Additionally, much attention has already been focused on the Elkhorn Slough due to its status as a National Marine Education and Research

  12. Multiview Discriminative Geometry Preserving Projection for Image Classification

    Directory of Open Access Journals (Sweden)

    Ziqiang Wang

    2014-01-01

    In many image classification applications, it is common to extract multiple visual features from different views to describe an image. Since different visual features have their own specific statistical properties and discriminative powers for image classification, the conventional solution for multiple view data is to concatenate these feature vectors as a new feature vector. However, this simple concatenation strategy not only ignores the complementary nature of different views, but also suffers from the “curse of dimensionality.” To address this problem, we propose a novel multiview subspace learning algorithm in this paper, named multiview discriminative geometry preserving projection (MDGPP) for feature extraction and classification. MDGPP can not only preserve the intraclass geometry and interclass discrimination information under a single view, but also explore the complementary property of different views to obtain a low-dimensional optimal consensus embedding by using an alternating-optimization-based iterative algorithm. Experimental results on face recognition and facial expression recognition demonstrate the effectiveness of the proposed algorithm.

  13. Computerization of guidelines: towards a "guideline markup language".

    Science.gov (United States)

    Dart, T; Xu, Y; Chatellier, G; Degoulet, P

    2001-01-01

    Medical decision making is one of the most difficult daily tasks for physicians. Guidelines have been designed to reduce variance between physicians in daily practice, to improve patient outcomes and to control costs. In fact, few physicians use guidelines in daily practice. A way to ease the use of guidelines is to implement computerised guidelines (computer reminders). We present in this paper a method of computerising guidelines. Our objectives were: 1) to propose a generic model that can be instantiated for any specific guidelines; 2) to use eXtensible Markup Language (XML) as a guideline representation language to instantiate the generic model for a specific guideline. Our model is an object representation of a clinical algorithm, it has been validated by running two different guidelines issued by a French official Agency. In spite of some limitations, we found that this model is expressive enough to represent complex guidelines devoted to diabetes and hypertension management. We conclude that XML can be used as a description format to structure guidelines and as an interface between paper-based guidelines and computer applications.
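The paper's generic guideline model is instantiated in XML. A hypothetical fragment (tag names and clinical values invented for illustration, not taken from the authors' actual model) shows how one decision node of a clinical algorithm might be encoded and read back by an application:

```python
import xml.etree.ElementTree as ET

# Hypothetical "guideline markup" fragment encoding a single decision
# node of a clinical algorithm for hypertension management.
guideline = ET.fromstring("""
<guideline name="hypertension">
  <decision id="d1" test="systolic_bp >= 140">
    <if_true action="start_antihypertensive"/>
    <if_false action="recheck_in_12_months"/>
  </decision>
</guideline>""")

# A computer-reminder application can walk the algorithm generically:
node = guideline.find("decision")
print(node.get("test"))                    # -> systolic_bp >= 140
print(node.find("if_true").get("action"))  # -> start_antihypertensive
```

The same parsing code works for any guideline expressed in the shared schema, which is what makes the generic-model-plus-XML-instance approach attractive as an interface between paper guidelines and applications.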

  14. Extensions to the Dynamic Aerospace Vehicle Exchange Markup Language

    Science.gov (United States)

    Brian, Geoffrey J.; Jackson, E. Bruce

    2011-01-01

    The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) is a syntactical language for exchanging flight vehicle dynamic model data. It provides a framework for encoding entire flight vehicle dynamic model data packages for exchange and/or long-term archiving. Version 2.0.1 of DAVE-ML provides much of the functionality envisioned for exchanging aerospace vehicle data; however, it is limited in only supporting scalar time-independent data. Additional functionality is required to support vector and matrix data, abstracting sub-system models, detailing dynamics system models (both discrete and continuous), and defining a dynamic data format (such as time sequenced data) for validation of dynamics system models and vehicle simulation packages. Extensions to DAVE-ML have been proposed to manage data as vectors and n-dimensional matrices, and record dynamic data in a compatible form. These capabilities will improve the clarity of data being exchanged, simplify the naming of parameters, and permit static and dynamic data to be stored using a common syntax within a single file; thereby enhancing the framework provided by DAVE-ML for exchanging entire flight vehicle dynamic simulation models.

  15. The basics of CrossRef extensible markup language

    Directory of Open Access Journals (Sweden)

    Rachael Lammey

    2014-08-01

    CrossRef is an association of scholarly publishers that develops shared infrastructure to support more effective scholarly communications. Launched in 2000, CrossRef’s citation-linking network today covers over 68 million journal articles and other content items (book chapters, data, theses, and technical reports) from thousands of scholarly and professional publishers around the globe. CrossRef has over 4,000 member publishers who join as members in order to avail themselves of a number of CrossRef services, reference linking via the Digital Object Identifier (DOI) being the core service. To deposit CrossRef DOIs, publishers and editors need to become familiar with the basics of extensible markup language (XML). This article will give an introduction to CrossRef XML and what publishers need to do in order to start to deposit DOIs with CrossRef and thus ensure their publications are discoverable and can be linked to consistently in an online environment.

  16. Tomographic image via background subtraction using an x-ray projection image and a priori computed tomography

    International Nuclear Information System (INIS)

    Zhang Jin; Yi Byongyong; Lasio, Giovanni; Suntharalingam, Mohan; Yu, Cedric

    2009-01-01

    Kilovoltage x-ray projection images (kV images for brevity) are increasingly available in image guided radiotherapy (IGRT) for patient positioning. These images are two-dimensional (2D) projections of a three-dimensional (3D) object along the x-ray beam direction. Projecting a 3D object onto a plane may lead to ambiguities in the identification of anatomical structures and to poor contrast in kV images. Therefore, the use of kV images in IGRT is mainly limited to bony landmark alignments. This work proposes a novel subtraction technique that isolates a slice of interest (SOI) from a kV image with the assistance of a priori information from a previous CT scan. The method separates structural information within a preselected SOI by suppressing contributions to the unprocessed projection from out-of-SOI-plane structures. Up to a five-fold increase in the contrast-to-noise ratios (CNRs) was observed in selected regions of the isolated SOI, when compared to the original unprocessed kV image. The tomographic image via background subtraction (TIBS) technique aims to provide a quick snapshot of the slice of interest with greatly enhanced image contrast over conventional kV x-ray projections for fast and accurate image guidance of radiation therapy. With further refinements, TIBS could, in principle, provide real-time tumor localization using gantry-mounted x-ray imaging systems without the need for implanted markers.
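    The TIBS idea can be illustrated with a simplified parallel-beam numpy sketch (this is an illustrative toy, not the authors' implementation, and it assumes a perfectly registered, noiseless prior CT): the kV projection is modeled as the sum of the volume along the beam axis, and the forward projection of everything outside the slice of interest (SOI), computed from the prior CT, is subtracted to isolate the SOI.

```python
import numpy as np

# Toy parallel-beam sketch of tomographic image via background subtraction
# (TIBS).  Assumes the prior CT is perfectly registered to the kV geometry.
rng = np.random.default_rng(0)
ct = rng.random((16, 32, 32))          # prior CT volume, axes (z, y, x)
soi = 8                                 # index of the slice of interest

kv_projection = ct.sum(axis=0)          # ideal kV image: projection along z

background = ct.copy()
background[soi] = 0.0                   # suppress the SOI in the prior CT
background_projection = background.sum(axis=0)

# Subtracting the out-of-SOI contribution leaves an estimate of the SOI.
soi_estimate = kv_projection - background_projection
```

    In this noiseless, perfectly aligned toy case the subtraction recovers the SOI exactly; real projections additionally require registration, scatter handling, and intensity calibration against the prior CT.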

  17. Tomography of images with Poisson noise: pre-processing of projections

    International Nuclear Information System (INIS)

    Furuie, S.S.

    1989-01-01

    This work presents an alternative approach for reconstructing images with a low signal-to-noise ratio. Basically, it consists of smoothing the projections while taking into account that the noise is Poisson. These filtered projections are then used to reconstruct the original image by applying the direct Fourier method. This approach is compared with convolution back-projection and EM (Expectation-Maximization). (author) [pt
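    One common way to smooth projections "taking into account that the noise is Poisson" is to variance-stabilize first. The sketch below (an illustrative stand-in; the paper's exact filter is not given here) applies the Anscombe transform, a plain moving-average smoother, and the naive inverse transform to a single simulated projection.

```python
import numpy as np

# Poisson-aware smoothing of one projection: the Anscombe transform
# 2*sqrt(x + 3/8) makes the Poisson noise approximately unit-variance, so an
# ordinary linear smoother can then be applied before inverting.
rng = np.random.default_rng(1)
clean = 50.0 + 30.0 * np.sin(np.linspace(0, np.pi, 256))  # ideal projection
noisy = rng.poisson(clean).astype(float)                  # Poisson-noised data

stabilized = 2.0 * np.sqrt(noisy + 3.0 / 8.0)

kernel = np.ones(9) / 9.0                  # simple moving-average filter
smoothed = np.convolve(np.pad(stabilized, 4, mode="edge"), kernel, mode="valid")

filtered = (smoothed / 2.0) ** 2 - 3.0 / 8.0   # naive (slightly biased) inverse
```

    The filtered projections would then feed the direct Fourier reconstruction; the moving average is a placeholder for whatever smoother is preferred.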

  18. National Land Imaging Requirements (NLIR) Pilot Project summary report: summary of moderate resolution imaging user requirements

    Science.gov (United States)

    Vadnais, Carolyn; Stensaas, Gregory

    2014-01-01

    Under the National Land Imaging Requirements (NLIR) Project, the U.S. Geological Survey (USGS) is developing a functional capability to obtain, characterize, manage, maintain and prioritize all Earth observing (EO) land remote sensing user requirements. The goal is a better understanding of community needs that can be supported with land remote sensing resources, and a means to match needs with appropriate solutions in an effective and efficient way. The NLIR Project is composed of two components. The first component is focused on the development of the Earth Observation Requirements Evaluation System (EORES) to capture, store and analyze user requirements, whereas, the second component is the mechanism and processes to elicit and document the user requirements that will populate the EORES. To develop the second component, the requirements elicitation methodology was exercised and refined through a pilot project conducted from June to September 2013. The pilot project focused specifically on applications and user requirements for moderate resolution imagery (5–120 meter resolution) as the test case for requirements development. The purpose of this summary report is to provide a high-level overview of the requirements elicitation process that was exercised through the pilot project and an early analysis of the moderate resolution imaging user requirements acquired to date to support ongoing USGS sustainable land imaging study needs. The pilot project engaged a limited set of Federal Government users from the operational and research communities and therefore the information captured represents only a subset of all land imaging user requirements. However, based on a comparison of results, trends, and analysis, the pilot captured a strong baseline of typical applications areas and user needs for moderate resolution imagery. Because these results are preliminary and represent only a sample of users and application areas, the information from this report should only

  19. Maximum intensity projection MR angiography using shifted image data

    International Nuclear Information System (INIS)

    Machida, Yoshio; Ichinose, Nobuyasu; Hatanaka, Masahiko; Goro, Takehiko; Kitake, Shinichi; Hatta, Junicchi.

    1992-01-01

    The quality of MR angiograms has been significantly improved in the past several years. Spatial resolution, however, is still not sufficient for clinical use. On the other hand, MR image data can be interpolated at arbitrary positions using the Fourier shift theorem, and the quality of multi-planar reformatted images has been reported to improve remarkably when such 'shifted data' are used. In this paper, we clarify the effectiveness of 'shifted data' for maximum intensity projection MR angiography. Our experimental studies and theoretical considerations showed that the quality of MR angiograms is significantly improved using 'shifted data', as follows: 1) remarkable reduction of mosaic artifact, 2) improved spatial continuity of the blood vessels, and 3) reduced variance of the signal intensity along the blood vessels. In other words, the angiograms look much 'finer' than conventional ones, although the spatial resolution is not improved theoretically. Furthermore, we found that the quality of MR angiograms does not improve significantly with 'shifted data' more than twice as dense as the original data. (author)
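    The mechanism behind 'shifted data' can be sketched in one dimension (an illustrative sketch, not the authors' pipeline): zero-padding the centered spectrum of a band-limited signal resamples it on a twice-as-dense grid via the Fourier shift theorem, adding no new information but smoothing the sampling, which is what reduces the mosaic artifact in the MIP.

```python
import numpy as np

# Fourier-domain 2x interpolation: pad the centered spectrum with zeros and
# inverse-transform.  Samples at the original positions are preserved.
n = 32
t = np.arange(n)
signal = np.cos(2 * np.pi * 3 * t / n)          # band-limited test signal

spectrum = np.fft.fftshift(np.fft.fft(signal))
padded = np.pad(spectrum, n // 2)               # double the spectral support
dense = np.fft.ifft(np.fft.ifftshift(padded)).real * 2   # 2x denser samples
```

    `dense[::2]` reproduces the original samples exactly for band-limited input; the odd-indexed samples are the interpolated 'shifted data'.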

  20. Systematic reconstruction of TRANSPATH data into cell system markup language.

    Science.gov (United States)

    Nagasaki, Masao; Saito, Ayumu; Li, Chen; Jeong, Euna; Miyano, Satoru

    2008-06-23

    Many biological repositories store information based on experimental study of the biological processes within a cell, such as protein-protein interactions, metabolic pathways, signal transduction pathways, or regulation of transcription factors and miRNA. Unfortunately, it is difficult to directly use such information when generating simulation-based models. Thus, modeling rules for encoding biological knowledge into system-dynamics-oriented standardized formats would be very useful for fully understanding cellular dynamics at the system level. We selected the TRANSPATH database, a manually curated high-quality pathway database, which provides a plentiful source of cellular events in humans, mice, and rats, collected from over 31,500 publications. In this work, we have developed 16 modeling rules based on hybrid functional Petri net with extension (HFPNe), which is suitable for graphically representing and simulating biological processes. In the modeling rules, each Petri net element is annotated with the Cell System Ontology (CSO) to enable semantic interoperability of models. As a formal ontology for biological pathway modeling with dynamics, CSO also defines biological terminology and corresponding icons. By combining HFPNe with the CSO features, it is possible to convert TRANSPATH data into simulation-based and semantically valid models. The results are encoded into a biological pathway format, Cell System Markup Language (CSML), which eases the exchange and integration of biological data and models. By using the 16 modeling rules, 97% of the reactions in TRANSPATH are converted into simulation-based models represented in CSML. This reconstruction demonstrates that it is possible to use our rules to generate quantitative models from static pathway descriptions.

  1. A segmentation algorithm based on image projection for complex text layout

    Science.gov (United States)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    Segmentation is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularity of the object, a projection-based layout segmentation algorithm is proposed. The algorithm first partitions the text image into several columns; then, by scanning the projection profile of each column, it divides the text image into several sub-regions through multiple projections. The experimental results show that this method inherits the rapid calculation speed of projection-based methods, while avoiding the effect of arc-shaped distortion on page segmentation, and can accurately segment text images with complex layouts.
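    A minimal version of recursive projection-profile segmentation can be sketched as follows (illustrative only; it assumes a binary image with background = 0 and ink = 1, and does a single column-then-row pass rather than the paper's full recursion):

```python
import numpy as np

def split_runs(profile):
    """Return (start, end) index pairs of contiguous non-blank runs."""
    mask = (profile > 0).astype(np.int8)
    edges = np.flatnonzero(np.diff(np.concatenate(([0], mask, [0]))))
    return list(zip(edges[::2], edges[1::2]))

def segment(image):
    """Split into columns by the vertical projection, then each column into
    lines by its horizontal projection."""
    regions = []
    for c0, c1 in split_runs(image.sum(axis=0)):          # blank column gaps
        for r0, r1 in split_runs(image[:, c0:c1].sum(axis=1)):  # blank row gaps
            regions.append((r0, r1, c0, c1))   # (top, bottom, left, right)
    return regions

# Two text columns separated by a blank gutter; the right one has two lines.
page = np.zeros((20, 30), dtype=np.int8)
page[2:18, 2:12] = 1                           # left column
page[2:8, 18:28] = 1                           # right column, line 1
page[12:18, 18:28] = 1                         # right column, line 2
regions = segment(page)
```

    On the toy page this yields three regions: the whole left column and the two lines of the right column. The full algorithm would recurse on each sub-region with alternating projection directions.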

  2. ART-ML: a new markup language for modelling and representation of biological processes in cardiovascular diseases.

    Science.gov (United States)

    Karvounis, E C; Exarchos, T P; Fotiou, E; Sakellarios, A I; Iliopoulou, D; Koutsouris, D; Fotiadis, D I

    2013-01-01

    With an ever-increasing number of biological models available on the internet, a standardized modelling framework is required to allow information to be accessed and visualized. In this paper, we propose a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of geometry, blood flow, plaque progression and stent modelling, exported by any cardiovascular disease modelling software. ART-ML has been developed and tested using ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in 3D representations. All the above described procedures integrate disparate data formats, protocols and tools. ART-ML, extending ARTool, proposes a representation that enables interoperability of the individual resources, creating a standard unified model for the description of data and, consequently, a machine-independent format for their exchange and representation. More specifically, the ARTool platform incorporates efficient algorithms which are able to perform blood flow simulations and atherosclerotic plaque evolution modelling. Integration of data layers between different modules within ARTool is based upon the interchange of information included in the ART-ML model repository. ART-ML provides a markup representation that enables the representation and management of embedded models within the cardiovascular disease modelling platform, and the storage and interchange of well-defined information. The corresponding ART-ML model incorporates all relevant information regarding geometry, blood flow, plaque progression and stent modelling procedures. All created models are stored in a model repository database which is accessible to the research community using efficient web interfaces, enabling the interoperability of any cardiovascular disease modelling software.

  3. A methodology to annotate systems biology markup language models with the synthetic biology open language.

    Science.gov (United States)

    Roehner, Nicholas; Myers, Chris J

    2014-02-21

    Recently, we have begun to witness the potential of synthetic biology, noted here in the form of bacteria and yeast that have been genetically engineered to produce biofuels, manufacture drug precursors, and even invade tumor cells. The success of these projects, however, has often failed in translation and application to new projects, a problem exacerbated by a lack of engineering standards that combine descriptions of the structure and function of DNA. To address this need, this paper describes a methodology to connect the systems biology markup language (SBML) to the synthetic biology open language (SBOL), existing standards that describe biochemical models and DNA components, respectively. Our methodology involves first annotating SBML model elements such as species and reactions with SBOL DNA components. A graph is then constructed from the model, with vertices corresponding to elements within the model and edges corresponding to the cause-and-effect relationships between these elements. Lastly, the graph is traversed to assemble the annotating DNA components into a composite DNA component, which is used to annotate the model itself and can be referenced by other composite models and DNA components. In this way, our methodology can be used to build up a hierarchical library of models annotated with DNA components. Such a library is a useful input to any future genetic technology mapping algorithm that would automate the process of composing DNA components to satisfy a behavioral specification. Our methodology for SBML-to-SBOL annotation is implemented in the latest version of our genetic design automation (GDA) software tool, iBioSim.
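    The graph-traversal step of the methodology can be sketched as follows (all identifiers are invented for illustration; the real tool operates on SBML species/reactions and SBOL DNA components): model elements are vertices, cause-and-effect relationships are edges, and a topological traversal orders the annotating DNA components into a composite component.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical annotations: model element -> SBOL DNA component.
annotations = {"pTet": "promoter_1", "lacI_mRNA": "cds_lacI", "LacI": "cds_lacI"}

# Cause-and-effect edges: element -> elements it influences.
influences = {"pTet": ["lacI_mRNA"], "lacI_mRNA": ["LacI"], "LacI": []}

# TopologicalSorter expects predecessor sets, so invert the edge direction.
predecessors = {v: set() for v in influences}
for src, targets in influences.items():
    for t in targets:
        predecessors[t].add(src)

# Traverse causes-before-effects and collect the annotating DNA components
# into an ordered composite component (deduplicated).
composite = []
for element in TopologicalSorter(predecessors).static_order():
    component = annotations.get(element)
    if component is not None and component not in composite:
        composite.append(component)
```

    The resulting ordered list stands in for the composite DNA component that would annotate the model itself and be referenced by other composite models.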

  4. Color correction of projected image on color-screen for mobile beam-projector

    Science.gov (United States)

    Son, Chang-Hwan; Sung, Soo-Jin; Ha, Yeong-Ho

    2008-01-01

    With the current trend of digital convergence in mobile phones, mobile manufacturers are researching how to develop a mobile beam-projector to cope with the limitations of a small screen size and to offer a better viewing experience while watching movies or satellite broadcasting. However, mobile beam-projectors may project an image on arbitrary surfaces, such as a colored wall or paper, rather than on the white screen typically used in an office environment. Thus, a color correction method for the projected image is proposed to achieve good image quality irrespective of the surface color. Initially, the luminance values of the original image, transformed into the YCbCr space, are changed to compensate for the spatially nonuniform luminance distribution of the arbitrary surface, depending on the pixel values of the surface image captured by the mobile camera. Next, the chromaticity values for each surface and white-screen image are calculated using the ratio of each of the three RGB values to their sum. Then their chromaticity ratios are multiplied by the converted original image through an inverse YCbCr matrix to reduce the influence of spatially varying reflectance on the appearance of the projected image. By projecting the corrected original image on a textured or single-color surface, the quality of the projected image can be brought closer to that of an image projected on a white screen.
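    A simplified numpy sketch of the two-step correction (illustrative, not the authors' exact pipeline; it works directly in RGB rather than through the full YCbCr round-trip) looks like this:

```python
import numpy as np

# Step 1: boost luminance where the surface is dark; step 2: multiply by the
# white-screen/surface chromaticity ratio to cancel the surface tint.
rng = np.random.default_rng(2)
original = rng.random((4, 4, 3))                    # image to project, RGB in [0, 1]
surface = np.stack([np.full((4, 4), 0.9),           # reddish wall as seen by the camera
                    np.full((4, 4), 0.7),
                    np.full((4, 4), 0.6)], axis=-1)
white = np.ones_like(surface)                       # reference white screen

# (1) luminance compensation for the surface's nonuniform brightness
lum_surface = surface.mean(axis=-1, keepdims=True)
compensated = original * (lum_surface.max() / lum_surface)

# (2) chromaticity = each channel's share of the RGB sum
chrom_surface = surface / surface.sum(axis=-1, keepdims=True)
chrom_white = white / white.sum(axis=-1, keepdims=True)
corrected = np.clip(compensated * (chrom_white / chrom_surface), 0.0, 1.0)
```

    On the reddish wall the ratio attenuates the red channel and boosts blue, so that the projected result, after reflecting off the tinted surface, approximates the intended colors.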

  5. Reconstruction of a cone-beam CT image via forward iterative projection matching

    International Nuclear Information System (INIS)

    Brock, R. Scott; Docef, Alen; Murphy, Martin J.

    2010-01-01

    Purpose: To demonstrate the feasibility of reconstructing a cone-beam CT (CBCT) image by deformably altering a prior fan-beam CT (FBCT) image such that it matches the anatomy portrayed in the CBCT projection data set. Methods: A prior FBCT image of the patient is assumed to be available as a source image. A CBCT projection data set is obtained and used as a target image set. A parametrized deformation model is applied to the source FBCT image, digitally reconstructed radiographs (DRRs) that emulate the CBCT projection image geometry are calculated and compared to the target CBCT projection data, and the deformation model parameters are adjusted iteratively until the DRRs optimally match the CBCT projection data set. The resulting deformed FBCT image is hypothesized to be an accurate representation of the patient's anatomy imaged by the CBCT system. The process is demonstrated via numerical simulation. A known deformation is applied to a prior FBCT image and used to create a synthetic set of CBCT target projections. The iterative projection matching process is then applied to reconstruct the deformation represented in the synthetic target projections; the reconstructed deformation is then compared to the known deformation. The sensitivity of the process to the number of projections and the DRR/CBCT projection mismatch is explored by systematically adding noise to and perturbing the contrast of the target projections relative to the iterated source DRRs and by reducing the number of projections. Results: When there is no noise or contrast mismatch in the CBCT projection images, a set of 64 projections allows the known deformed CT image to be reconstructed to within a nRMS error of 1% and the known deformation to within a nRMS error of 7%. A CT image nRMS error of less than 4% is maintained at noise levels up to 3% of the mean projection intensity, at which the deformation error is 13%. 
At 1% noise level, the number of projections can be reduced to 8 while maintaining
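    The forward-iterative-matching loop can be reduced to a toy example (parallel-beam geometry, a translation-only stand-in for the parametrized deformation model, and exhaustive search standing in for the optimizer): render DRR-like projections of the deformed prior image and pick the parameter that best matches the target projections.

```python
import numpy as np

rng = np.random.default_rng(3)
fbct = rng.random((32, 32))                 # prior "fan-beam CT" slice

true_shift = 5                              # unknown deformation to recover
cbct_target = np.roll(fbct, true_shift, axis=1).sum(axis=0)  # target projection

def drr(image, shift):
    """Render a 1D parallel-beam projection of the shifted prior image."""
    return np.roll(image, shift, axis=1).sum(axis=0)

# Adjust the single model parameter to minimize the sum-of-squares mismatch
# between rendered DRRs and the target CBCT projection.
errors = {s: np.sum((drr(fbct, s) - cbct_target) ** 2) for s in range(-8, 9)}
recovered_shift = min(errors, key=errors.get)
```

    The real method uses many deformation parameters, a full cone-beam DRR renderer, and a gradient-based optimizer, but the structure (deform, render, compare, adjust) is the same.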

  6. Image/patient registration from (partial) projection data by the Fourier phase matching method

    International Nuclear Information System (INIS)

    Weiguo Lu; You, J.

    1999-01-01

    A technique for 2D or 3D image/patient registration, PFPM (projection based Fourier phase matching method), is proposed. This technique provides image/patient registration directly from sequential tomographic projection data. The method can also deal with image files by generating 2D Radon transforms slice by slice. The registration in projection space is done by calculating a Fourier invariant (FI) descriptor for each one-dimensional projection datum, and then registering the FI descriptor by the Fourier phase matching (FPM) method. The algorithm has been tested on both synthetic and experimental data. When dealing with translated, rotated and uniformly scaled 2D image registration, the performance of the PFPM method is comparable to that of the IFPM (image based Fourier phase matching) method in robustness, efficiency, insensitivity to the offset between images, and registration time. The advantages of the former are that subpixel resolution is feasible, and it is more insensitive to image noise due to the averaging effect of the projection acquisition. Furthermore, the PFPM method offers the ability to generalize to 3D image/patient registration and to register partial projection data. By applying patient registration directly from tomographic projection data, image reconstruction is not needed in the therapy set-up verification, thus reducing computational time and artefacts. In addition, real time registration is feasible. Registration from partial projection data meets the geometry and dose requirements in many application cases and makes dynamic set-up verification possible in tomotherapy. (author)
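    The Fourier phase matching principle behind PFPM can be illustrated with plain phase correlation (an illustrative sketch; the actual method works on 1D projection data via Fourier-invariant descriptors): a relative translation appears as a linear phase in the normalized cross-power spectrum, whose inverse FFT peaks at the displacement.

```python
import numpy as np

rng = np.random.default_rng(4)
reference = rng.random((64, 64))
moved = np.roll(reference, shift=(7, 12), axis=(0, 1))   # known displacement

f_ref = np.fft.fft2(reference)
f_mov = np.fft.fft2(moved)
cross_power = np.conj(f_ref) * f_mov
cross_power /= np.abs(cross_power) + 1e-12               # keep phase only

correlation = np.fft.ifft2(cross_power).real
dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
```

    The peak location recovers the translation; fitting the phase plane directly (rather than locating the peak) is one way such methods reach subpixel resolution.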

  7. An object-oriented approach for harmonization of multimedia markup languages

    Science.gov (United States)

    Chen, Yih-Feng; Kuo, May-Chen; Sun, Xiaoming; Kuo, C.-C. Jay

    2003-12-01

    An object-oriented methodology is proposed to harmonize several different markup languages in this research. First, we adopt the Unified Modelling Language (UML) as the data model to formalize the concept and the process of the harmonization process between the eXtensible Markup Language (XML) applications. Then, we design the Harmonization eXtensible Markup Language (HXML) based on the data model and formalize the transformation between the Document Type Definitions (DTDs) of the original XML applications and HXML. The transformation between instances is also discussed. We use the harmonization of SMIL and X3D as an example to demonstrate the proposed methodology. This methodology can be generalized to various application domains.

  8. Development of clinical contents model markup language for electronic health records.

    Science.gov (United States)

    Yun, Ji-Hyun; Ahn, Sun-Ju; Kim, Yoon

    2012-09-01

    To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Based on an analysis of the structure and characteristics of CCM in the clinical domain, we manually designed an extensible markup language (XML) based CCM markup language (CCML) schema. CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems.

  9. Collaborative Tracking of Image Features Based on Projective Invariance

    Science.gov (United States)

    Jiang, Jinwei

    -mode sensors for improving the flexibility and robustness of the system. From the experimental results during three field tests for the LASOIS system, we observed that most of the errors in the image processing algorithm are caused by incorrect feature tracking. This dissertation addresses the feature tracking problem in image sequences acquired from cameras. Despite many alternatives to the feature tracking problem, the iterative least squares solution of the optical flow equation has been the most popular approach in the field. This dissertation attempts to leverage these former efforts to enhance feature tracking methods by introducing a view geometric constraint to the tracking problem, which provides collaboration among features. In contrast to alternative geometry-based methods, the proposed approach provides an online solution to optical flow estimation in a collaborative fashion by exploiting Horn and Schunck flow estimation regularized by view geometric constraints. The proposed collaborative tracker estimates the motion of a feature based on the geometry of the scene and how the other features are moving. As an alternative to this approach, a new closed-form solution to tracking that combines image appearance with view geometry is also introduced. We particularly use invariants in the projective coordinates and conjecture that the traditional appearance solution can be significantly improved using view geometry. The geometric constraint is introduced by defining a new optical flow equation which exploits the scene geometry estimated from a set of tracked features. At the end of each tracking loop, the quality of the tracked features is judged using both appearance similarity and geometric consistency. Our experiments demonstrate robust tracking performance even when the features are occluded or they undergo appearance changes due to projective deformation of the template. 
The proposed collaborative tracking method is also tested in the visual navigation

  10. Development of the atomic and molecular data markup language for internet data exchange

    International Nuclear Information System (INIS)

    Ralchenko, Yuri; Clark Robert E.H.; Humbert, Denis; Schultz, David R.; Kato, Takako; Rhee, Yong Joo

    2006-01-01

    Accelerated development of the Internet technologies, including those relevant to the atomic and molecular physics, poses new requirements for the proper communication between computers, users and applications. To this end, a new standard for atomic and molecular data exchange that would reflect the recent achievements in this field becomes a necessity. We report here on development of the Atomic and Molecular Data Markup Language (AMDML) that is based on eXtensible Markup Language (XML). The present version of the AMDML Schema covers atomic spectroscopic data as well as the electron-impact collisions. (author)

  11. SuML: A Survey Markup Language for Generalized Survey Encoding

    Science.gov (United States)

    Barclay, MW; Lober, WB; Karras, BT

    2002-01-01

    There is a need in clinical and research settings for a sophisticated, generalized, web based survey tool that supports complex logic, separation of content and presentation, and computable guidelines. There are many commercial and open source survey packages available that provide simple logic; few provide sophistication beyond “goto” statements; none support the use of guidelines. These tools are driven by databases, static web pages, and structured documents using markup languages such as eXtensible Markup Language (XML). We propose a generalized, guideline aware language and an implementation architecture using open source standards.

  12. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    Science.gov (United States)

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. 
Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research
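    A schematic SED-ML-like document, mirroring the four parts the abstract lists (which model to use, which simulation to run, the task tying them together, and the output), can be built with ElementTree. Element and attribute names below follow the Level 1 Version 1 vocabulary from memory and the file name is invented; consult the SED-ML specification for the authoritative schema.

```python
import xml.etree.ElementTree as ET

root = ET.Element("sedML", level="1", version="1")

# Which model to use (source file name is a hypothetical example).
models = ET.SubElement(root, "listOfModels")
ET.SubElement(models, "model", id="model1",
              language="urn:sedml:language:sbml", source="oscillator.xml")

# Which simulation procedure to run: a uniform time course.
sims = ET.SubElement(root, "listOfSimulations")
tc = ET.SubElement(sims, "uniformTimeCourse", id="sim1",
                   initialTime="0", outputStartTime="0",
                   outputEndTime="100", numberOfPoints="1000")
ET.SubElement(tc, "algorithm", kisaoID="KISAO:0000019")   # KiSAO term for CVODE

# The task applies the simulation to the model.
tasks = ET.SubElement(root, "listOfTasks")
ET.SubElement(tasks, "task", id="task1",
              modelReference="model1", simulationReference="sim1")

# How the results should be presented.
outputs = ET.SubElement(root, "listOfOutputs")
ET.SubElement(outputs, "plot2D", id="plot1")

document = ET.tostring(root, encoding="unicode")
```

    Because the description references the model by source and the algorithm by a KiSAO identifier, the same document can be executed by any SED-ML-aware tool regardless of the underlying simulator.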

  13. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    Science.gov (United States)

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. 
Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from

  14. The evolution of the CUAHSI Water Markup Language (WaterML)

    Science.gov (United States)

    Zaslavsky, I.; Valentine, D.; Maidment, D.; Tarboton, D. G.; Whiteaker, T.; Hooper, R.; Kirschtel, D.; Rodriguez, M.

    2009-04-01

    The CUAHSI Hydrologic Information System (HIS, his.cuahsi.org) uses web services as the core data exchange mechanism which provides programmatic connection between many heterogeneous sources of hydrologic data and a variety of online and desktop client applications. The service message schema follows the CUAHSI Water Markup Language (WaterML) 1.x specification (see OGC Discussion Paper 07-041r1). Data sources that can be queried via WaterML-compliant water data services include national and international repositories such as USGS NWIS (National Water Information System), USEPA STORET (Storage & Retrieval), USDA SNOTEL (Snowpack Telemetry), NCDC ISH and ISD(Integrated Surface Hourly and Daily Data), MODIS (Moderate Resolution Imaging Spectroradiometer), and DAYMET (Daily Surface Weather Data and Climatological Summaries). Besides government data sources, CUAHSI HIS provides access to a growing number of academic hydrologic observation networks. These networks are registered by researchers associated with 11 hydrologic observatory testbeds around the US, and other research, government and commercial groups wishing to join the emerging CUAHSI Water Data Federation. The Hydrologic Information Server (HIS Server) software stack deployed at NSF-supported hydrologic observatory sites and other universities around the country, supports a hydrologic data publication workflow which includes the following steps: (1) observational data are loaded from static files or streamed from sensors into a local instance of an Observations Data Model (ODM) database; (2) a generic web service template is configured for the new ODM instance to expose the data as a WaterML-compliant water data service, and (3) the new water data service is registered at the HISCentral registry (hiscentral.cuahsi.org), its metadata are harvested and semantically tagged using concepts from a hydrologic ontology. As a result, the new service is indexed in the CUAHSI central metadata catalog, and becomes

  15. From whole-body counting to imaging: The computer aided collimation gamma camera project (CACAO)

    Energy Technology Data Exchange (ETDEWEB)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Ballongue, P.

    2000-07-01

    Whole-body counting is the method of choice for in vivo detection of contamination. To extend this well established method, the possible advantages of imaging radiocontaminants are examined. The use of the CACAO project is then studied. A comparison of simulated reconstructed images obtained by the CACAO project and by a conventional gamma camera used in nuclear medicine follows. Imaging a radionuclide contaminant with a geometrical sensitivity of 10⁻² seems possible in the near future. (author)

  16. From whole-body counting to imaging: The computer aided collimation gamma camera project (CACAO)

    International Nuclear Information System (INIS)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Ballongue, P.

    2000-01-01

    Whole-body counting is the method of choice for in vivo detection of contamination. To extend this well established method, the possible advantages of imaging radiocontaminants are examined. The use of the CACAO project is then studied. A comparison of simulated reconstructed images obtained by the CACAO project and by a conventional gamma camera used in nuclear medicine follows. Imaging a radionuclide contaminant with a geometrical sensitivity of 10⁻² seems possible in the near future. (author)

  17. Resolving Controlled Vocabulary in DITA Markup: A Case Example in Agroforestry

    Science.gov (United States)

    Zschocke, Thomas

    2012-01-01

    Purpose: This paper aims to address the issue of matching controlled vocabulary on agroforestry from knowledge organization systems (KOS) and incorporating these terms in DITA markup. The paper has been selected for an extended version from MTSR'11. Design/methodology/approach: After a general description of the steps taken to harmonize controlled…

  18. A primer on the Petri Net Markup Language and ISO/IEC 15909-2

    DEFF Research Database (Denmark)

    Hillah, L. M.; Kindler, Ekkart; Kordon, F.

    2009-01-01

    …Standard, defines a transfer format for high-level nets. The transfer format defined in Part 2 of ISO/IEC 15909 is (or is based on) the Petri Net Markup Language (PNML), which was originally introduced as an interchange format for different kinds of Petri nets. In ISO/IEC 15909-2, however…

  19. A methodology for evaluation of a markup-based specification of clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a three-phase, nine-step methodology for specification of clinical guidelines (GLs) by expert physicians, clinical editors, and knowledge engineers, and for quantitative evaluation of the specification's quality. We applied this methodology to a particular framework for incremental GL structuring (mark-up) and to GLs in three clinical domains with encouraging results.

  20. Chemical Markup, XML, and the World Wide Web. 7. CMLSpect, an XML vocabulary for spectral data.

    Science.gov (United States)

    Kuhn, Stefan; Helmus, Tobias; Lancashire, Robert J; Murray-Rust, Peter; Rzepa, Henry S; Steinbeck, Christoph; Willighagen, Egon L

    2007-01-01

    CMLSpect is an extension of Chemical Markup Language (CML) for managing spectral and other analytical data. It is designed to be flexible enough to contain a wide variety of spectral data. The paper describes the CMLElements used and gives practical examples for common types of spectra. In addition it demonstrates how different views of the data can be expressed and what problems still exist.
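    As a sketch of what such a vocabulary looks like in practice, the following fragment emits a minimal spectrum document. The element and attribute names (spectrum, peakList, peak, xValue, yValue) follow the general shape described for CMLSpect, but the output is illustrative and not schema-validated CML.

    ```python
    import xml.etree.ElementTree as ET

    def make_spectrum(peaks):
        # Sketch of a CMLSpect-style peak list; names approximate the
        # vocabulary and are not validated against the CML schema.
        spectrum = ET.Element("spectrum", {"type": "NMR", "id": "spec1"})
        peak_list = ET.SubElement(spectrum, "peakList")
        for i, (x, y) in enumerate(peaks):
            ET.SubElement(peak_list, "peak",
                          {"id": f"p{i}", "xValue": str(x), "yValue": str(y)})
        return ET.tostring(spectrum, encoding="unicode")

    xml_text = make_spectrum([(7.26, 1.0), (2.17, 3.0)])
    print(xml_text)
    ```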

  1. Developing a Markup Language for Encoding Graphic Content in Plan Documents

    Science.gov (United States)

    Li, Jinghuan

    2009-01-01

    While deliberating and making decisions, participants in urban development processes need easy access to the pertinent content scattered among different plans. A Planning Markup Language (PML) has been proposed to represent the underlying structure of plans in an XML-compliant way. However, PML currently covers only textual information and lacks…

  2. Anisotropic conductivity imaging with MREIT using equipotential projection algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Degirmenci, Evren [Department of Electrical and Electronics Engineering, Mersin University, Mersin (Turkey); Eyueboglu, B Murat [Department of Electrical and Electronics Engineering, Middle East Technical University, 06531, Ankara (Turkey)

    2007-12-21

    Magnetic resonance electrical impedance tomography (MREIT) combines magnetic flux or current density measurements obtained by magnetic resonance imaging (MRI) with surface potential measurements to reconstruct images of true conductivity with high spatial resolution. Most biological tissues have anisotropic conductivity; therefore, anisotropy should be taken into account in conductivity image reconstruction. Almost all of the MREIT reconstruction algorithms proposed to date assume an isotropic conductivity distribution. In this study, a novel MREIT image reconstruction algorithm is proposed to image anisotropic conductivity. Relative anisotropic conductivity values are reconstructed iteratively, using only current density measurements without any potential measurements. To obtain true conductivity values, a single potential or conductivity measurement is sufficient to determine a scaling factor. The proposed technique is evaluated on simulated data for isotropic and anisotropic conductivity distributions, with and without measurement noise. Simulation results show that images of both anisotropic and isotropic conductivity distributions can be reconstructed successfully.

  3. Intelligent and interactive computer image of a nuclear power plant: The ImagIn project

    International Nuclear Information System (INIS)

    Haubensack, D.; Malvache, P.; Valleix, P.

    1998-01-01

    The ImagIn project consists of a method and a set of computer tools intended to bring perceptible and assessable improvements to the operational safety of a nuclear plant. Its aim is to design an information system that would maintain a highly detailed computerized representation of a nuclear plant in its initial state and throughout its in-service life. It is not a tool to drive or help drive the nuclear plant, but a tool that manages concurrent operations that modify the plant configuration in a very general way (maintenance, for example). The configuration of the plant, as well as rules and constraints about it, is described in an object-oriented knowledge database, which is built using a generic ImagIn meta-model based on semantic network theory. An inference engine works on this database and is connected to reality through interfaces to operators and sensors on the installation; it constantly verifies in real time the consistency of the database according to its inner rules, and reports any problems to the operators concerned. A special effort is made on interfaces to provide natural and intuitive tools (using virtual reality, natural language, and voice recognition and synthesis). A laboratory application on a fictitious but realistic installation already exists and is used to simulate various tests and scenarios. A real application is being constructed on Siloe, an experimental reactor of the CEA. (author)

  4. Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement.

    Science.gov (United States)

    Li, Dong; Kofman, Jonathan

    2014-04-21

    In fringe-projection 3D surface-shape measurement, image saturation results in incorrect intensities in captured images of fringe patterns, leading to phase and measurement errors. An adaptive fringe-pattern projection (AFPP) method was developed to adapt the maximum input gray level in projected fringe patterns to the local reflectivity of an object surface being measured. The AFPP method demonstrated improved 3D measurement accuracy by avoiding image saturation in highly-reflective surface regions while maintaining high intensity modulation across the entire surface. The AFPP method can avoid image saturation and handle varying surface reflectivity, using only two prior rounds of fringe-pattern projection and image capture to generate the adapted fringe patterns.
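    The adaptation step can be illustrated with a toy calculation: pixels that saturate under a first full-intensity capture receive a proportionally reduced maximum input gray level for the adapted pattern. The scaling rule and thresholds below are illustrative assumptions, not the exact formulas of the AFPP method.

    ```python
    import numpy as np

    # Toy adaptation: after one capture with a uniform bright pattern, pixels at
    # or above the saturation level get their maximum input gray level scaled
    # down so the next capture lands near a target intensity below saturation.
    def adapt_mgl(captured, mgl=255, sat_level=250, target=230):
        scale = np.where(captured >= sat_level, target / captured, 1.0)
        return np.clip(np.rint(mgl * scale), 0, mgl).astype(int)

    captured = np.array([[120.0, 255.0], [251.0, 240.0]])  # 255 = saturated pixel
    adapted = adapt_mgl(captured)
    print(adapted)
    ```

    In the actual method, two prior rounds of projection and capture are used to estimate local surface reflectivity before generating the adapted patterns.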

  5. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods, including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use so… …tor transform outperform the linear methods as well as kernel principal components in producing interesting projections of the data.

  6. Automation and integration of components for generalized semantic markup of electronic medical texts.

    Science.gov (United States)

    Dugan, J M; Berrios, D C; Liu, X; Kim, D K; Kaizer, H; Fagan, L M

    1999-01-01

    Our group has built an information retrieval system based on a complex semantic markup of medical textbooks. We describe the construction of a set of web-based knowledge-acquisition tools that expedites the collection and maintenance of the concepts required for text markup and the search interface required for information retrieval from the marked text. In the text markup system, domain experts (DEs) identify sections of text that contain one or more elements from a finite set of concepts. End users can then query the text using a predefined set of questions, each of which identifies a subset of complementary concepts. The search process matches that subset of concepts to relevant points in the text. The current process requires that the DE invest significant time to generate the required concepts and questions. We propose a new system--called ACQUIRE (Acquisition of Concepts and Queries in an Integrated Retrieval Environment)--that assists a DE in two essential tasks in the text-markup process. First, it helps her to develop, edit, and maintain the concept model: the set of concepts with which she marks the text. Second, ACQUIRE helps her to develop a query model: the set of specific questions that end users can later use to search the marked text. The DE incorporates concepts from the concept model when she creates the questions in the query model. The major benefit of the ACQUIRE system is a reduction in the time and effort required for the text-markup process. We compared the process of concept- and query-model creation using ACQUIRE to the process used in previous work by rebuilding two existing models that we previously constructed manually. We observed a significant decrease in the time required to build and maintain the concept and query models.
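    The retrieval scheme the abstract describes (sections marked with concepts from a concept model, and predefined questions mapped to concept subsets) can be miniaturized as follows. All concept names, sections, and the question are hypothetical examples, not content from the actual system.

    ```python
    # Hypothetical miniature of the markup-and-retrieval scheme: sections are
    # tagged with concepts from a concept model, a predefined question maps to a
    # subset of those concepts, and retrieval returns the sections covering all
    # of the question's concepts.
    concept_model = {"myocardial infarction", "troponin", "aspirin", "dosage"}

    sections = {
        "sec1": {"myocardial infarction", "troponin"},
        "sec2": {"aspirin", "dosage"},
        "sec3": {"myocardial infarction", "aspirin"},
    }

    query_model = {
        "What drug is given for MI?": {"myocardial infarction", "aspirin"},
    }

    def answer(question):
        wanted = query_model[question]
        assert wanted <= concept_model  # queries reuse concept-model terms
        return sorted(s for s, tags in sections.items() if wanted <= tags)

    print(answer("What drug is given for MI?"))  # → ['sec3']
    ```

    ACQUIRE's contribution sits upstream of this lookup: tooling that lets the domain expert build and maintain the concept and query models efficiently.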

  7. Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish.

    Directory of Open Access Journals (Sweden)

    Teresa Correia

    Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections, achieving up to 28 times improvement in imaging speed and leading to total acquisition times of a few seconds.

  8. Fluorescence guided lymph node biopsy in large animals using direct image projection device

    Science.gov (United States)

    Ringhausen, Elizabeth; Wang, Tylon; Pitts, Jonathan; Akers, Walter J.

    2016-03-01

    The use of fluorescence imaging for aiding oncologic surgery is a fast growing field in biomedical imaging, revolutionizing open and minimally invasive surgery practices. We have designed, constructed, and tested a system for fluorescence image acquisition and direct display on the surgical field for fluorescence-guided surgery. The system uses a near-infrared-sensitive CMOS camera for image acquisition, a near-infrared LED light source for excitation, and a DLP digital projector for projection of fluorescence image data onto the operating field in real time. Instrument control was implemented in Matlab for image capture, processing of acquired data, and alignment of image parameters with the projected pattern. Accuracy of alignment was evaluated statistically to demonstrate sensitivity to small objects and alignment throughout the imaging field. After verification of accurate alignment, feasibility for clinical application was demonstrated in large animal models of sentinel lymph node biopsy. Indocyanine green was injected subcutaneously in Yorkshire pigs at various locations to model sentinel lymph node biopsy in gynecologic cancers, head and neck cancer, and melanoma. Fluorescence was detected by the camera system during operations and projected onto the imaging field, accurately identifying tissues containing the fluorescent tracer at up to 15 frames per second. Fluorescence information was projected as binary green regions after thresholding and denoising raw intensity data. Promising results with this initial clinical-scale prototype are encouraging for the feasibility of optical projection of acquired luminescence during open oncologic surgeries.

  9. Optimized image acquisition for breast tomosynthesis in projection and reconstruction space

    International Nuclear Information System (INIS)

    Chawla, Amarpreet S.; Lo, Joseph Y.; Baker, Jay A.; Samei, Ehsan

    2009-01-01

    Breast tomosynthesis has been an exciting new development in the field of breast imaging. While the diagnostic improvement via tomosynthesis is notable, the full potential of tomosynthesis has not yet been realized. This may be attributed to the dependency of the diagnostic quality of tomosynthesis on multiple variables, each of which needs to be optimized. Those include dose, number of angular projections, and the total angular span of those projections. In this study, the authors investigated the effects of these acquisition parameters on the overall diagnostic image quality of breast tomosynthesis in both the projection and reconstruction space. Five mastectomy specimens were imaged using a prototype tomosynthesis system. 25 angular projections of each specimen were acquired at 6.2 times typical single-view clinical dose level. Images at lower dose levels were then simulated using a noise modification routine. Each projection image was supplemented with 84 simulated 3 mm 3D lesions embedded at the center of 84 nonoverlapping ROIs. The projection images were then reconstructed using a filtered backprojection algorithm at different combinations of acquisition parameters to investigate which of the many possible combinations maximizes the performance. Performance was evaluated in terms of a Laguerre-Gauss channelized Hotelling observer model-based measure of lesion detectability. The analysis was also performed without reconstruction by combining the model results from projection images using a Bayesian decision fusion algorithm. The effects of acquisition parameters on projection images and reconstructed slices were then compared to derive an optimization rule for tomosynthesis. The results indicated that projection images yield comparable but higher performance than reconstructed images. Both modes, however, offered similar trends: Performance improved with an increase in the total acquisition dose level and the angular span. Using a constant dose level and angular…

  10. Pre-analytic process control: projecting a quality image.

    Science.gov (United States)

    Serafin, Mark D

    2006-09-26

    Within the health-care system, the term "ancillary department" often describes the laboratory. Thus, laboratories may find it difficult to define their image and with it, customer perception of department quality. Regulatory requirements give laboratories who so desire an elegant way to address image and perception issues--a comprehensive pre-analytic system solution. Since large laboratories use such systems--laboratory service manuals--I describe and illustrate the process for the benefit of smaller facilities. There exist resources to help even small laboratories produce a professional service manual--an elegant solution to image and customer perception of quality.

  11. Recent advances and future projections in clinical radionuclide imaging

    International Nuclear Information System (INIS)

    Peters, A.M.

    1990-01-01

    This outline review of recent advances in radionuclide imaging draws attention to developments in nuclear medicine of the urinary tract such as Captopril renography and the introduction of MAG-3, the technetium-99m labelled mimic of hippuran, the use of radionuclides for infection diagnosis, advances in lung perfusion scanning, new radiopharmaceuticals for cardiac imaging, and developments in radiopharmaceuticals for imaging tumours, including gallium-67, thallium-201, and the development of radiolabelled monoclonal antibodies. Attention is drawn to the wider use of nuclear medicine in child care. (author)

  12. Non-Stationary Inflation and the Markup: an Overview of the Research and some Implications for Policy

    OpenAIRE

    Bill Russell

    2006-01-01

    This paper reports on research into the negative relationship between inflation and the markup. It is argued that this relationship can be thought of as ‘long-run’ in nature, which suggests that inflation has a persistent effect on the markup and, therefore, the real wage. A ‘rule of thumb’ from the estimates indicates that a 10 percentage point increase in inflation (as occurred worldwide in the 1970s) is associated with around a 7 per cent fall in the markup accompanied by a similar increase ...

  13. Rapid Acquisition Imaging Spectrograph (RAISE) Renewal Proposal Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The optical design of RAISE is based on a new class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance...

  14. Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2017-01-01

    steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without

  15. Digital projection radiography. Technical principles, image properties and potential applications

    International Nuclear Information System (INIS)

    Busch, H.P.

    1999-01-01

    The history of development of digital projection radiography as a diagnostic method is presented in a comprehensive survey. The various technical principles are explained in detail and illustrated by means of graphic figures and digital X-ray pictures. A comparative assessment of currently applied radiographic systems is given and the potential clinical applications of the method of digital projection radiography are discussed. (orig./CB)

  16. THE IMAGE REGISTRATION OF FOURIER-MELLIN BASED ON THE COMBINATION OF PROJECTION AND GRADIENT PREPROCESSING

    Directory of Open Access Journals (Sweden)

    D. Gao

    2017-09-01

    Image registration is one of the most important applications in the field of image processing. The Fourier-Mellin transform method, which has the advantages of high precision and good robustness to changes in light and shade, partial occlusion, noise, and so on, is widely used. However, this method cannot obtain a unique cross-power-spectrum impulse function for non-parallel image pairs, and for some image pairs no impulse can be obtained at all. In this paper, an image registration method based on the Fourier-Mellin transformation with projection and gradient preprocessing is proposed. According to the projective transformation equation, the method calculates the image projection transformation matrix to correct the tilted image; then, gradient preprocessing and the Fourier-Mellin transformation are performed on the corrected image to obtain the registration parameters. Experimental results show that the method makes Fourier-Mellin image registration applicable not only to parallel image pairs but also to non-parallel image pairs, and that better registration results can be obtained.
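    The core of Fourier-Mellin registration is the cross-power spectrum: for a pure translation it reduces to a single impulse whose position gives the shift (rotation and scale are handled by applying the same idea in log-polar coordinates). A minimal NumPy sketch of the translation case:

    ```python
    import numpy as np

    def phase_correlation_shift(a, b):
        """Estimate the integer translation taking b to a via the normalized
        cross-power spectrum, whose inverse FFT is an impulse at the shift."""
        F = np.fft.fft2(a)
        G = np.fft.fft2(b)
        cross = F * np.conj(G)
        cross /= np.abs(cross) + 1e-12       # normalize to pure phase
        corr = np.fft.ifft2(cross).real      # impulse at the shift
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap to signed shifts
        if dy > a.shape[0] // 2: dy -= a.shape[0]
        if dx > a.shape[1] // 2: dx -= a.shape[1]
        return int(dy), int(dx)

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    shifted = np.roll(img, (5, -3), axis=(0, 1))
    print(phase_correlation_shift(shifted, img))  # → (5, -3)
    ```

    The failure mode the paper addresses appears exactly when the image pair is not related by such a clean transform: the impulse spreads or vanishes, which the projection and gradient preprocessing aims to prevent.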

  17. Correction of projective distortion in long-image-sequence mosaics without prior information

    Science.gov (United States)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is
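    The scale-reset idea can be sketched directly: read the overall scale of the affine model's 2x2 linear part from its singular values and renormalize that scale to 1, so pasted frames keep their original size. The matrix below is a made-up example transformation, not data from the paper.

    ```python
    import numpy as np

    def reset_scale(affine):
        """Estimate the overall scale of an affine model from the singular
        values of its 2x2 linear part and renormalize that scale to 1."""
        A = affine[:2, :2]
        s = np.sqrt(np.prod(np.linalg.svd(A, compute_uv=False)))
        out = affine.copy()
        out[:2, :2] = A / s
        return out, s

    # Hypothetical inter-frame transform: rotation by 0.1 rad, scale 0.9,
    # and a translation; the scale factor is close to 1, as the paper notes.
    M = np.array([[0.9 * np.cos(0.1), -0.9 * np.sin(0.1),  5.0],
                  [0.9 * np.sin(0.1),  0.9 * np.cos(0.1), -2.0],
                  [0.0,                0.0,                1.0]])
    M1, s = reset_scale(M)
    print(round(s, 6))  # → 0.9
    ```

    After the reset, the linear part of `M1` has unit singular values, so repeated composition no longer shrinks the transformed frames.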

  18. The Multidimensional Integrated Intelligent Imaging project (MI-3)

    International Nuclear Information System (INIS)

    Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P.M.; Faruqi, W.; French, M.; Gow, J.

    2009-01-01

    MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)-designed for in-pixel intelligence; FPN-designed to develop novel techniques for reducing fixed pattern noise; HDR-designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS-with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)-a novel, stitched LAS; and eLeNA-which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.

  19. The Multidimensional Integrated Intelligent Imaging project (MI-3)

    Energy Technology Data Exchange (ETDEWEB)

    Allinson, N.; Anaxagoras, T. [Vision and Information Engineering, University of Sheffield (United Kingdom); Aveyard, J. [Laboratory for Environmental Gene Regulation, University of Liverpool (United Kingdom); Arvanitis, C. [Radiation Physics, University College, London (United Kingdom); Bates, R.; Blue, A. [Experimental Particle Physics, University of Glasgow (United Kingdom); Bohndiek, S. [Radiation Physics, University College, London (United Kingdom); Cabello, J. [Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford (United Kingdom); Chen, L. [Electron Optics, Applied Electromagnetics and Electron Optics, University of York (United Kingdom); Chen, S. [MRC Laboratory for Molecular Biology, Cambridge (United Kingdom); Clark, A. [STFC Rutherford Appleton Laboratories (United Kingdom); Clayton, C. [Vision and Information Engineering, University of Sheffield (United Kingdom); Cook, E. [Radiation Physics, University College, London (United Kingdom); Cossins, A. [Laboratory for Environmental Gene Regulation, University of Liverpool (United Kingdom); Crooks, J. [STFC Rutherford Appleton Laboratories (United Kingdom); El-Gomati, M. [Electron Optics, Applied Electromagnetics and Electron Optics, University of York (United Kingdom); Evans, P.M. [Institute of Cancer Research, Sutton, Surrey SM2 5PT (United Kingdom)], E-mail: phil.evans@icr.ac.uk; Faruqi, W. [MRC Laboratory for Molecular Biology, Cambridge (United Kingdom); French, M. [STFC Rutherford Appleton Laboratories (United Kingdom); Gow, J. [Imaging for Space and Terrestrial Applications, Brunel University, London (United Kingdom)] (and others)

    2009-06-01

    MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)-designed for in-pixel intelligence; FPN-designed to develop novel techniques for reducing fixed pattern noise; HDR-designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS-with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)-a novel, stitched LAS; and eLeNA-which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.

  20. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    Science.gov (United States)

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
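    As an illustration of the kind of machine-readable artifact such a work flow produces, the sketch below serializes one measurement to XML. The element names here are invented for the example; the actual AIM model defines its own classes and schema.

    ```python
    import xml.etree.ElementTree as ET

    # Illustrative only: one quantitative finding serialized to XML in the
    # spirit of an AIM annotation; not the real AIM schema.
    def measurement_to_xml(study_uid, label, value, unit):
        ann = ET.Element("ImageAnnotation", {"studyInstanceUID": study_uid})
        calc = ET.SubElement(ann, "Calculation", {"description": label})
        ET.SubElement(calc, "CalculationResult",
                      {"value": str(value), "unitOfMeasure": unit})
        return ET.tostring(ann, encoding="unicode")

    xml_text = measurement_to_xml("1.2.3.4", "Lesion diameter", 14.2, "mm")
    print(xml_text)
    ```

    Because the values live in attributes of a fixed structure rather than free text, files like this can be loaded into a database and queried in aggregate, which is the data-mining benefit the abstract describes.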

  1. Multi-example feature-constrained back-projection method for image super-resolution

    Institute of Scientific and Technical Information of China (English)

    Junlei Zhang; Dianguang Gai; Xin Zhang; Xuemei Li

    2017-01-01

    Example-based super-resolution algorithms, which predict unknown high-resolution image information using a relationship model learnt from known high- and low-resolution image pairs, have attracted considerable interest in the field of image processing. In this paper, we propose a multi-example feature-constrained back-projection method for image super-resolution. Firstly, we take advantage of a feature-constrained polynomial interpolation method to enlarge the low-resolution image. Next, we consider low-frequency images of different resolutions to provide an example pair. Then, we use adaptive kNN search to find similar patches in the low-resolution image for every image patch in the high-resolution low-frequency image, leading to a regression model between similar patches to be learnt. The learnt model is applied to the low-resolution high-frequency image to produce high-resolution high-frequency information. An iterative back-projection algorithm is used as the final step to determine the final high-resolution image. Experimental results demonstrate that our method improves the visual quality of the high-resolution image.
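    The final back-projection step can be sketched with a simple forward model (2x block averaging) standing in for the true imaging operator; the patch-regression stages of the actual method are omitted, so this is a minimal sketch of the iteration only.

    ```python
    import numpy as np

    # Minimal iterative back-projection: repeatedly simulate the low-resolution
    # image from the current high-resolution estimate and back-project the
    # residual. Block averaging and nearest-neighbour upsampling stand in for
    # the real degradation and back-projection operators.
    def downsample(x):
        return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

    def upsample(x):
        return np.kron(x, np.ones((2, 2)))

    def ibp(lr, iters=50, step=1.0):
        hr = upsample(lr)                        # initial estimate
        for _ in range(iters):
            residual = lr - downsample(hr)       # error in LR space
            hr = hr + step * upsample(residual)  # back-project the error
        return hr

    rng = np.random.default_rng(1)
    truth = rng.random((16, 16))
    lr = downsample(truth)
    hr = ibp(lr)
    print(np.max(np.abs(downsample(hr) - lr)) < 1e-6)  # → True
    ```

    The iteration enforces consistency with the observed low-resolution image; the visual quality of the result then depends on how good the initial high-frequency prediction was.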

  2. Research interface for experimental ultrasound imaging - the CFU grabber project

    DEFF Research Database (Denmark)

    Pedersen, Mads Møller; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    …system RASMUS. Furthermore, precise scanner settings are stored for inter- and intra-observer studies. The resulting images are used for clinical evaluation. Method and materials: The ultrasound scanner's research interface is connected to a graphical grabber card in a Windows PC (Grabber PC). The grabber…

  3. Image restoration by the method of convex projections: part 2 applications and numerical results.

    Science.gov (United States)

    Sezan, M I; Stark, H

    1982-01-01

    The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image and the results compared with the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
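    A minimal example of the alternating-projection idea, in the Gerchberg-Papoulis setting of band-limited extrapolation: project alternately onto the set of signals agreeing with the observed samples and the set of band-limited signals (both convex). The signal, band, and sampling pattern are arbitrary choices for the demonstration.

    ```python
    import numpy as np

    def pocs_extrapolate(observed, mask, keep, iters=2000):
        """Alternate projections onto C1 = {x : x matches observed samples}
        and C2 = {x : spectrum supported on `keep`}; both sets are convex."""
        x = np.where(mask, observed, 0.0)
        for _ in range(iters):
            X = np.fft.fft(x)
            X[~keep] = 0.0                 # project onto the band-limited set
            x = np.fft.ifft(X).real
            x[mask] = observed[mask]       # project onto the data-consistency set
        return x

    n = 64
    t = np.arange(n)
    truth = np.cos(2 * np.pi * 2 * t / n) + 0.5 * np.sin(2 * np.pi * 3 * t / n)

    keep = np.zeros(n, dtype=bool)
    keep[[0, 1, 2, 3, n - 3, n - 2, n - 1]] = True  # low-pass support, |f| <= 3

    rng = np.random.default_rng(0)
    mask = np.zeros(n, dtype=bool)
    mask[rng.permutation(n)[:40]] = True            # 40 scattered known samples

    rec = pocs_extrapolate(truth, mask, keep)
    print(np.max(np.abs(rec - truth)) < 1e-5)
    ```

    The strength of the POCS formulation compared here is that further a priori knowledge (positivity, amplitude bounds, support constraints) can be added simply as extra convex-set projections in the same loop.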

  4. ArdenML: The Arden Syntax Markup Language (or Arden Syntax: It's Not Just Text Any More!)

    Science.gov (United States)

    Sailors, R. Matthew

    2001-01-01

    It is no longer necessary to think of Arden Syntax as simply a text-based knowledge-base format. The development of ArdenML (the Arden Syntax Markup Language), an XML-based markup language, allows structured access to most of the maintenance and library categories without the need to write or buy a compiler, and may lead to the development of simple commercial and freeware tools for processing Arden Syntax Medical Logic Modules (MLMs).
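    The kind of structured access described above can be sketched with standard XML tooling; the element names below are illustrative stand-ins, not taken from the actual ArdenML schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical ArdenML-like fragment; tag names are invented for illustration.
MLM_XML = """
<mlm>
  <maintenance>
    <title>Penicillin allergy alert</title>
    <author>J. Clinician</author>
    <version>1.0</version>
  </maintenance>
  <library>
    <purpose>Warn when penicillin is ordered for an allergic patient.</purpose>
  </library>
</mlm>
"""

root = ET.fromstring(MLM_XML)
# Maintenance and library slots become directly addressable, no MLM compiler needed.
title = root.findtext("maintenance/title")
purpose = root.findtext("library/purpose")
```

    Any generic XML parser can read or rewrite such slots, which is the point the abstract makes about avoiding a dedicated compiler.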

  5. The duality of XML Markup and Programming notation

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2003-01-01

    In web projects it is often necessary to mix XML notation and program notation in a single document or program. In mono-lingual situations, the XML notation is either subsumed in the program or the program notation is subsumed in the XML document. As an introduction we analyze XML notation and pr...

  6. Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.

    Science.gov (United States)

    Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo

    2017-03-03

    Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens owing to the availability of the water window region. In particular, projection-type microscopy has the advantages of a wide viewing area, an easy zooming function, and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, it was found that the correction is not effective for all images, especially for images with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as a substitute for chromosome images, one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two different parameters to evaluate noise effects for each situation in which the iteration procedure was unsuccessful, and proposed an upper limit on the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that the new simulation and noise-evaluation method is useful for image processing in which background noise cannot be ignored relative to the specimen images.

  7. Semantically supporting data discovery, markup and aggregation in the European Marine Observation and Data Network (EMODnet)

    Science.gov (United States)

    Lowry, Roy; Leadbetter, Adam

    2014-05-01

    The semantic content of the NERC Vocabulary Server (NVS) has been developed over thirty years. It has been used to mark up metadata and data in a wide range of international projects, including the European Commission (EC) Framework Programme 7 projects SeaDataNet and The Open Service Network for Marine Environmental Data (NETMAR). Within the United States, the National Science Foundation projects Rolling Deck to Repository and Biological & Chemical Data Management Office (BCO-DMO) use concepts from NVS for markup. Further, typed relationships connect NVS concepts with terms served by the Marine Metadata Interoperability Ontology Registry and Repository. The largest group of concepts publicly served from NVS (35% of ~82,000) forms the British Oceanographic Data Centre (BODC) Parameter Usage Vocabulary (PUV). The PUV is instantiated on the NVS as a SKOS concept collection. These terms are used to describe the individual channels in data and metadata served by, for example, BODC, SeaDataNet and BCO-DMO. The PUV terms are designed to be very precise and may contain a high level of detail. Some users have reported that the PUV is difficult to navigate due to its size and complexity (a problem CSIRO have begun to address by deploying a SISSVoc interface to the NVS), and it has been difficult to aggregate data because multiple PUV terms can - with full validity - be used to describe the same data channels. Better approaches to data aggregation are required as a use case for the PUV from the EC European Marine Observation and Data Network (EMODnet) Chemistry project. One solution, proposed and demonstrated during the course of the NETMAR project, is to build new SKOS concept collections which formalise the desired aggregations for given applications and use typed relationships to state which PUV concepts contribute to a specific aggregation. Development of these new collections requires input from a group of experts in the application domain who can decide which PUV

  8. Quantitative estimation of brain atrophy and function with PET and MRI two-dimensional projection images

    International Nuclear Information System (INIS)

    Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko; Toyama, Hinako; Ishii, Kenji; Senda, Michio

    2001-01-01

    The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial images from positron emission tomography (PET) and magnetic resonance imaging (MRI) were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between the 2D projection images of MRI and of cerebral blood flow (CBF) or 18F-fluorodeoxyglucose (FDG) PET, and the sulci were extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulci on the 2D-projection MRI, the cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding the sulci on the 2D-projection PET image, and the relationship between cerebral atrophy and function was then evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or the cerebral blood flow was investigated. The method was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young group. The mean value and the variance of the cerebral blood flow in the young group were greater than those in the aged group. The sulci were similarly extracted using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorder of the cerebral cortex. (author)
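    The Mollweide projection used above maps longitude and latitude to the plane while preserving area; a minimal implementation solves the standard auxiliary equation 2θ + sin 2θ = π sin φ by Newton iteration (generic map projection math, not the authors' brain-surface pipeline):

```python
import math

def mollweide(lon, lat, tol=1e-10):
    """Equal-area Mollweide projection: (lon, lat) in radians -> (x, y)."""
    if abs(abs(lat) - math.pi / 2) < 1e-12:
        theta = math.copysign(math.pi / 2, lat)  # poles: closed form
    else:
        theta = lat
        while True:  # Newton iteration for 2*theta + sin(2*theta) = pi*sin(lat)
            d = (2 * theta + math.sin(2 * theta) - math.pi * math.sin(lat)) \
                / (2 + 2 * math.cos(2 * theta))
            theta -= d
            if abs(d) < tol:
                break
    x = (2 * math.sqrt(2) / math.pi) * lon * math.cos(theta)
    y = math.sqrt(2) * math.sin(theta)
    return x, y
```

    Equal-area behaviour is what lets cortical surface measurements (such as the atrophy fraction) be compared quantitatively on the flattened map.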

  9. A projection graphic display for the computer aided analysis of bubble chamber images

    International Nuclear Information System (INIS)

    Solomos, E.

    1979-01-01

    A projection graphic display for aiding the analysis of bubble chamber photographs has been developed by the Instrumentation Group of the EF Division at CERN. The display image is generated on a very high brightness cathode ray tube and projected onto the table of the scanning-measuring machines, superimposed on the image of the bubble chamber. The display can send messages to the operator and aid the measurement by indicating directly on the chamber image which tracks have been measured correctly and which have not. (orig.)

  10. Development of Markup Language for Medical Record Charting: A Charting Language.

    Science.gov (United States)

    Jung, Won-Mo; Chae, Younbyoung; Jang, Bo-Hyoung

    2015-01-01

    Many efforts are now being made to collect electronic medical records (EMRs). However, structuring the data format for an EMR is an especially labour-intensive task for practitioners. Here we propose a new markup language for medical record charting (called Charting Language), which borrows useful properties from programming languages. With Charting Language, text data describing dynamic situations can easily be used to extract information.

  11. Question Answering System Based on Artificial Intelligence Markup Language as an Information Medium

    OpenAIRE

    Fajrin Azwary; Fatma Indriani; Dodon T. Nugrahadi

    2016-01-01

    Artificial intelligence technology can nowadays be implemented in a variety of forms, such as chatbots, and with various methods, one of them being the Artificial Intelligence Markup Language (AIML). AIML uses template matching, comparing the input against specific patterns in its database. The AIML template design process begins with determining the necessary information, which is then formed into questions, and these questions are adapted to AIML patterns. From the results of the study, it can be seen that the Question-Answering...
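    AIML's template matching can be sketched as follows (a toy matcher in which '*' matches one or more words, as in AIML patterns; the categories shown are invented examples, not from the study):

```python
def match(pattern, sentence):
    """Match an AIML-style pattern (words plus '*' wildcards) against an input sentence."""
    p, s = pattern.upper().split(), sentence.upper().split()
    def rec(i, j):
        if i == len(p):
            return j == len(s)
        if p[i] == "*":  # '*' consumes one or more input words
            return any(rec(i + 1, k) for k in range(j + 1, len(s) + 1))
        return j < len(s) and p[i] == s[j] and rec(i + 1, j + 1)
    return rec(0, 0)

def respond(categories, sentence):
    """Return the template of the first category whose pattern matches."""
    for pattern, template in categories:
        if match(pattern, sentence):
            return template
    return None

# Hypothetical knowledge base of (pattern, template) categories.
categories = [("HELLO *", "Hi there!"),
              ("WHAT IS *", "Let me look that up.")]
```

    A real AIML interpreter adds `<srai>` recursion, `<that>` context and wildcard capture, but pattern-to-template matching is the core mechanism the abstract refers to.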

  12. Firm Dynamics and Markup Variations: Implications for Sunspot Equilibria and Endogenous Economic Fluctuation

    OpenAIRE

    Nir Jaimovich

    2007-01-01

    This paper analyzes how the interaction between firms’ entry-and-exit decisions and variations in competition gives rise to self-fulfilling, expectation-driven fluctuations in aggregate economic activity and in measured total factor productivity (TFP). The analysis is based on a dynamic general equilibrium model in which net business formation is endogenously procyclical and leads to endogenous countercyclical variations in markups. This interaction leads to indeterminacy in which economic fl...

  13. Standard generalized markup language: A guide for transmitting encoded bibliographic records

    International Nuclear Information System (INIS)

    1994-09-01

    This document provides the guidance necessary to transmit to DOE's Office of Scientific and Technical Information (OSTI) an encoded bibliographic record that conforms to International Standard ISO 8879, Information Processing -- Text and office systems -- Standard Generalized Markup Language (SGML). Included in this document are element and attribute tag definitions, sample bibliographic records, the bibliographic document type definition, and instructions on how to transmit a bibliographic record electronically to OSTI

  14. Standard generalized markup language: A guide for transmitting encoded bibliographic records

    Energy Technology Data Exchange (ETDEWEB)

    1994-09-01

    This document provides the guidance necessary to transmit to DOE's Office of Scientific and Technical Information (OSTI) an encoded bibliographic record that conforms to International Standard ISO 8879, Information Processing -- Text and office systems -- Standard Generalized Markup Language (SGML). Included in this document are element and attribute tag definitions, sample bibliographic records, the bibliographic document type definition, and instructions on how to transmit a bibliographic record electronically to OSTI.

  15. Design and Development of a New Multi-Projection X-Ray System for Chest Imaging

    Science.gov (United States)

    Chawla, Amarpreet S.; Boyce, Sarah; Washington, Lacey; McAdams, H. Page; Samei, Ehsan

    2009-02-01

    Overlapping anatomical structures may confound the detection of abnormal pathology, including lung nodules, in conventional single-projection chest radiography. To minimize this fundamental limiting factor, a dedicated digital multi-projection system for chest imaging was recently developed at the Radiology Department of Duke University. We are reporting the design of the multi-projection imaging system and its initial performance in an ongoing clinical trial. The system is capable of acquiring multiple full-field projections of the same patient along both the horizontal and vertical axes at variable speeds and acquisition frame rates. These images acquired in rapid succession from slightly different angles about the posterior-anterior (PA) orientation can be correlated to minimize the influence of overlying anatomy. The developed system has been tested for repeatability and motion blur artifacts to investigate its robustness for clinical trials. Excellent geometrical consistency was found in the tube motion, with positional errors for clinical settings within 1%. The effect of tube-motion on the image quality measured in terms of impact on the modulation transfer function (MTF) was found to be minimal. The system was deemed clinic-ready and a clinical trial was subsequently launched. The flexibility of image acquisition built into the system provides a unique opportunity to easily modify it for different clinical applications, including tomosynthesis, correlation imaging (CI), and stereoscopic imaging.

  16. Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.

    Science.gov (United States)

    Sautter, Guido; Böhm, Klemens; Agosti, Donat

    2007-01-01

    Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where this process has just started. Digitized biosystematics literature requires a very precise and fine grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on sentence level and below is cumbersome and time consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built in order to support the user in this process as far as possible: Its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these numbers are even higher.

  17. Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease

    Science.gov (United States)

    Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.

    1998-01-01

    The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this mark-up process is time consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.

  18. Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique.

    Science.gov (United States)

    Besharati Tabrizi, Leila; Mahvash, Mehran

    2015-07-01

    An augmented reality system has been developed for image-guided neurosurgery to project images with regions of interest onto the patient's head, skull, or brain surface in real time. The aim of this study was to evaluate system accuracy and to perform the first intraoperative application. Images of segmented brain tumors in different localizations and sizes were created in 10 cases and were projected to a head phantom using a video projector. Registration was performed using 5 fiducial markers. After each registration, the distance of the 5 fiducial markers from the visualized tumor borders was measured on the virtual image and on the phantom. The difference was considered a projection error. Moreover, the image projection technique was intraoperatively applied in 5 patients and was compared with a standard navigation system. Augmented reality visualization of the tumors succeeded in all cases. The mean time for registration was 3.8 minutes (range 2-7 minutes). The mean projection error was 0.8 ± 0.25 mm. There were no significant differences in accuracy according to the localization and size of the tumor. Clinical feasibility and reliability of the augmented reality system could be proved intraoperatively in 5 patients (projection error 1.2 ± 0.54 mm). The augmented reality system is accurate and reliable for the intraoperative projection of images to the head, skull, and brain surface. The ergonomic advantage of this technique improves the planning of neurosurgical procedures and enables the surgeon to use direct visualization for image-guided neurosurgery.

  19. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies.

    Science.gov (United States)

    Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross

    2016-06-01

    To develop and evaluate a fast and simple tool called dpetstep (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations, as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of petstep (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby the effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). dpetstep was 8000 times faster than MC. Dynamic images from dpetstep had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dpetstep and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dpetstep images conformed well to MC images, confirmed visually by scatter plot histograms and statistically by tumor region-of-interest histogram comparisons that showed no significant differences. The authors demonstrated that dpetstep generates both dynamic images and subsequent parametric images with noise properties very similar to those of MC images, in a fraction of the time. They believe dpetstep to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models, it may not be suitable for studies investigating these phenomena. dpetstep can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
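    The per-voxel time-activity-curve generation that dpetstep performs can be illustrated with a one-tissue-compartment sketch (a simplified stand-in for the tool's kinetic models and full noise chain; parameter values are arbitrary):

```python
import numpy as np

def tissue_tac(cp, k1, k2, dt):
    """One-tissue-compartment TAC: C_T = K1*exp(-k2*t) convolved with plasma input C_p."""
    t = np.arange(len(cp)) * dt
    irf = k1 * np.exp(-k2 * t)                 # impulse response function
    return np.convolve(cp, irf)[: len(cp)] * dt

def noisy_frames(tac, scale=1e4, rng=None):
    """Simulate counting noise by Poisson-sampling each frame (much simpler than dpetstep)."""
    rng = np.random.default_rng(rng)
    lam = np.maximum(tac, 0) * scale
    return rng.poisson(lam) / scale
```

    Repeating this per voxel of a parametric image, then reconstructing each noisy frame, is the overall workflow the abstract describes.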

  20. WE-G-BRF-04: Robust Real-Time Volumetric Imaging Based On One Single Projection

    International Nuclear Information System (INIS)

    Xu, Y; Yan, H; Ouyang, L; Wang, J; Jiang, S; Jia, X; Zhou, L

    2014-01-01

    Purpose: Real-time volumetric imaging is highly desirable to provide instantaneous image guidance for lung radiation therapy. This study proposes a scheme to achieve this goal using one single projection by utilizing sparse learning and a principal component analysis (PCA) based lung motion model. Methods: A patient-specific PCA-based lung motion model is first constructed by analyzing deformable vector fields (DVFs) between a reference image and 4DCT images at each phase. At the training stage, we “learn” the relationship between the DVFs and the projection using sparse learning. Specifically, we first partition the projections into patches, and then apply sparse learning to automatically identify patches that best correlate with the principal components of the DVFs. Once the relationship is established, at the application stage, we first employ a patch-based intensity correction method to overcome the problem of the different intensity scales between the calculated projection in the training stage and the measured projection in the application stage. The corrected projection image is then fed to the trained model to derive a DVF, which is applied to the reference image, yielding a volumetric image corresponding to the projection. We have validated our method through an NCAT phantom simulation case and one experimental case. Results: Sparse learning can automatically select the patches containing motion information, such as those around the diaphragm. For the simulation case, over 98% of the lung region passes the generalized gamma test (10HU/1mm), indicating combined accuracy in both the intensity and the spatial domain. For the experimental case, the average tumor localization errors projected onto the imager are 0.68 mm and 0.4 mm in the axial and tangential directions, respectively. Conclusion: The proposed method is capable of accurately generating a volumetric image using one single projection. It will potentially offer real-time volumetric image guidance to facilitate lung

  1. WE-G-BRF-04: Robust Real-Time Volumetric Imaging Based On One Single Projection

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Yan, H; Ouyang, L; Wang, J; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)

    2014-06-15

    Purpose: Real-time volumetric imaging is highly desirable to provide instantaneous image guidance for lung radiation therapy. This study proposes a scheme to achieve this goal using one single projection by utilizing sparse learning and a principal component analysis (PCA) based lung motion model. Methods: A patient-specific PCA-based lung motion model is first constructed by analyzing deformable vector fields (DVFs) between a reference image and 4DCT images at each phase. At the training stage, we “learn” the relationship between the DVFs and the projection using sparse learning. Specifically, we first partition the projections into patches, and then apply sparse learning to automatically identify patches that best correlate with the principal components of the DVFs. Once the relationship is established, at the application stage, we first employ a patch-based intensity correction method to overcome the problem of the different intensity scales between the calculated projection in the training stage and the measured projection in the application stage. The corrected projection image is then fed to the trained model to derive a DVF, which is applied to the reference image, yielding a volumetric image corresponding to the projection. We have validated our method through an NCAT phantom simulation case and one experimental case. Results: Sparse learning can automatically select the patches containing motion information, such as those around the diaphragm. For the simulation case, over 98% of the lung region passes the generalized gamma test (10HU/1mm), indicating combined accuracy in both the intensity and the spatial domain. For the experimental case, the average tumor localization errors projected onto the imager are 0.68 mm and 0.4 mm in the axial and tangential directions, respectively. Conclusion: The proposed method is capable of accurately generating a volumetric image using one single projection. It will potentially offer real-time volumetric image guidance to facilitate lung

  2. Improvement of image quality using interpolated projection data estimation method in SPECT

    International Nuclear Information System (INIS)

    Takaki, Akihiro; Soma, Tsutomu; Murase, Kenya; Kojima, Akihiro; Asao, Kimie; Kamada, Shinya; Matsumoto, Masanori

    2009-01-01

    General data acquisition for single photon emission computed tomography (SPECT) is performed in 90 or 60 directions, with a coarse pitch of approximately 4-6 deg for a rotation of 360 deg or 180 deg, using a gamma camera. No data between adjacent projections are sampled under these circumstances. The aim of this study was to develop a method to improve SPECT image quality by generating the lacking projection data through interpolation of data obtained with a coarse pitch such as 6 deg. The projection data set at each individual degree in 360 directions was generated by a weighted-average interpolation method from the projection data acquired with a coarse sampling angle (interpolated projection data estimation processing method, IPDE method). The IPDE method was applied to numerical digital phantom data, actual phantom data and clinical brain data with Tc-99m ethyl cysteinate dimer (ECD). All SPECT images were reconstructed by the filtered back-projection method and compared with the original SPECT images. The results confirmed that streak artifacts decreased, by apparently increasing the sampling number in SPECT after interpolation, and that the signal-to-noise (S/N) ratio in terms of the root mean square uncertainty value also improved. Furthermore, the normalized mean square error values, compared with the standard images, remained similar after interpolation. Moreover, the contrast and concentration ratios improved after interpolation. These results indicate that effective improvement of image quality can be expected with interpolation. Thus, image quality and the ability to depict images can be improved while maintaining the present acquisition time, and this can be achieved more effectively than at present even if the acquisition time is reduced. (author)
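    The weighted-average interpolation of the missing angles can be sketched as follows (a simplified reading of the IPDE idea: each missing angle is an angle-weighted average of its two nearest measured projections; the paper's exact weighting may differ):

```python
import numpy as np

def interpolate_projections(proj, coarse_step=6, fine_step=1):
    """Fill missing angles over a 360-deg orbit by angle-weighted averaging of the
    two nearest measured projections. proj: (n_angles, n_bins) sampled every
    `coarse_step` degrees; `fine_step` must divide `coarse_step`."""
    n, bins = proj.shape
    out = np.empty((360 // fine_step, bins))
    for a in range(0, 360, fine_step):
        lo = (a // coarse_step) % n        # nearest measured angle below
        hi = (lo + 1) % n                  # nearest measured angle above (wraps)
        w = (a % coarse_step) / coarse_step
        out[a // fine_step] = (1 - w) * proj[lo] + w * proj[hi]
    return out
```

    Reconstructing from the densified sinogram with filtered back-projection is then what suppresses the streak artifacts the abstract reports.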

  3. Image reconstruction for digital breast tomosynthesis (DBT) by using projection-angle-dependent filter functions

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Park, Chulkyu; Cho, Hyosung; Je, Uikyu; Hong, Daeki; Lee, Minsik; Cho, Heemoon; Choi, Sungil; Koo, Yangseo [Yonsei University, Wonju (Korea, Republic of)

    2014-09-15

    Digital breast tomosynthesis (DBT) is considered in clinics as a standard three-dimensional imaging modality, allowing the earlier detection of cancer. It typically acquires only 10-30 projections over a limited angle range of 15-60 deg with a stationary detector and typically uses a computationally-efficient filtered-backprojection (FBP) algorithm for image reconstruction. However, a common FBP algorithm yields poor image quality resulting from the loss of average image value and the presence of severe image artifacts due to the elimination of the dc component of the image by the ramp filter and to the incomplete data, respectively. As an alternative, iterative reconstruction methods are often used in DBT to overcome these difficulties, even though they are still computationally expensive. In this study, as a compromise, we considered a projection-angle-dependent filtering method in which one-dimensional geometry-adapted filter kernels are computed with the aid of a conjugate-gradient method and are incorporated into the standard FBP framework. We implemented the proposed algorithm and performed systematic simulation works to investigate the imaging characteristics. Our results indicate that the proposed method is superior to a conventional FBP method for DBT imaging and has a comparable computational cost, while preserving good image homogeneity and edge sharpening with no serious image artifacts.

  4. Coalescence measurements for evolving foams monitored by real-time projection imaging

    International Nuclear Information System (INIS)

    Myagotin, A; Helfen, L; Baumbach, T

    2009-01-01

    Real-time radiographic projection imaging, together with novel spatio-temporal image analysis, is shown to be a powerful technique for the quantitative analysis of coalescence processes accompanying the generation and temporal evolution of foams and emulsions. Coalescence events can be identified as discontinuities in a spatio-temporal image representing a sequence of projection images. Detection, identification of intensity and localization of the discontinuities exploit a violation criterion of the Fourier shift theorem and are based on recursive spatio-temporal image partitioning. The proposed method is suited for automated measurement of discontinuity rates (i.e., discontinuity intensity per unit time), so that large series of radiographs can be analyzed without user intervention. The application potential is demonstrated by the quantification of coalescence during the formation and decay of metal foams monitored by real-time X-ray radiography.

  5. Improved detection of chronic myocardial infarction with Fourier amplitude and phase imaging in two projections

    International Nuclear Information System (INIS)

    Akins, E.W.; Scott, E.A.; Williams, C.M.

    1987-01-01

    Twenty-seven patients with 33 chronic myocardial infarctions underwent MR imaging and radionuclide ventriculography at rest. The radionuclide ventriculograms, in left anterior oblique (LAO) and left posterior oblique (LPO) projections, were analyzed by two independent observers by visual inspection and by combined Fourier-transformed amplitude and phase imaging. Only 15 (45%) of the 33 infarctions were detected by visual inspection, but 21 (64%) were detected on the LAO Fourier-transformed images alone. Thirty (91%) were detected by using both LAO and LPO Fourier-transformed images. On MR imaging, 28 (85%) of the myocardial infarctions appeared as areas of focal wall thinning. Combined Fourier-transformed amplitude and phase imaging in both LAO and LPO views discloses more myocardial infarctions than visual inspection or LAO Fourier-transformed images alone because inferior infarctions, which are frequently missed in the LAO view, are easily seen in the LPO view.
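    Fourier amplitude and phase images of this kind are computed by taking, for each pixel, the first temporal harmonic of the gated frame sequence; a minimal sketch (generic first-harmonic analysis, not the authors' clinical software):

```python
import numpy as np

def fourier_amplitude_phase(frames):
    """First-harmonic amplitude and phase images from a gated frame sequence.
    frames: array (n_frames, ny, nx) covering one cardiac cycle."""
    F = np.fft.fft(frames, axis=0)
    h1 = F[1]                                  # first temporal harmonic, per pixel
    amplitude = 2 * np.abs(h1) / frames.shape[0]
    phase = np.angle(h1)
    return amplitude, phase
```

    Pixels over akinetic (infarcted) wall show reduced amplitude, and delayed contraction shows up as a shifted phase, which is why the combined images outperform visual inspection.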

  6. WE-E-18A-11: Fluoro-Tomographic Images From Projections of On-Board Imager (OBI) While Gantry Is Moving

    Energy Technology Data Exchange (ETDEWEB)

    Yi, B; Hu, E; Yu, C; Lasio, G [Univ. of Maryland School Of Medicine, Baltimore, MD (United States)

    2014-06-15

    Purpose: A method to generate a series of fluoro-tomographic images (FTI) of the slice of interest (SOI) from projection images of the on-board imager (OBI) while the gantry is moving is developed and tested. Methods: Tomographic imaging via background subtraction (TIBS) has been published by our group. TIBS uses a priori anatomical information from a previous CT scan to isolate a SOI from a planar kV image by factoring out the attenuation by tissues outside the SOI (background). We extended the idea to 4D TIBS, which enables images to be generated from projections at different gantry angles. A set of background images for different angles is prepared. The background image at a given gantry angle is subtracted from the projection image at the same angle to generate a TIBS image, which is then converted to a reference angle. The 4D TIBS is the set of TIBS images originating from gantry angles other than the reference angle. Projection images of lung patients acquired for CBCT are used to test the 4D TIBS. Results: Fluoroscopic images of a coronal plane of lung patients were obtained from CBCT projections at different gantry angles and times. Changes in the morphology of the hilar vessels due to breathing and heart beating are visible in the coronal plane, generated from projection images at gantry angles other than antero-posterior. No breathing surrogate or sorting process is needed. Unlike tomosynthesis, FTI from 4D TIBS maintains the independence of each of the projections and thereby reveals temporal variations within the SOI. Conclusion: FTI, fluoroscopic imaging of a SOI with X-ray projections generated directly from the X-ray projection images at different gantry angles, was tested with a lung case and proven feasible. This technique can be used for on-line imaging of moving targets. NIH Grant R01CA133539.

  7. Measurement of inter and intra fraction organ motion in radiotherapy using cone beam CT projection images

    International Nuclear Information System (INIS)

    Marchant, T E; Amer, A M; Moore, C J

    2008-01-01

A method is presented for extraction of intra and inter fraction motion of seeds/markers within the patient from cone beam CT (CBCT) projection images. The position of the marker is determined on each projection image and fitted to a function describing the projection of a fixed point onto the imaging panel at different gantry angles. The fitted parameters provide the mean marker position with respect to the isocentre. Differences between the theoretical function and the actual projected marker positions are used to estimate the range of intra fraction motion and the principal motion axis in the transverse plane. The method was validated using CBCT projection images of a static marker at known locations and of a marker moving with known amplitude. The mean difference between actual and measured motion range was less than 1 mm in all directions, although errors of up to 5 mm were observed when large amplitude motion was present in an orthogonal direction. In these cases it was possible to calculate the range of motion magnitudes consistent with the observed marker trajectory. The method was shown to be feasible using clinical CBCT projections of a pancreas cancer patient.
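The fitting idea above can be illustrated with a deliberately simplified model: under a parallel-beam approximation (the paper's divergent CBCT geometry is more involved), a fixed point (x, y) projects to lateral panel position u(θ) = x·cosθ + y·sinθ. A least-squares fit over all gantry angles recovers the mean position, and the residual spread is a crude estimate of intra-fraction motion. All numbers here are illustrative.

```python
import math

def fit_mean_position(angles, u):
    """Least-squares fit of u(theta) = x*cos(theta) + y*sin(theta)
    (parallel-beam approximation of a fixed point's projected lateral
    position) to measured marker positions; returns (x, y) via the
    2x2 normal equations."""
    scc = sum(math.cos(t) ** 2 for t in angles)
    sss = sum(math.sin(t) ** 2 for t in angles)
    scs = sum(math.sin(t) * math.cos(t) for t in angles)
    suc = sum(ui * math.cos(t) for ui, t in zip(u, angles))
    sus = sum(ui * math.sin(t) for ui, t in zip(u, angles))
    det = scc * sss - scs * scs
    return (suc * sss - sus * scs) / det, (sus * scc - suc * scs) / det

def residual_range(angles, u, x, y):
    """Spread of fit residuals: a rough intra-fraction motion estimate."""
    r = [ui - (x * math.cos(t) + y * math.sin(t)) for ui, t in zip(u, angles)]
    return max(r) - min(r)

angles = [i * 2 * math.pi / 180 for i in range(180)]          # full rotation
u = [12.0 * math.cos(t) - 4.0 * math.sin(t) for t in angles]  # static marker at (12, -4)
x, y = fit_mean_position(angles, u)
```

For a truly static marker the residual range is numerically zero; real marker motion would show up as nonzero residuals varying with angle.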

  8. Fluorescence In Situ Hybridization (FISH) Signal Analysis Using Automated Generated Projection Images

    Directory of Open Access Journals (Sweden)

    Xingwei Wang

    2012-01-01

    Full Text Available Fluorescence in situ hybridization (FISH) tests provide promising molecular imaging biomarkers to detect and diagnose cancers and genetic disorders more accurately and reliably. Since current manual FISH signal analysis is inefficient and inconsistent, which limits its clinical utility, developing automated FISH image scanning systems and computer-aided detection (CAD) schemes has attracted research interest. To acquire high-resolution FISH images in a multi-spectral scanning mode, a huge amount of image data, comprising a stack of multiple three-dimensional (3-D) image slices, is generated from a single specimen. Automated preprocessing of these scanned images to eliminate non-useful and redundant data is important to make automated FISH tests acceptable in clinical applications. In this study, a dual-detector fluorescence image scanning system was applied to scan four specimen slides with FISH-probed chromosome X. A CAD scheme was developed to map the FISH-probed signals recorded in the multiple imaging slices into 2-D projection images. The CAD scheme was then applied to each projection image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm, identify FISH-probed signals using a top-hat transform, and compute the ratios between normal and abnormal cells. To assess CAD performance, the FISH-probed signals were also independently detected visually by an observer. The Kappa coefficients for agreement between CAD and the observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots in the four testing samples. The study demonstrated the feasibility of automated FISH signal analysis by applying a CAD scheme to automatically generated 2-D projection images.

  9. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    International Nuclear Information System (INIS)

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-01

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. 
The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
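The threshold-based back-projection described above can be sketched in a stripped-down form: for each pixel of an image plane, compute how far the direction from the cone apex to that pixel deviates from the cone's half-angle (a stand-in for the paper's "solution matrix"), then threshold that deviation into a binary intersection curve. Geometry, grid size, and threshold below are illustrative, not the authors' optimized implementation.

```python
import math

def cone_slice_backprojection(apex, axis, half_angle, z0,
                              n=11, span=5.0, thresh=0.05):
    """For each pixel (x, y) of the image plane z = z0, compute the
    angular deviation (radians) of the apex->pixel direction from the
    cone's half-angle, then threshold that 'solution matrix' into a
    binary intersection curve (1 = on the curve)."""
    ax, ay, az = axis
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    ax, ay, az = ax / norm, ay / norm, az / norm
    curve = []
    for iy in range(n):
        row = []
        for ix in range(n):
            x = -span + 2 * span * ix / (n - 1)
            y = -span + 2 * span * iy / (n - 1)
            vx, vy, vz = x - apex[0], y - apex[1], z0 - apex[2]
            vn = math.sqrt(vx * vx + vy * vy + vz * vz)
            cosang = (vx * ax + vy * ay + vz * az) / vn
            dev = abs(math.acos(max(-1.0, min(1.0, cosang))) - half_angle)
            row.append(1 if dev < thresh else 0)
        curve.append(row)
    return curve

# Cone with apex at the origin, axis +z, 45-degree half-angle:
# its intersection with the plane z = 3 is a circle of radius 3.
curve = cone_slice_backprojection((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                                  math.pi / 4, 3.0)
```

Repeating this for every slice and every detector event yields the 3-D back-projection image; the fixed threshold here is exactly the kind of "generically applicable threshold function" whose imperfection the authors note.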

  10. JPEG2000-coded image error concealment exploiting convex sets projections.

    Science.gov (United States)

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

    Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the signal's spatial correlation, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the third is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering introduced undesired side effects that negatively offset the advantages. This problem was overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
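The alternating-projection idea behind the method can be shown with a self-contained 1-D analogue: instead of the paper's wavelet-domain restoration, this sketch alternates a band-limiting projection (zeroing high DFT coefficients, playing the role of the low-pass step) with re-imposing the uncorrupted samples, in the style of Papoulis-Gerchberg. It illustrates the POCS mechanism only, not the authors' JPEG2000 pipeline.

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (stdlib only)."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def pocs_restore(corrupted, known_idx, band, iters=200):
    """Alternate two convex projections: (1) band-limit the signal by
    zeroing DFT bins above 'band', (2) re-impose the trusted samples."""
    x = [float(v) for v in corrupted]
    for _ in range(iters):
        X = dft(x)
        X = [Xj if min(j, len(X) - j) <= band else 0j
             for j, Xj in enumerate(X)]
        x = [v.real for v in dft(X, inverse=True)]
        for i in known_idx:          # restore uncorrupted data
            x[i] = corrupted[i]
    return x

n = 16
# Band-limited original (frequencies 0, 1, 2 only), two damaged samples.
orig = [1.0 + math.cos(2 * math.pi * k / n)
        + 0.5 * math.sin(4 * math.pi * k / n) for k in range(n)]
corrupted = list(orig)
for i in (3, 7):
    corrupted[i] = 0.0
known = [i for i in range(n) if i not in (3, 7)]
restored = pocs_restore(corrupted, known, band=2)
```

Because the original lies in both constraint sets (band-limited and matching the trusted samples), the iteration converges to it; the adaptive edge-aware filtering in the paper refines the spatial-domain projection beyond this uniform version.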

  11. Projection correction for the pixel-by-pixel basis in diffraction enhanced imaging

    International Nuclear Information System (INIS)

    Huang Zhifeng; Kang Kejun; Li Zheng

    2006-01-01

    Theories and methods of x-ray diffraction enhanced imaging (DEI) and computed tomography based on DEI (DEI-CT) have been investigated recently, but the phenomenon of projection offsets, which may affect the accuracy of refraction-angle image extraction methods and DEI-CT reconstruction algorithms, has seldom been of concern. This paper focuses on it. Projection offsets are revealed distinctly according to the equivalent rectilinear propagation model of DEI. An effective correction method using the equivalent positions of projection data is then presented to eliminate the errors induced by projection offsets. The correction method is validated by a computer simulation experiment, and extraction methods or reconstruction algorithms based on the corrected data can give more accurate results. The limitations of the correction method are discussed at the end

  12. Semantic markup of sensor capabilities: how simple is too simple?

    Science.gov (United States)

    Rueda-Velasquez, C. A.; Janowicz, K.; Fredericks, J.

    2016-12-01

    Semantics plays a key role for the publication, retrieval, integration, and reuse of observational data across the geosciences. In most cases, one can safely assume that the providers of such data, e.g., individual scientists, understand the observation context in which their data are collected, e.g., the observation procedure used, the sampling strategy, the feature of interest being studied, and so forth. However, can we expect the same to be true for the technical details of the sensors used, and especially the nuanced changes that can impact observations in often unpredictable ways? Should the burden of annotating sensor capabilities, firmware, operation ranges, and so forth really be part of a scientist's responsibility? Ideally, semantic annotations should be provided by the parties that understand these details and have a vested interest in maintaining these data. With manufacturers providing semantically enabled metadata for their sensors and instruments, observations could more easily be annotated and thereby enriched using this information. Unfortunately, today's sensor ontologies and tool chains developed for the Semantic Web community require expertise beyond the knowledge and interest of most manufacturers. Consequently, knowledge engineers need to better understand the sweet spot between simple ontologies/vocabularies and sufficient expressivity, as well as the tools required to enable manufacturers to share data about their sensors. Here, we report on the current results of EarthCube's X-Domes project, which aims to address the questions outlined above.

  13. Restoration of the analytically reconstructed OpenPET images by the method of convex projections

    Energy Technology Data Exchange (ETDEWEB)

    Tashima, Hideaki; Murayama, Hideo; Yamaya, Taiga [National Institute of Radiological Sciences, Chiba (Japan); Katsunuma, Takayuki; Suga, Mikio [Chiba Univ. (Japan). Graduate School of Engineering; Kinouchi, Shoko [National Institute of Radiological Sciences, Chiba (Japan); Chiba Univ. (Japan). Graduate School of Engineering; Obi, Takashi [Tokyo Institute of Technology (Japan). Interdisciplinary Graduate School of Science and Engineering; Kudo, Hiroyuki [Tsukuba Univ. (Japan). Graduate School of Systems and Information Engineering

    2011-07-01

    We have proposed the OpenPET geometry, which has gaps between detector rings and a physically open field-of-view. Image reconstruction for the OpenPET is classified as an incomplete problem because it does not satisfy Orlov's condition. Even so, simulation and experimental studies have shown that applying iterative methods such as the maximum likelihood expectation maximization (ML-EM) algorithm successfully reconstructs images in the gap area. However, the imaging process of the iterative methods in OpenPET imaging is not clear. Therefore, the aim of this study is to analyze OpenPET imaging analytically and estimate the implicit constraints involved in the iterative methods. To apply explicit constraints in OpenPET imaging, we used the method of convex projections to restore images reconstructed analytically, in which low-frequency components are lost. Numerical simulations showed that similar restoration effects are involved in both the ML-EM algorithm and the method of convex projections. Therefore, the iterative methods have the advantageous effect of restoring lost frequency components in OpenPET imaging. (orig.)

  14. Text extraction method for historical Tibetan document images based on block projections

    Science.gov (United States)

    Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian

    2017-11-01

    Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is considered as a text area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using information on the categories of connected components and corner point density. By analyzing the filtered blocks' projections, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
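The projection-profile analysis at the heart of such methods can be sketched in miniature: count ink pixels per row of a binary image and group contiguous rows with enough ink into candidate text bands. This is a simplified row-profile version of the paper's block projections, with a toy image rather than Tibetan document data.

```python
def text_bands(binary_image, min_ink=1):
    """Horizontal projection profile: count ink pixels per row and group
    contiguous rows with at least min_ink pixels into (start, end) bands,
    i.e. candidate text areas."""
    profile = [sum(row) for row in binary_image]
    bands, start = [], None
    for i, s in enumerate(profile):
        ink = s >= min_ink
        if ink and start is None:
            start = i
        if not ink and start is not None:
            bands.append((start, i - 1))
            start = None
    if start is not None:
        bands.append((start, len(profile) - 1))
    return profile, bands

img = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [0, 1, 0, 0],
]
profile, bands = text_bands(img)
```

The full method additionally filters blocks by connected-component category and corner-point density before analyzing their projections.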

  15. Intensity-based bayesian framework for image reconstruction from sparse projection data

    International Nuclear Information System (INIS)

    Rashed, E.A.; Kudo, Hiroyuki

    2009-01-01

    This paper presents a Bayesian framework for iterative image reconstruction from projection data measured over a limited number of views. The classical Nyquist sampling rule yields the minimum number of projection views required for accurate reconstruction. However, challenges exist in many medical and industrial imaging applications in which the projection data are undersampled. Classical analytical reconstruction methods such as filtered backprojection (FBP) are not a good choice in such cases because the data undersampling in the angular range introduces aliasing and streak artifacts that degrade lesion detectability. In this paper, we propose a Bayesian framework for maximum likelihood-expectation maximization (ML-EM)-based iterative reconstruction methods that incorporates a priori knowledge obtained from expected intensity information. The proposed framework is based on the fact that, in tomographic imaging, it is often possible to expect a set of intensity values of the reconstructed object with relatively high accuracy. The image reconstruction cost function is modified to include the l1-norm distance to the a priori known information. The proposed method has the potential to regularize the solution to reduce artifacts without missing lesions that cannot be expected from the a priori information. Numerical studies showed a significant improvement in image quality and lesion detectability under conditions of highly undersampled projection data. (author)
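The ML-EM baseline that the proposed framework builds on can be shown on a toy system (the paper's l1 intensity prior is omitted here; this is the plain multiplicative update only, with an illustrative 2x2 system matrix):

```python
def ml_em(A, y, iters=1000):
    """Plain ML-EM for y ~ A x with nonnegative x:
    x_j <- x_j * (sum_i A_ij * y_i / (Ax)_i) / (sum_i A_ij)."""
    m, n = len(A), len(A[0])
    x = [1.0] * n  # strictly positive start
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        back = [sum(A[i][j] * y[i] / Ax[i] for i in range(m))
                for j in range(n)]
        x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x

A = [[1.0, 1.0],
     [1.0, 2.0]]
x_true = [2.0, 3.0]
y = [sum(a * b for a, b in zip(row, x_true)) for row in A]  # noiseless data
x = ml_em(A, y)
```

The paper's contribution would enter as an extra penalty term pulling each x_j toward the nearest expected intensity value, which regularizes the update when the views (rows of A) are few.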

  16. Reconstruction of tomographic image from x-ray projections of a few views

    International Nuclear Information System (INIS)

    Kobayashi, Fujio; Yamaguchi, Shoichiro

    1982-01-01

    Computer tomographs have progressed rapidly, and in the latest high-performance types the photographing time has been shortened to less than 5 s, but clear images of hearts have not yet been obtained. The X-ray tomographs used so far irradiate X-rays from many directions and measure the projected data; by limiting the projection directions to a small number, this study aims to shorten the X-ray photographing time and reduce X-ray exposure. In this paper, a method is proposed by which tomographic images are reconstructed from projected data in a small number of directions by a generalized inverse matrix penalty method, a calculation method newly devised by the authors for this purpose. It is a kind of nonlinear programming method with restrictive conditions using a generalized inverse matrix, characterized by a simple calculation procedure and rapid convergence. Moreover, the effect on reconstructed images when errors are included in the projected data was examined. A simple computer simulation to reconstruct tomographic images using projected data from four directions was performed, and the usefulness of the method was confirmed. It contributes to the development of superhigh-speed tomographs in the future. (Kako, I.)

  17. Image reconstruction from projections and its application in emission computer tomography

    International Nuclear Information System (INIS)

    Kuba, Attila; Csernay, Laszlo

    1989-01-01

    Computer tomography is an imaging technique for producing cross-sectional images by reconstruction from projections. Its two main branches are called transmission and emission computer tomography (TCT and ECT, respectively). After an overview of the theory and practice of TCT and ECT, the first Hungarian ECT, the MB 9300 SPECT type consisting of a gamma camera and a Ketronic Medax N computer, is described, and its applications to radiological patient observations are discussed briefly. (R.P.) 28 refs.; 4 figs

  18. Optimized image acquisition for breast tomosynthesis in projection and reconstruction space

    OpenAIRE

    Chawla, Amarpreet S.; Lo, Joseph Y.; Baker, Jay A.; Samei, Ehsan

    2009-01-01

    Breast tomosynthesis has been an exciting new development in the field of breast imaging. While the diagnostic improvement via tomosynthesis is notable, the full potential of tomosynthesis has not yet been realized. This may be attributed to the dependency of the diagnostic quality of tomosynthesis on multiple variables, each of which needs to be optimized. Those include dose, number of angular projections, and the total angular span of those projections. In this study, the authors investigat...

  19. Impact of the zero-markup drug policy on hospitalisation expenditure in western rural China: an interrupted time series analysis.

    Science.gov (United States)

    Yang, Caijun; Shen, Qian; Cai, Wenfang; Zhu, Wenwen; Li, Zongjie; Wu, Lina; Fang, Yu

    2017-02-01

    To assess the long-term effects of the introduction of China's zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditures after reimbursement. An interrupted time series was used to evaluate the impact of the zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditure after reimbursement at primary health institutions in Fufeng County of Shaanxi Province, western China. Two regression models were developed. Monthly average hospitalisation expenditure and monthly average hospitalisation expenditure after reimbursement in primary health institutions were analysed covering the period 2009 through to 2013. For the monthly average hospitalisation expenditure, the increasing trend was slowed down after the introduction of the zero-markup drug policy (coefficient = -16.49, P = 0.009). For the monthly average hospitalisation expenditure after reimbursement, the increasing trend was slowed down after the introduction of the zero-markup drug policy (coefficient = -10.84, P = 0.064), and a significant decrease in the intercept was noted after the second intervention of changes in reimbursement schemes of the new rural cooperative medical insurance (coefficient = -220.64, P markup drug policy in western China. However, hospitalisation expenditure and hospitalisation expenditure after reimbursement were still increasing. More effective policies are needed to prevent these costs from continuing to rise. © 2016 John Wiley & Sons Ltd.
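The interrupted time series analysis described above is, in its standard form, a segmented regression with a level-change and a trend-change term at the intervention month. The sketch below fits such a model by ordinary least squares using only the standard library; the coefficients are illustrative stand-ins (deliberately near the magnitudes reported), not the study's data.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gauss-Jordan elimination with partial pivoting."""
    p = len(X[0])
    M = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(p)]
         + [sum(X[i][a] * y[i] for i in range(len(X)))] for a in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(p):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[c][p] / M[c][c] for c in range(p)]

def its_design(n_months, t0):
    """Segmented-regression design matrix: intercept, time trend,
    post-intervention level change, post-intervention trend change."""
    return [[1.0, float(t), float(t >= t0), float(t >= t0) * (t - t0)]
            for t in range(n_months)]

t0 = 24                                   # policy introduced at month 24
X = its_design(60, t0)
# Illustrative monthly expenditure: rising trend, then a level drop and
# a slowed trend after the intervention (noiseless for clarity).
b_true = [1000.0, 20.0, -220.0, -16.0]
y = [sum(xi * bi for xi, bi in zip(row, b_true)) for row in X]
b = ols(X, y)
```

In a real analysis the trend-change coefficient (here b[3]) is the quantity tested for significance, and noise makes the estimates uncertain rather than exact.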

  20. Decoding using back-project algorithm from coded image in ICF

    International Nuclear Information System (INIS)

    Jiang shaoen; Liu Zhongli; Zheng Zhijian; Tang Daoyuan

    1999-01-01

    The principle of coded imaging and its decoding in inertial confinement fusion is described briefly. The authors take the ring aperture microscope as an example and use a back-projection (BP) algorithm to decode the coded image. The decoding program has been used for numerical simulation. Simulations of two models were made, and the results show that the accuracy of the BP algorithm is high and the reconstruction effect is good. This indicates that the BP algorithm is applicable to decoding coded images in ICF experiments
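The back-projection principle can be illustrated with a minimal parallel-beam version (the paper's ring-aperture coded geometry is different, but the decoding idea is the same): each 1-D projection value is smeared back along its ray, and summing over all view angles makes the intensity pile up at the true source location.

```python
import math

def backproject(projections, angles, n, offset):
    """Simple unfiltered back-projection on an n x n grid centred at the
    origin: BP(x, y) = sum over angles of p[angle][round(x*cos + y*sin)]."""
    bp = [[0.0] * n for _ in range(n)]
    for a, t in enumerate(angles):
        c, s = math.cos(t), math.sin(t)
        for iy in range(n):
            for ix in range(n):
                x, y = ix - n // 2, iy - n // 2
                sbin = int(round(x * c + y * s)) + offset
                if 0 <= sbin < len(projections[a]):
                    bp[iy][ix] += projections[a][sbin]
    return bp

n = 9
angles = [i * math.pi / 18 for i in range(18)]  # 18 views over 180 degrees
offset = 8                                      # centre of the detector bins
px, py = 2, -1                                  # point source position
projections = []
for t in angles:
    p = [0.0] * 17
    p[int(round(px * math.cos(t) + py * math.sin(t))) + offset] = 1.0
    projections.append(p)

bp = backproject(projections, angles, n, offset)
```

The back-projected image peaks at the source pixel (every view contributes there), surrounded by the characteristic star-like streaks that filtering would suppress.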

  1. Improvement of image quality of holographic projection on tilted plane using iterative algorithm

    Science.gov (United States)

    Pang, Hui; Cao, Axiu; Wang, Jiazhou; Zhang, Man; Deng, Qiling

    2017-12-01

    Holographic image projection onto a tilted plane has important application prospects. In this paper, we propose a method to compute a phase-only hologram that can reconstruct a clear image on a tilted plane. By adding a constant phase to the target image on the inclined plane, the corresponding light field distribution on the plane parallel to the hologram plane is derived through a tilted diffraction calculation. The phase distribution of the hologram is then obtained by an iterative algorithm with amplitude and phase constraints. Simulation and optical experiments are performed to show the effectiveness of the proposed method.
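The iterative amplitude-and-phase-constrained algorithm is of the Gerchberg-Saxton family; a 1-D Fourier stand-in shows the mechanism (the paper uses a tilted-plane diffraction propagator in place of the plain DFT here, and a 2-D target):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (stdlib only)."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def gerchberg_saxton(target_amp, iters):
    """Iterate between hologram and image planes: impose the target
    amplitude (keeping phase) in the image plane, and the phase-only
    constraint (unit amplitude) in the hologram plane."""
    n = len(target_amp)
    H = [1 + 0j] * n                                   # flat start
    for _ in range(iters):
        img = dft(H, inverse=True)                     # to image plane
        img = [t * cmath.exp(1j * cmath.phase(v))
               for t, v in zip(target_amp, img)]       # amplitude constraint
        H = dft(img)                                   # back to hologram
        H = [cmath.exp(1j * cmath.phase(v)) for v in H]  # phase-only
    recon = dft(H, inverse=True)
    err = sum((abs(v) - t) ** 2 for v, t in zip(recon, target_amp))
    return H, err

a = 3 ** -0.5
target = [0.0, a, a, 0.0, a, 0.0, 0.0, 0.0]  # unit-energy binary pattern
_, err0 = gerchberg_saxton(target, iters=0)
_, err = gerchberg_saxton(target, iters=50)
```

The reconstruction error typically decreases over the iterations; the paper's constant-phase offset added to the target image plays a similar role to the free phase degrees of freedom exploited here.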

  2. Defense Advanced Research Projects Agency (DARPA) Agent Markup Language Computer Aided Knowledge Acquisition

    Science.gov (United States)

    2005-06-01

    thesaurus ontology and the GEDCOM genealogy ontology. The CALL thesaurus ontology was developed for monolingual thesauri. The Center for Army...corresponding relationships. The ontology design was based on the Guidelines for the Construction, Format and Management of Monolingual Thesauri...<rdfs:comment>Terminological list or short dictionary containing the terminology of a specific subject field or of related fields</rdfs:comment>

  3. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    Energy Technology Data Exchange (ETDEWEB)

    Häggström, Ida, E-mail: haeggsti@mskcc.org [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 and Department of Radiation Sciences, Umeå University, Umeå 90187 (Sweden); Beattie, Bradley J.; Schmidtlein, C. Ross [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)

    2016-06-15

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for

  4. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    International Nuclear Information System (INIS)

    Häggström, Ida; Beattie, Bradley J.; Schmidtlein, C. Ross

    2016-01-01

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for

  5. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3 dimensions, including sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Naturally Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as those employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  6. Pharmacometrics Markup Language (PharmML): Opening New Perspectives for Model Exchange in Drug Development

    Science.gov (United States)

    Swat, MJ; Moodie, S; Wimalaratne, SM; Kristensen, NR; Lavielle, M; Mari, A; Magni, P; Smith, MK; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, AC; Kaye, R; Keizer, R; Kloft, C; Kok, JN; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, HB; Parra-Guillen, ZP; Plan, E; Ribba, B; Smith, G; Trocóniz, IF; Yvon, F; Milligan, PA; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N

    2015-01-01

    The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps. PMID:26225259

  7. Free Trade Agreements and Firm-Product Markups in Chilean Manufacturing

    DEFF Research Database (Denmark)

    Lamorgese, A.R.; Linarello, A.; Warzynski, Frederic Michel Patrick

    In this paper, we use detailed information about firms' product portfolios to study how trade liberalization affects prices, markups and productivity. We document these effects using firm-product level data in Chilean manufacturing following two major trade agreements with the EU and the US. The dataset provides information about the value and quantity of each good produced by the firm, as well as the amount of exports. One additional and unique characteristic of our dataset is that it provides a firm-product level measure of the unit average cost. We use this information to compute a firm

  8. The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.

    Science.gov (United States)

    Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi

    2005-04-15

    Bioinformatics resources for glycomics are very poor compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html (contact: kikuchi@hydra.mki.co.jp).

  9. Pharmacometrics Markup Language (PharmML): Opening New Perspectives for Model Exchange in Drug Development.

    Science.gov (United States)

    Swat, M J; Moodie, S; Wimalaratne, S M; Kristensen, N R; Lavielle, M; Mari, A; Magni, P; Smith, M K; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, A C; Kaye, R; Keizer, R; Kloft, C; Kok, J N; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, H B; Parra-Guillen, Z P; Plan, E; Ribba, B; Smith, G; Trocóniz, I F; Yvon, F; Milligan, P A; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N

    2015-06-01

    The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps.

  10. cluML: A markup language for clustering and cluster validity assessment of microarray data.

    Science.gov (United States)

    Bolshakova, Nadia; Cunningham, Pádraig

    2005-01-01

    cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as the inability to store multiple clustering results (including biclustering) and validation results within a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can be effectively used for the representation of clustering and cluster validation results for other biomedical and physical data.

  11. A Converter from the Systems Biology Markup Language to the Synthetic Biology Open Language.

    Science.gov (United States)

    Nguyen, Tramy; Roehner, Nicholas; Zundel, Zach; Myers, Chris J

    2016-06-17

    Standards are important to synthetic biology because they enable exchange and reproducibility of genetic designs. This paper describes a procedure for converting between two standards: the Systems Biology Markup Language (SBML) and the Synthetic Biology Open Language (SBOL). SBML is a standard for behavioral models of biological systems at the molecular level. SBOL describes structural and basic qualitative behavioral aspects of a biological design. Converting SBML to SBOL enables a consistent connection between behavioral and structural information for a biological design. The conversion process described in this paper leverages Systems Biology Ontology (SBO) annotations to enable inference of a design's qualitative function.

  12. Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.

    Science.gov (United States)

    Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J

    2015-08-21

    In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).

  13. Detection of pulmonary nodules at paediatric CT: maximum intensity projections and axial source images are complementary

    International Nuclear Information System (INIS)

    Kilburn-Toppin, Fleur; Arthurs, Owen J.; Tasker, Angela D.; Set, Patricia A.K.

    2013-01-01

    Maximum intensity projection (MIP) images might be useful in helping to differentiate small pulmonary nodules from adjacent vessels on thoracic multidetector CT (MDCT). The aim was to evaluate the benefits of axial MIP images over axial source images for the paediatric chest in an interobserver variability study. We included 46 children with extra-pulmonary solid organ malignancy who had undergone thoracic MDCT. Three radiologists independently read 2-mm axial and 10-mm MIP image datasets, recording the number of nodules, size and location, overall time taken and confidence. There were 83 nodules (249 total reads among three readers) in 46 children (mean age 10.4 ± 4.98 years, range 0.3-15.9 years; 24 boys). Consensus read was used as the reference standard. Overall, three readers recorded significantly more nodules on MIP images (228 vs. 174; P < 0.05), improving sensitivity from 67% to 77.5% (P < 0.05) but at the cost of a lower positive predictive value (85% vs. 96%, P < 0.005). MIP images took significantly less time to read (71.6 ± 43.7 s vs. 92.9 ± 48.7 s; P < 0.005) but did not improve confidence levels. Using 10-mm axial MIP images for nodule detection in the paediatric chest enhances diagnostic performance, improving sensitivity and reducing reading time when compared with conventional axial thin-slice images. Axial MIP and axial source images are complementary in thoracic nodule detection. (orig.)
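    The slab collapse described above, building 10-mm MIP slabs from contiguous 2-mm axial sections, can be sketched with numpy. The function name and the toy nodule/vessel volume below are illustrative, not taken from the study:

```python
import numpy as np

def axial_mip(volume, slab_thickness=5):
    """Collapse groups of `slab_thickness` thin axial slices into
    maximum-intensity-projection (MIP) slabs along the z-axis."""
    z, y, x = volume.shape
    n_slabs = z // slab_thickness
    trimmed = volume[:n_slabs * slab_thickness]
    slabs = trimmed.reshape(n_slabs, slab_thickness, y, x)
    return slabs.max(axis=1)

# A small nodule (bright in one slice) survives the MIP, while a vessel
# (moderate intensity running through every slice) stays continuous,
# which is the visual cue that helps readers tell the two apart.
vol = np.zeros((10, 4, 4))
vol[3, 1, 1] = 100.0          # nodule: bright in a single slice
vol[:, 2, 2] = 60.0           # vessel: present in all slices
mip = axial_mip(vol, slab_thickness=5)
```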

  14. Motion nature projection reduces patient's psycho-physiological anxiety during CT imaging.

    NARCIS (Netherlands)

    Zijlstra, Emma; Hagedoorn, Mariët; Krijnen, Wim; van der Schans, Cees; Mobach, Mark P.

    2017-01-01

    A growing body of evidence indicates that natural environments can positively influence people. This study investigated whether the use of motion nature projection in computed tomography (CT) imaging rooms is effective in mitigating psycho-physiological anxiety (vs. no intervention) using a

  15. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    International Nuclear Information System (INIS)

    Chung, Hyekyun; Poulsen, Per Rugaard; Keall, Paul J.; Cho, Seungryong; Cho, Byungchul

    2016-01-01

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior
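    The simple linear interdimensional correlation model can be illustrated as a least-squares fit on a synthetic trajectory. The sinusoidal SI motion, the 0.3 correlation slope, and the noise level below are invented for illustration; the state-augmented model would add further regressors such as the SI velocity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic respiratory trajectory: SI motion drives correlated AP motion.
t = np.linspace(0, 20, 400)
si = 8.0 * np.sin(2 * np.pi * t / 4.0)               # mm, superior-inferior
ap = 0.3 * si + 1.5 + rng.normal(0, 0.2, t.size)     # mm, anterior-posterior

# Simple linear interdimensional model: AP ~ a + b * SI,
# with (a, b) estimated by least squares.
A = np.column_stack([np.ones_like(si), si])
coef, *_ = np.linalg.lstsq(A, ap, rcond=None)
ap_est = A @ coef
rmse = np.sqrt(np.mean((ap_est - ap) ** 2))
```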

  16. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hyekyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea and Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 138-736 (Korea, Republic of); Poulsen, Per Rugaard [Department of Oncology, Aarhus University Hospital, Nørrebrogade 44, 8000 Aarhus C (Denmark); Keall, Paul J. [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Cho, Seungryong [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141 (Korea, Republic of); Cho, Byungchul, E-mail: cho.byungchul@gmail.com, E-mail: bcho@amc.seoul.kr [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505 (Korea, Republic of)

    2016-08-15

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior

  17. Three-dimensional DNA image cytometry by optical projection tomographic microscopy for early cancer diagnosis.

    Science.gov (United States)

    Agarwal, Nitin; Biancardi, Alberto M; Patten, Florence W; Reeves, Anthony P; Seibel, Eric J

    2014-04-01

    Aneuploidy is typically assessed by flow cytometry (FCM) and image cytometry (ICM). We used optical projection tomographic microscopy (OPTM) for assessing cellular DNA content using absorption and fluorescence stains. OPTM combines some of the attributes of both FCM and ICM and generates isometric high-resolution three-dimensional (3-D) images of single cells. Although the depth of field of the microscope objective was in the submicron range, it was extended by scanning the objective's focal plane. The extended depth of field image is similar to a projection in a conventional x-ray computed tomography. These projections were later reconstructed using computed tomography methods to form a 3-D image. We also present an automated method for 3-D nuclear segmentation. Nuclei of chicken, trout, and triploid trout erythrocyte were used to calibrate OPTM. Ratios of integrated optical densities extracted from 50 images of each standard were compared to ratios of DNA indices from FCM. A comparison of mean square errors with thionin, hematoxylin, Feulgen, and SYTOX green was done. Feulgen technique was preferred as it showed highest stoichiometry, least variance, and preserved nuclear morphology in 3-D. The addition of this quantitative biomarker could further strengthen existing classifiers and improve early diagnosis of cancer using 3-D microscopy.
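    The integrated-optical-density ratio underlying DNA indexing can be sketched as follows. The uniform toy "nuclei", the background level, and the threshold are illustrative assumptions, not the OPTM pipeline:

```python
import numpy as np

def integrated_od(intensity, background=255.0, threshold=0.02):
    """Integrated optical density (IOD) of a stained nucleus image:
    OD = -log10(I / I0), summed over pixels above a small threshold."""
    od = -np.log10(np.clip(intensity, 1.0, None) / background)
    od[od < threshold] = 0.0
    return od.sum()

# Two toy 'nuclei': the second absorbs twice as strongly per pixel,
# so the IOD ratio against the reference approximates a DNA index of 2.
ref_img = np.full((10, 10), 200.0)
dbl_img = np.full((10, 10), 200.0 ** 2 / 255.0)   # per-pixel OD doubled
dna_index = integrated_od(dbl_img) / integrated_od(ref_img)
```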

  18. Direct image reconstruction with limited angle projection data for computerized tomography

    International Nuclear Information System (INIS)

    Inouye, T.

    1980-01-01

    The minimum angular range of projection data necessary to reconstruct a complete CT image is discussed. As is easily shown from the image reconstruction theorem, missing projection angles provide no data for the Fourier transform of the object along the corresponding angular directions. In a normal situation, the Fourier transform of an object image is analytic with respect to the two-dimensional orthogonal parameters. This characteristic enables the function to be uniquely extended outside the measured region by a form of analytic continuation in both parameters. In the method reported here, an object pattern confined within a finite range is shifted to a specified region so as to have complete orthogonal function expansions without changing the projection angle directions. These orthogonal functions are analytically extended to the missing projection angle range and the whole function is determined. This method does not include any estimation process, whose effectiveness is often seriously jeopardized by the presence of slight fluctuation components. Computer simulations were carried out to demonstrate the effectiveness of the method.

  19. Survey of on-road image projection with pixel light systems

    Science.gov (United States)

    Rizvi, Sadiq; Knöchelmann, Marvin; Ley, Peer-Phillip; Lachmayer, Roland

    2017-12-01

    HID, LED and laser-based high-resolution automotive headlamps, lately known as 'pixel light systems', are at the forefront of the developing technologies paving the way for autonomous driving. In addition to light distribution capabilities that outperform Adaptive Front Lighting and Matrix Beam systems, pixel light systems provide the possibility of image projection directly onto the street. The underlying objective is to improve the driving experience, in any given scenario, in terms of safety, comfort and interaction for all road users. The focus of this work is to conduct a short survey of this state-of-the-art image projection functionality. Holistic research on the image projection functionality can be divided into three major categories: scenario selection, technological development and evaluation design. Consequently, the work presented in this paper is divided into three short studies. Section 1 provides a brief introduction to pixel light systems and a justification for the approach adopted for this study. Section 2 deals with the selection of scenarios (and driving maneuvers) where image projection can play a critical role. Section 3 discusses high-power LED and LED-array based prototypes that are currently under development. Section 4 demonstrates results from an experiment conducted to evaluate the illuminance of an image space projected using a pixel light system prototype developed at the Institute of Product Development (IPeG). Findings from this work can help identify and advance future research relating to the further development of pixel light systems, scenario planning, examination of optimal light sources, behavioral response studies, etc.

  20. Fan-beam and cone-beam image reconstruction via filtering the backprojection image of differentiated projection data

    International Nuclear Information System (INIS)

    Zhuang Tingliang; Leng Shuai; Nett, Brian E; Chen Guanghong

    2004-01-01

    In this paper, a new image reconstruction scheme is presented based on Tuy's cone-beam inversion scheme and its fan-beam counterpart. It is demonstrated that Tuy's inversion scheme may be used to derive a new framework for fan-beam and cone-beam image reconstruction. In this new framework, images are reconstructed via filtering the backprojection image of differentiated projection data. The new framework is mathematically exact and is applicable to a general source trajectory provided the Tuy data sufficiency condition is satisfied. By choosing a piece-wise constant function for one of the components in the factorized weighting function, the filtering kernel is one dimensional, viz. the filtering process is along a straight line. Thus, the derived image reconstruction algorithm is mathematically exact and efficient. In the cone-beam case, the derived reconstruction algorithm is applicable to a large class of source trajectories where the pi-lines or the generalized pi-lines exist. In addition, the new reconstruction scheme survives the super-short scan mode in both the fan-beam and cone-beam cases provided the data are not transversely truncated. Numerical simulations were conducted to validate the new reconstruction scheme for the fan-beam case

  1. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    Science.gov (United States)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions that can only account for very limited classes of images. A more expressive sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method, showing promising capabilities over conventional regularization.
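    The workhorse shrinkage step in proximal algorithms for L1-norm objectives like the one above is the soft-thresholding operator. A minimal standalone sketch, not the authors' full BMSR program:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding):
    shrinks every coefficient toward zero by lam, zeroing small ones.
    This is the elementwise update applied to transform-domain
    coefficients in L1-regularized reconstruction."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

coeffs = np.array([-3.0, -0.2, 0.0, 0.5, 2.0])
shrunk = soft_threshold(coeffs, 0.5)
```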

  2. Minimizing image noise in on-board CT reconstruction using both kilovoltage and megavoltage beam projections

    International Nuclear Information System (INIS)

    Zhang Junan; Yin Fangfang

    2007-01-01

    We studied a recently proposed aggregated CT reconstruction technique which combines the complementary advantages of kilovoltage (kV) and megavoltage (MV) x-ray imaging. Various phantoms were imaged to study the effects of beam orientations and geometry of the imaging object on image quality of reconstructed CT. It was shown that the quality of aggregated CT was correlated with both kV and MV beam orientations and the degree of this correlation depended upon the geometry of the imaging object. The results indicated that the optimal orientations were those when kV beams pass through the thinner portion and MV beams pass through the thicker portion of the imaging object. A special preprocessing procedure was also developed to perform contrast conversions between kV and MV information prior to image reconstruction. The performance of two reconstruction methods, one filtered backprojection method and one iterative method, were compared. The effects of projection number, beam truncation, and contrast conversion on the CT image quality were investigated

  3. Learning binary code via PCA of angle projection for image retrieval

    Science.gov (United States)

    Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong

    2018-01-01

    With the benefits of low storage cost and high query speed, binary code representations are widely studied for efficient retrieval over large-scale data. In image hashing, learning a hashing function that embeds high-dimensional features into Hamming space is the key step for accurate retrieval. Principal component analysis (PCA) is widely used in compact hashing methods: most of these methods adopt PCA projection functions to map the original data into several dimensions of real values, and each projected dimension is then quantized into one bit by thresholding. The variances of the projected dimensions differ, however, and real-valued projection introduces substantial quantization error. To avoid this, we propose a cosine-similarity (angle) projection for each dimension; the angle projection preserves the original structure and yields more compact codes. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
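    The baseline this paper improves on, PCA projection followed by thresholding each projected dimension into one bit, can be sketched in a few lines of numpy. The data are toy values, and the cosine/angle projection and ITQ rotation themselves are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 32))            # toy feature vectors

# PCA projection to the top-k components of the zero-mean data,
# followed by sign thresholding into k-bit binary codes.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 8
proj = Xc @ Vt[:k].T                      # real-valued projections
codes = (proj > 0).astype(np.uint8)       # one bit per projected dimension

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))
```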

  4. A method for volumetric imaging in radiotherapy using single x-ray projection

    International Nuclear Information System (INIS)

    Xu, Yuan; Yan, Hao; Ouyang, Luo; Wang, Jing; Jiang, Steve B.; Jia, Xun; Zhou, Linghong; Cervino, Laura

    2015-01-01

    Purpose: It is an intriguing problem to generate an instantaneous volumetric image based on the corresponding x-ray projection. The purpose of this study is to develop a new method to achieve this goal via a sparse learning approach. Methods: To extract motion information hidden in projection images, the authors partitioned a projection image into small rectangular patches. The authors utilized a sparse learning method to automatically select patches that have a high correlation with principal component analysis (PCA) coefficients of a lung motion model. A model that maps the patch intensity to the PCA coefficients was built along with the patch selection process. Based on this model, a measured projection can be used to predict the PCA coefficients, which are then further used to generate a motion vector field and hence a volumetric image. The authors have also proposed an intensity baseline correction method based on the partitioned projection, in which the first and the second moments of pixel intensities at a patch in a simulated projection image are matched with those in a measured one via a linear transformation. The proposed method has been validated in both simulated data and real phantom data. Results: The algorithm is able to identify patches that contain relevant motion information such as the diaphragm region. It is found that an intensity baseline correction step is important to remove the systematic error in the motion prediction. For the simulation case, the sparse learning model reduced the prediction error for the first PCA coefficient to 5%, compared to the 10% error when sparse learning was not used, and the 95th percentile error for the predicted motion vector was reduced from 2.40 to 0.92 mm. In the phantom case with a regular tumor motion, the predicted tumor trajectory was successfully reconstructed with a 0.82 mm error for tumor center localization compared to a 1.66 mm error without using the sparse learning method. When the tumor motion
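    The intensity baseline correction, matching the first and second moments of patch intensities via a linear transformation, can be sketched directly. The patch size and intensity statistics below are illustrative:

```python
import numpy as np

def match_moments(simulated_patch, measured_patch):
    """Linearly transform a simulated projection patch so that its mean
    and standard deviation (first and second moments) match those of
    the measured patch, removing a systematic intensity baseline."""
    mu_s, sd_s = simulated_patch.mean(), simulated_patch.std()
    mu_m, sd_m = measured_patch.mean(), measured_patch.std()
    return (simulated_patch - mu_s) * (sd_m / sd_s) + mu_m

rng = np.random.default_rng(2)
sim = rng.normal(100.0, 5.0, size=(16, 16))   # simulated patch
meas = rng.normal(80.0, 9.0, size=(16, 16))   # measured patch
corrected = match_moments(sim, meas)
```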

  5. Calibration, Projection, and Final Image Products of MESSENGER's Mercury Dual Imaging System

    Science.gov (United States)

    Denevi, Brett W.; Chabot, Nancy L.; Murchie, Scott L.; Becker, Kris J.; Blewett, David T.; Domingue, Deborah L.; Ernst, Carolyn M.; Hash, Christopher D.; Hawkins, S. Edward; Keller, Mary R.; Laslo, Nori R.; Nair, Hari; Robinson, Mark S.; Seelos, Frank P.; Stephens, Grant K.; Turner, F. Scott; Solomon, Sean C.

    2018-02-01

    We present an overview of the operations, calibration, geodetic control, photometric standardization, and processing of images from the Mercury Dual Imaging System (MDIS) acquired during the orbital phase of the MESSENGER spacecraft's mission at Mercury (18 March 2011-30 April 2015). We also provide a summary of all of the MDIS products that are available in NASA's Planetary Data System (PDS). Updates to the radiometric calibration included slight modification of the frame-transfer smear correction, updates to the flat fields of some wide-angle camera (WAC) filters, a new model for the temperature dependence of narrow-angle camera (NAC) and WAC sensitivity, and an empirical correction for temporal changes in WAC responsivity. Further, efforts to characterize scattered light in the WAC system are described, along with a mosaic-dependent correction for scattered light that was derived for two regional mosaics. Updates to the geometric calibration focused on the focal lengths and distortions of the NAC and all WAC filters, NAC-WAC alignment, and calibration of the MDIS pivot angle and base. Additionally, two control networks were derived so that the majority of MDIS images can be co-registered with sub-pixel accuracy; the larger of the two control networks was also used to create a global digital elevation model. Finally, we describe the image processing and photometric standardization parameters used in the creation of the MDIS advanced products in the PDS, which include seven large-scale mosaics, numerous targeted local mosaics, and a set of digital elevation models ranging in scale from local to global.

  6. DEMAND FOR AND SUPPLY OF MARK-UP AND PLS FUNDS IN ISLAMIC BANKING: SOME ALTERNATIVE EXPLANATIONS

    OpenAIRE

    KHAN, TARIQULLAH

    1995-01-01

    Profit and loss-sharing (PLS) and bai’ al murabahah lil amir bil shira (mark-up) are the two parent principles of Islamic financing. The use of PLS is limited and that of mark-up overwhelming in the operations of the Islamic banks. Several studies provide different explanations for this phenomenon. The dominant among these is the moral hazard hypothesis. Some alternative explanations are given in the present paper. The discussion is based on both demand (user of funds) and supply (bank) side ...

  7. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).

    Directory of Open Access Journals (Sweden)

    Bergmann Frank T.

    2018-03-01

    The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.

  8. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    Science.gov (United States)

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    Mechanism models are developed for primary organic reactions, encoding the structural fragments undergoing substitution, addition, elimination, and rearrangement. In the proposed models, every structural component of a mechanistic pathway is represented with a flexible, fragment-based markup technique in XML syntax. A significant feature of the system is the encoding of electron movements along with the other components, such as charges, partial charges, half-bonded species, lone-pair electrons, free radicals, and reaction arrows, needed for a complete representation of a reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting it into SVG documents, enabling the layouts conventionally perceived by chemists. An automatic representation of complex reaction mechanism patterns is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.

  9. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.

  10. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    Science.gov (United States)

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models that take the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability to represent complex input-output relationships without predefining a set of basis functions, and to predict a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.
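    The two quantities the PMML 4.3 GPR representation is meant to carry, a predictive estimate plus its uncertainty, follow from the standard closed-form GP posterior. A minimal numpy sketch with an assumed RBF kernel and toy data, not the PMML encoding itself:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel matrix between 1-D input sets."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

# Noisy-free toy observations of sin(x); predict at one test input.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
xs = np.array([1.5])
noise = 1e-6                              # jitter for numerical stability

K = rbf(x, x) + noise * np.eye(x.size)
Ks = rbf(xs, x)
Kss = rbf(xs, xs)

# Posterior mean and variance: the predictive estimate and the
# uncertainty that confidence bounds are derived from.
alpha = np.linalg.solve(K, y)
mean = Ks @ alpha
var = Kss - Ks @ np.linalg.solve(K, Ks.T)
```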

  11. SBRML: a markup language for associating systems biology data with models.

    Science.gov (United States)

    Dada, Joseph O; Spasić, Irena; Paton, Norman W; Mendes, Pedro

    2010-04-01

    Research in systems biology is carried out through a combination of experiments and models. Several data standards have been adopted for representing models (Systems Biology Markup Language) and various types of relevant experimental data (such as FuGE and those of the Proteomics Standards Initiative). However, until now, there has been no standard way to associate a model and its entities to the corresponding datasets, or vice versa. Such a standard would provide a means to represent computational simulation results as well as to frame experimental data in the context of a particular model. Target applications include model-driven data analysis, parameter estimation, and sharing and archiving model simulations. We propose the Systems Biology Results Markup Language (SBRML), an XML-based language that associates a model with several datasets. Each dataset is represented as a series of values associated with model variables, and their corresponding parameter values. SBRML provides a flexible way of indexing the results to model parameter values, which supports both spreadsheet-like data and multidimensional data cubes. We present and discuss several examples of SBRML usage in applications such as enzyme kinetics, microarray gene expression and various types of simulation results. The XML Schema file for SBRML is available at http://www.comp-sys-bio.org/SBRML under the Academic Free License (AFL) v3.0.

  12. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar

    2018-03-19

    The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
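
    The five parts of a SED-ML description map naturally onto an XML document. The sketch below mirrors that structure with simplified, non-normative element names (the real schema uses listOf* containers and many more attributes):

```python
import xml.etree.ElementTree as ET

# Simplified stand-ins for the five SED-ML ingredients (not the real schema).
sed = ET.Element("sedML", level="1", version="3")
ET.SubElement(sed, "model", id="m1", source="model.xml")        # (i) which model
ET.SubElement(sed, "change", target="k1", newValue="0.5")       # (ii) modification
ET.SubElement(sed, "simulation", id="sim1", kind="timeCourse")  # (iii) procedure
ET.SubElement(sed, "dataGenerator", id="dg1", math="max(S1)")   # (iv) post-processing
ET.SubElement(sed, "plot2D", xDataRef="time", yDataRef="dg1")   # (v) output

tags = [child.tag for child in sed]
```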

  13. Development of an image reconstruction algorithm for a few number of projection data

    International Nuclear Information System (INIS)

    Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson

    2007-01-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation, and the properties of the gamma radiation. Because the code correctly implemented the proposed method, good results were obtained, encouraging us to proceed to the next step of the research: validating SPECTEM with experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making it easier to diagnose failures in industrial processes. (author)
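
    The multiplicative EM update at the heart of the method can be demonstrated on a toy problem. A random nonnegative matrix stands in for the few-view projection geometry (an illustrative assumption, not the geometry simulated with MICROSHIELD):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flattened activity image and a random nonnegative system matrix
# standing in for the few-view projection geometry.
n_pix, n_proj = 16, 8
A = rng.random((n_proj, n_pix))
x_true = np.zeros(n_pix)
x_true[5] = 2.0                      # a single hot source
p = A @ x_true                       # noiseless projection counts

# MLEM: multiplicative update, preserves nonnegativity of the estimate.
x = np.ones(n_pix)
sens = A.sum(axis=0)                 # sensitivity image, A^T 1
for _ in range(500):
    ratio = p / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / sens

resid = np.linalg.norm(A @ x - p) / np.linalg.norm(p)
```

    With only eight measurements for sixteen pixels the problem is ill-posed, but EM still drives the forward projection of the estimate onto the measured counts while keeping the activity nonnegative.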

  14. Development of an image reconstruction algorithm for a few number of projection data

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Wilson S.; Brandao, Luiz E. [Instituto de Engenharia Nuclear (IEN-CNEN/RJ), Rio de Janeiro , RJ (Brazil)]. E-mails: wilson@ien.gov.br; brandao@ien.gov.br; Braz, Delson [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programa de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear]. E-mail: delson@mailhost.lin.ufrj.br

    2007-07-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation, and the properties of the gamma radiation. Because the code correctly implemented the proposed method, good results were obtained, encouraging us to proceed to the next step of the research: validating SPECTEM with experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making it easier to diagnose failures in industrial processes. (author)

  15. Project Blue: Optical Coronagraphic Imaging Search for Terrestrial-class Exoplanets in Alpha Centauri

    Science.gov (United States)

    Morse, Jon; Project Blue team

    2018-01-01

    Project Blue is a coronagraphic imaging space telescope mission designed to search for habitable worlds orbiting the nearest Sun-like stars in the Alpha Centauri system. With a 45-50 cm baseline primary mirror size, Project Blue will perform a reconnaissance of the habitable zones of Alpha Centauri A and B in blue light and one or two longer wavelength bands to determine the hue of any planets discovered. Light passing through the off-axis telescope feeds into a coronagraphic instrument that forms the heart of the mission. Various coronagraph designs are being considered, such as phase induced amplitude apodization (PIAA), vector vortex, etc. Differential orbital image processing techniques will be employed to analyze the data for faint planets embedded in the residual glare of the parent star. Project Blue will advance our knowledge about the presence or absence of terrestrial-class exoplanets in the habitable zones and measure the brightness of zodiacal dust around each star, which will aid future missions in planning their observational surveys of exoplanets. It also provides on-orbit demonstration of high-contrast coronagraphic imaging technologies and techniques that will be useful for planning and implementing future space missions by NASA and other space agencies. We present an overview of the science goals, mission concept and development schedule. As part of our cooperative agreement with NASA, the Project Blue team intends to make the data available in a publicly accessible archive.

  16. A study on projection angles for an optimal image of PNS water's view on children

    International Nuclear Information System (INIS)

    Son, Sang Hyuk; Song, Young Geun; Kim, Sung Kyu; Hong, Sang Woo; Kim, Je Bong

    2007-01-01

    This study calculates the proper angle for an optimal image of the PNS Water's view in children by comparing and analyzing the PNS Water's projection angles of children and adults at each age. The study randomly selected 50 patients who visited the Medical Center from January to May 2005 and examined the incidence path of the central ray, taking PNS Water's and skull trans-lateral views in the Water's filming position while attaching lead-ball markers on the orbit, EAM, and acanthion of the patient's skull. We then calculated the incidence angles (angle A) of the line connecting the OML and the petrous ridge to the inferior margin of the maxilla on the patients' skull images, following the incidence path of the central ray. Finally, we analyzed the two graphs by age, deriving the patients' ideal images in the PNS Water's filming position taken by a digital camera and calculating the angle (angle B) between the OML and the image plate (IP). The angle between the OML and the IP is about 43° in 4-year-old children, which is larger than 37°; as age increases the angle decreases, reaching about 37° around 30 years of age. This result parallels the period of maxillary growth. A better-quality Water's image can be obtained for children taking the PNS Water's view if the projection angle is adjusted to account for maxillary growth at each age.

  17. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on obtained 2D features of the fingerprint. However, a fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another angle, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
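
    The core fringe-analysis step in such a system is phase retrieval. A standard N-step phase-shifting calculation is sketched below (a common choice; the optimum three-fringe-number method in the paper additionally unwraps the phase across fringe frequencies, which is omitted here):

```python
import numpy as np

def wrapped_phase(frames):
    """Wrapped phase from N equally phase-shifted fringe images.

    frames has shape (N, ...) with I_n = A + B*cos(phi + 2*pi*n/N).
    """
    n = frames.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(deltas), frames, axes=1)
    c = np.tensordot(np.cos(deltas), frames, axes=1)
    return np.arctan2(-s, c)

# Synthetic check: known phase profile, 4-step phase shifting.
phi = np.linspace(-3.0, 3.0, 50)     # stays inside (-pi, pi), so no wrapping
frames = np.stack([5 + 2 * np.cos(phi + 2 * np.pi * k / 4) for k in range(4)])
phi_rec = wrapped_phase(frames)
```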

  18. Image quality of microcalcifications in digital breast tomosynthesis: Effects of projection-view distributions

    OpenAIRE

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Goodsitt, Mitch; Carson, Paul L.; Hadjiiski, Lubomir; Schmitz, Andrea; Eberhard, Jeffrey W.; Claus, Bernhard E. H.

    2011-01-01

    Purpose: To analyze the effects of projection-view (PV) distribution on the contrast and spatial blurring of microcalcifications on the tomosynthesized slices (X-Y plane) and along the depth (Z) direction for the same radiation dose in digital breast tomosynthesis (DBT).Methods: A GE GEN2 prototype DBT system was used for acquisition of DBT scans. The system acquires PV images from 21 angles in 3° increments over a ±30° range. From these acquired PV images, the authors selected six subsets of...

  19. Imaging Local Polarization in Ferroelectric Thin Films by Coherent X-Ray Bragg Projection Ptychography

    Science.gov (United States)

    Hruszkewycz, S. O.; Highland, M. J.; Holt, M. V.; Kim, Dongjin; Folkman, C. M.; Thompson, Carol; Tripathi, A.; Stephenson, G. B.; Hong, Seungbum; Fuoss, P. H.

    2013-04-01

    We used x-ray Bragg projection ptychography (BPP) to map spatial variations of ferroelectric polarization in thin film PbTiO3, which exhibited a striped nanoscale domain pattern on a high-miscut (001) SrTiO3 substrate. By converting the reconstructed BPP phase image to picometer-scale ionic displacements in the polar unit cell, a quantitative polarization map was made that was consistent with other characterization. The spatial resolution of 5.7 nm demonstrated here establishes BPP as an important tool for nanoscale ferroelectric domain imaging, especially in complex environments accessible with hard x rays.

  20. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

    Science.gov (United States)

    Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-06-01

    To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17 and 0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
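
    The reconstruction idea can be sketched in a few lines of numpy. For illustration, the nonlinear warp-then-project forward model of the paper is replaced by a linear one (PCA directly on image differences and a random matrix as the projector), so a least-squares solve takes the place of the iterative GPU optimization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data: volumetric images at N breathing phases, flattened.
n_vox, n_phase = 60, 10
ref = rng.random(n_vox)                         # reference-phase image
modes = rng.standard_normal((n_vox, 2))         # two latent motion modes
coeffs = rng.standard_normal((n_phase, 2))
training = ref + coeffs @ modes.T

# PCA of the differences from the reference phase.
diffs = training - ref
U, S, Vt = np.linalg.svd(diffs - diffs.mean(0), full_matrices=False)
basis = Vt[:2]                                  # top-2 principal components

# One measured projection through an unknown volume.
P = rng.random((12, n_vox))                     # toy "ray-sum" operator
w_true = np.array([0.7, -1.3])
measured = P @ (ref + w_true @ basis)

# Recover the PCA weights whose computed projection matches the data.
A = P @ basis.T
w, *_ = np.linalg.lstsq(A, measured - P @ ref, rcond=None)
volume = ref + w @ basis                        # reconstructed volume
```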

  1. Neural network CT image reconstruction method for small amount of projection data

    CERN Document Server

    Ma, X F; Takeda, T

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. While the object function conventionally used for such a neural network is a sum of squared errors of the output data, we define an object function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. The method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available, in contrast to the well-developed medical applications.
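
    The distinctive choice here is the object function: residuals of the integral equation rather than output errors. Stripping away the network machinery, the same objective can be minimized by plain gradient descent on a toy discretization (a random matrix as the line-integral operator; an illustrative stand-in, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(2)

# Discretized line-integral system: few projections, more pixels.
n_proj, n_pix = 8, 20
A = rng.standard_normal((n_proj, n_pix))
x_true = rng.standard_normal(n_pix)
b = A @ x_true                          # simulated projection data

# Minimize ||A x - b||^2, the sum of squared residuals of the
# integral equation, by gradient descent.
x = np.zeros(n_pix)
lr = 1.0 / np.linalg.norm(A, 2) ** 2    # step size below 2 / sigma_max^2
for _ in range(5000):
    x -= lr * A.T @ (A @ x - b)

resid = np.linalg.norm(A @ x - b)       # residual of the integral equation
```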

  2. Neural network CT image reconstruction method for small amount of projection data

    International Nuclear Information System (INIS)

    Ma, X.F.; Fukuhara, M.; Takeda, T.

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. While the object function conventionally used for such a neural network is a sum of squared errors of the output data, we define an object function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. The method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available, in contrast to the well-developed medical applications.

  3. Kinematic Measurement of Knee Prosthesis from Single-Plane Projection Images

    Science.gov (United States)

    Hirokawa, Shunji; Ariyoshi, Shogo; Takahashi, Kenji; Maruyama, Koichi

    In this paper, the measurement of 3D motion from 2D perspective projections of a knee prosthesis is described. The technique reported by Banks and Hodge was further developed in this study. The estimation was performed in two steps. The first-step estimation was performed under the assumption of orthogonal projection. The second-step estimation was then carried out based upon the perspective projection to achieve a more accurate estimate. The simulation results demonstrated that the technique achieved sufficient accuracy of position/orientation estimation for prosthetic kinematics. We then applied our algorithm to CCD images, thereby examining the influence of various artifacts, possibly introduced through the imaging process, on the estimation accuracy. We found that accuracy in the experiment was influenced mainly by geometric discrepancies between the prosthesis component and the computer-generated model and by spatial inconsistencies between the coordinate axes of the positioner and those of the computer model. However, we verified that our algorithm could achieve proper and consistent estimation even for the CCD images.

  4. Extensible Markup Language: How Might It Alter the Software Documentation Process and the Role of the Technical Communicator?

    Science.gov (United States)

    Battalio, John T.

    2002-01-01

    Describes the influence that Extensible Markup Language (XML) will have on the software documentation process and subsequently on the curricula of advanced undergraduate and master's programs in technical communication. Recommends how curricula of advanced undergraduate and master's programs in technical communication ought to change in order to…

  5. Using Extensible Markup Language (XML) for the Single Source Delivery of Educational Resources by Print and Online: A Case Study

    Science.gov (United States)

    Walsh, Lucas

    2007-01-01

    This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…

  6. THE USE OF PUBLIC RELATIONS IN PROJECTING AN ORGANIZATION'S POSITIVE IMAGE

    Directory of Open Access Journals (Sweden)

    Ioana Olariu

    2017-07-01

    Full Text Available This article takes a theoretical approach to the importance of using public relations to help an organization project a positive image. The study of the impact information has on the image of organisations is an interesting research topic. Practice has proved that the image of institutions has a patrimonial value and is sometimes essential in raising their credibility. An image can be defined as the representation of certain attitudes, opinions or prejudices concerning a person, a group of persons or the public opinion concerning an institution. In other words, an image is the opinion of a person, a group of persons or the public opinion regarding that institution. All specialists agree that a negative image affects, sometimes to an incredible extent, the success of an institution. In the contemporary age, we cannot speak about public opinion without taking into consideration the mass media as a main agent in transmitting information to the public, with unlimited possibilities of influencing or forming it. The plan for the PR department starts with its own declaration of principles, which describes its roles and contribution to the organisation.

  7. Factors affecting the effectiveness of a projection dephaser in 2D gradient-echo imaging

    International Nuclear Information System (INIS)

    Bakker, Chris J G; Peters, Nicky H G M; Vincken, Koen L; Bom, Martijn van der; Seppenwoolde, Jan-Henry

    2007-01-01

    Projection dephasers are often used for background suppression and dynamic range improvement in thick-slab 2D imaging in order to promote the visibility of subslice structures, e.g., blood vessels and interventional devices. In this study, we explored the factors that govern the effectiveness of a projection dephaser by simulations and phantom experiments. This was done for the ideal case of a single subslice hyper- or hypointensity against a uniform background in the absence of susceptibility effects. Simulations and experiments revealed a pronounced influence of the slice profile, the nominal flip angle, the TE and TR of the acquisition, the size, intraslice position and MR properties of the subslice structure, and T1 of the background. The complexity of the ideal case points to the necessity of additional explorations when considering the use of projection dephasers under less ideal conditions, e.g., in the presence of tissue heterogeneities and susceptibility gradients.

  8. Neural network algorithm for image reconstruction using the grid friendly projections

    International Nuclear Information System (INIS)

    Cierniak, R.

    2011-01-01

    Full text: The presented paper describes the development of an original approach to the reconstruction problem using a recurrent neural network. In particular, the 'grid-friendly' angles of the performed projections are selected according to the discrete Radon transform (DRT) concept to decrease the number of projections required. The methodology of our approach is consistent with analytical reconstruction algorithms. The reconstruction problem is reformulated in our approach as an optimization problem. In the present concept, this problem is solved using a method based on maximum-likelihood methodology. The reconstruction algorithm proposed in this work is consequently adapted for the more practical discrete fan-beam projections. Computer simulation results show that the neural network reconstruction algorithm designed in this way improves the obtained results and outperforms conventional methods in reconstructed image quality. (author)

  9. The Ilac-Project Supporting Ancient Coin Classification by Means of Image Analysis

    Science.gov (United States)

    Kavelar, A.; Zambanini, S.; Kampel, M.; Vondrovec, K.; Siegl, K.

    2013-07-01

    This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the suture between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousand coins. Furthermore, this system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of obverse and reverse for the coin of interest. ILAC explores different computer vision techniques and their combinations for the use of image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploit certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given as well as an outlook on the next steps of the project.

  10. THE ILAC-PROJECT: SUPPORTING ANCIENT COIN CLASSIFICATION BY MEANS OF IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    A. Kavelar

    2013-07-01

    Full Text Available This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the suture between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousand coins. Furthermore, this system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of obverse and reverse for the coin of interest. ILAC explores different computer vision techniques and their combinations for the use of image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploit certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given as well as an outlook on the next steps of the project.

  11. Breast EIT using a new projected image reconstruction method with multi-frequency measurements.

    Science.gov (United States)

    Lee, Eunjung; Ts, Munkh-Erdene; Seo, Jin Keun; Woo, Eung Je

    2012-05-01

    We propose a new method to produce admittivity images of the breast for the diagnosis of breast cancer using electrical impedance tomography (EIT). Considering the anatomical structure of the breast, we designed an electrode configuration where current-injection and voltage-sensing electrodes are separated in such a way that internal current pathways are approximately along the tangential direction of an array of voltage-sensing electrodes. Unlike conventional EIT imaging methods where the number of injected currents is maximized to increase the total amount of measured data, current is injected only twice between two pairs of current-injection electrodes attached along the circumferential side of the breast. For each current injection, the induced voltages are measured from the front surface of the breast using as many voltage-sensing electrodes as possible. Although this electrode configuration allows us to measure induced voltages only on the front surface of the breast, they are more sensitive to an anomaly inside the breast since such an injected current tends to produce a more uniform internal current density distribution. Furthermore, the sensitivity of a measured boundary voltage between two equipotential lines on the front surface of the breast is improved since those equipotential lines are perpendicular to the primary direction of internal current streamlines. One should note that this novel data collection method is different from those of other frontal plane techniques such as the x-ray projection and T-scan imaging methods because we do not get any data on the plane that is perpendicular to the current flow. To reconstruct admittivity images using two measured voltage data sets, a new projected image reconstruction algorithm is developed. Numerical simulations demonstrate the frequency-difference EIT imaging of the breast. The results show that the new method is promising to accurately detect and localize small anomalies inside the breast.

  12. Breast EIT using a new projected image reconstruction method with multi-frequency measurements

    International Nuclear Information System (INIS)

    Lee, Eunjung; Ts, Munkh-Erdene; Seo, Jin Keun; Woo, Eung Je

    2012-01-01

    We propose a new method to produce admittivity images of the breast for the diagnosis of breast cancer using electrical impedance tomography (EIT). Considering the anatomical structure of the breast, we designed an electrode configuration where current-injection and voltage-sensing electrodes are separated in such a way that internal current pathways are approximately along the tangential direction of an array of voltage-sensing electrodes. Unlike conventional EIT imaging methods where the number of injected currents is maximized to increase the total amount of measured data, current is injected only twice between two pairs of current-injection electrodes attached along the circumferential side of the breast. For each current injection, the induced voltages are measured from the front surface of the breast using as many voltage-sensing electrodes as possible. Although this electrode configuration allows us to measure induced voltages only on the front surface of the breast, they are more sensitive to an anomaly inside the breast since such an injected current tends to produce a more uniform internal current density distribution. Furthermore, the sensitivity of a measured boundary voltage between two equipotential lines on the front surface of the breast is improved since those equipotential lines are perpendicular to the primary direction of internal current streamlines. One should note that this novel data collection method is different from those of other frontal plane techniques such as the x-ray projection and T-scan imaging methods because we do not get any data on the plane that is perpendicular to the current flow. To reconstruct admittivity images using two measured voltage data sets, a new projected image reconstruction algorithm is developed. Numerical simulations demonstrate the frequency-difference EIT imaging of the breast. The results show that the new method is promising to accurately detect and localize small anomalies inside the breast. (paper)

  13. The semantics of Chemical Markup Language (CML) for computational chemistry : CompChem

    Directory of Open Access Journals (Sweden)

    Phadungsukanan Weerapong

    2012-08-01

    Full Text Available This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.

  14. The semantics of Chemical Markup Language (CML) for computational chemistry : CompChem.

    Science.gov (United States)

    Phadungsukanan, Weerapong; Kraft, Markus; Townsend, Joe A; Murray-Rust, Peter

    2012-08-07

    This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.

  15. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    Science.gov (United States)

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
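
    The collective-fitting idea, one parameter set constrained by several datasets at once, can be illustrated without SBML or MPI. Below, a single shared rate constant is estimated by minimizing the summed squared error over two toy exponential-decay datasets (a hypothetical model, not SBML-PET-MPI's actual interface):

```python
import numpy as np

# Two noiseless datasets generated by the same rate constant k = 0.8
# but different amplitudes.
t = np.linspace(0.0, 5.0, 20)
k_true = 0.8
data1 = 2.0 * np.exp(-k_true * t)
data2 = 5.0 * np.exp(-k_true * t)

def total_sse(k):
    """Summed squared error of the shared-k model over both datasets."""
    r1 = data1 - 2.0 * np.exp(-k * t)
    r2 = data2 - 5.0 * np.exp(-k * t)
    return float((r1 ** 2).sum() + (r2 ** 2).sum())

# Crude grid search stands in for the tool's parallel optimizer.
ks = np.linspace(0.1, 2.0, 1901)
k_hat = ks[np.argmin([total_sse(k) for k in ks])]
```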

  16. Treating metadata as annotations: separating the content markup from the content

    Directory of Open Access Journals (Sweden)

    Fredrik Paulsson

    2007-11-01

The use of digital learning resources creates an increasing need for semantic metadata that describes whole resources as well as parts of resources. Traditionally, schemas such as the Text Encoding Initiative (TEI) have been used to add semantic markup to parts of resources. This is not sufficient for use in a "metadata ecology", where metadata is distributed, conforms to different Application Profiles, and is added by different actors. A new methodology is proposed in which metadata is "pointed in" as annotations, using XPointers and RDF. A suggestion for how such an infrastructure can be implemented, using existing open standards for metadata and for the web, is presented. We argue that such a methodology and infrastructure are necessary to realize the decentralized metadata infrastructure needed for a "metadata ecology".
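A toy sketch of the separation the abstract argues for, with plain character offsets standing in for XPointer expressions and dictionary records standing in for RDF statements (all names here are illustrative, not part of the proposed infrastructure):

```python
# Content and metadata live separately; each annotation points into the
# content rather than being embedded as markup within it.
content = "The mitochondrion is the powerhouse of the cell."

annotations = [
    {"target": (4, 17), "scheme": "dc:subject", "value": "organelle"},
    {"target": (0, len(content)), "scheme": "dc:language", "value": "en"},
]

def resolve(annotation, text):
    """Return the span of content an annotation describes."""
    start, end = annotation["target"]
    return text[start:end]

def metadata_for(span, anns):
    """All metadata whose target covers the given (start, end) span."""
    return [a for a in anns
            if a["target"][0] <= span[0] and span[1] <= a["target"][1]]

print(resolve(annotations[0], content))  # mitochondrion
print([a["value"] for a in metadata_for((4, 17), annotations)])
```

Because the annotations are external, different actors can add metadata under different Application Profiles without ever touching the content itself.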

  17. The markup is the model: reasoning about systems biology models in the Semantic Web era.

    Science.gov (United States)

    Kell, Douglas B; Mendes, Pedro

    2008-06-07

    Metabolic control analysis, co-invented by Reinhart Heinrich, is a formalism for the analysis of biochemical networks, and is a highly important intellectual forerunner of modern systems biology. Exchanging ideas and exchanging models are part of the international activities of science and scientists, and the Systems Biology Markup Language (SBML) allows one to perform the latter with great facility. Encoding such models in SBML allows their distributed analysis using loosely coupled workflows, and with the advent of the Internet the various software modules that one might use to analyze biochemical models can reside on entirely different computers and even on different continents. Optimization is at the core of many scientific and biotechnological activities, and Reinhart made many major contributions in this area, stimulating our own activities in the use of the methods of evolutionary computing for optimization.

  18. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes.
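Of the global methods listed, partial rank correlation is the easiest to sketch: rank-transform parameters and output, remove the linear effect of the other parameter from both, then correlate the residuals. This is a two-parameter toy on a made-up model, not the SBML-SAT implementation:

```python
import random

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def residuals(y, x):
    # Residuals of the least-squares line y ~ a + b*x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def prcc(target, other, output):
    # Correlate rank residuals after conditioning on the other parameter.
    rt, ro, ry = ranks(target), ranks(other), ranks(output)
    return pearson(residuals(rt, ro), residuals(ry, ro))

random.seed(1)
k1 = [random.uniform(0, 1) for _ in range(300)]
k2 = [random.uniform(0, 1) for _ in range(300)]
y = [a - b for a, b in zip(k1, k2)]       # toy model output
print(round(prcc(k1, k2, y), 2))          # close to 1: k1 drives the output
```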

  19. Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data

    Science.gov (United States)

    Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.

    2013-05-01

Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios including single spot measures, static time-series, web based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable, machine-readable, and does not require a dedicated parser. In addition, LANML is flexible, ensuring that future extensions of the format will remain backward compatible with analysis software. XML technology is at the heart of communication over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web based applications, educational outreach and efficient collaboration between research groups.
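Since LANML is plain XML, a static time-series needs no dedicated parser. The element names below are invented for this sketch (the actual LANML schema defines its own vocabulary for sites, instruments and samples):

```python
import xml.etree.ElementTree as ET

# Hypothetical LANML-like record: one site, one night of sky-brightness samples.
doc = """<lanml>
  <site lat="32.2" lon="-110.9"/>
  <series unit="mag/arcsec2">
    <sample utc="2013-05-01T03:00:00Z" value="21.30"/>
    <sample utc="2013-05-01T04:00:00Z" value="21.42"/>
    <sample utc="2013-05-01T05:00:00Z" value="21.36"/>
  </series>
</lanml>"""

root = ET.fromstring(doc)
values = [float(s.get("value")) for s in root.iter("sample")]
mean_sky_brightness = sum(values) / len(values)
print(round(mean_sky_brightness, 2))  # 21.36
```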

  20. Modeling of the positioning system and visual mark-up of historical cadastral maps

    Directory of Open Access Journals (Sweden)

    Tomislav Jakopec

    2013-03-01

The aim of the paper is to present the possibilities of positioning and visual markup of historical cadastral maps onto Google Maps using open source software. The corpus is stored in the Croatian State Archives in Zagreb, in the Maps Archive for Croatia and Slavonia. It is part of cadastral documentation that consists of cadastral material from the period of the first cadastral survey conducted in the Kingdom of Croatia and Slavonia from 1847 to 1877, and which is used extensively according to the data provided by the customer service of the Croatian State Archives. User needs on the one side and the possibilities of innovative implementation of ICT on the other have motivated the development of a system which would use digital copies of original cadastral maps and connect them with systems like Google Maps, thus both protecting the original materials and opening up new avenues of research related to the use of the originals. With this aim in mind, two cadastral map presentation models have been created. Firstly, there is a detailed display of the original, which enables its viewing using dynamic zooming. Secondly, an interactive display is facilitated by blending the cadastral maps with Google Maps; links between the coordinates of the digital and original plans are established through transformation. The transparency of the original can be changed, and the user can intensify the visibility of the underlying layer (Google map) or the top layer (cadastral map), which enables direct insight into parcel dynamics over a longer time-span. The system also allows for the mark-up of cadastral maps, which can lead to the development of a cumulative index of all terms found on cadastral maps.
The paper is an example of the implementation of ICT for providing new services, strengthening cooperation with the interested public and related institutions, familiarizing the public with the archival material, and offering new possibilities for

  1. Gene Fusion Markup Language: a prototype for exchanging gene fusion data.

    Science.gov (United States)

    Kalyana-Sundaram, Shanker; Shanmugam, Achiraman; Chinnaiyan, Arul M

    2012-10-16

    An avalanche of next generation sequencing (NGS) studies has generated an unprecedented amount of genomic structural variation data. These studies have also identified many novel gene fusion candidates with more detailed resolution than previously achieved. However, in the excitement and necessity of publishing the observations from this recently developed cutting-edge technology, no community standardization approach has arisen to organize and represent the data with the essential attributes in an interchangeable manner. As transcriptome studies have been widely used for gene fusion discoveries, the current non-standard mode of data representation could potentially impede data accessibility, critical analyses, and further discoveries in the near future. Here we propose a prototype, Gene Fusion Markup Language (GFML) as an initiative to provide a standard format for organizing and representing the significant features of gene fusion data. GFML will offer the advantage of representing the data in a machine-readable format to enable data exchange, automated analysis interpretation, and independent verification. As this database-independent exchange initiative evolves it will further facilitate the formation of related databases, repositories, and analysis tools. The GFML prototype is made available at http://code.google.com/p/gfml-prototype/. The Gene Fusion Markup Language (GFML) presented here could facilitate the development of a standard format for organizing, integrating and representing the significant features of gene fusion data in an inter-operable and query-able fashion that will enable biologically intuitive access to gene fusion findings and expedite functional characterization. A similar model is envisaged for other NGS data analyses.
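The interchange idea can be caricatured with a toy record and a validator. Every field name and value below is invented for illustration (the real GFML schema defines its own required elements for partner genes, breakpoints and evidence; the coordinates here are placeholders, not real genomic positions):

```python
import json

# Toy stand-in for a GFML-style fusion record.
fusion = {
    "fusion_id": "FUS-0001",
    "five_prime": {"gene": "TMPRSS2", "chrom": "chr21", "breakpoint": 1000000},
    "three_prime": {"gene": "ERG", "chrom": "chr21", "breakpoint": 2000000},
    "evidence": {"platform": "RNA-seq", "spanning_reads": 27},
}

REQUIRED = {"fusion_id", "five_prime", "three_prime", "evidence"}

def validate(record):
    # Machine-readable exchange only works if every record carries the
    # essential attributes; reject anything missing a required block.
    missing = REQUIRED - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return json.dumps(record, sort_keys=True)

serialized = validate(fusion)
print("TMPRSS2" in serialized)  # True
```

Standardized required attributes are what make independent verification and automated cross-database queries possible, which is the abstract's central argument.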

  2. Digital breast tomosynthesis: computer-aided detection of clustered microcalcifications on planar projection images

    International Nuclear Information System (INIS)

    Samala, Ravi K; Chan, Heang-Ping; Lu, Yao; Hadjiiski, Lubomir M; Wei, Jun; Helvie, Mark A

    2014-01-01

This paper describes a new approach to detect microcalcification clusters (MCs) in digital breast tomosynthesis (DBT) via its planar projection (PPJ) image. With IRB approval, two-view (cranio-caudal and mediolateral oblique views) DBTs of human subject breasts were obtained with a GE GEN2 prototype DBT system that acquires 21 projection angles spanning 60° in 3° increments. A data set of 307 volumes (154 human subjects) was divided by case into independent training (127 with MCs) and test sets (104 with MCs and 76 free of MCs). A simultaneous algebraic reconstruction technique with multiscale bilateral filtering (MSBF) regularization was used to enhance microcalcifications and suppress noise. During the MSBF regularized reconstruction, the DBT volume was separated into high frequency (HF) and low frequency components representing microcalcifications and larger structures. At the final iteration, maximum intensity projection was applied to the regularized HF volume to generate a PPJ image that contained MCs with increased contrast-to-noise ratio (CNR) and a reduced search space. High CNR objects in the PPJ image were extracted and labeled as microcalcification candidates. A convolutional neural network trained to recognize the image pattern of microcalcifications was used to classify the candidates into true calcifications versus tissue structures and artifacts. The remaining microcalcification candidates were grouped into MCs by dynamic conditional clustering based on an adaptive CNR threshold and radial distance criteria. False positive (FP) clusters were further reduced using the number of candidates in a cluster and the CNR and size of the microcalcification candidates. At 85% sensitivity, FP rates of 0.71 and 0.54 were achieved for view- and case-based sensitivity, respectively, compared to 2.16 and 0.85 achieved in DBT. The improvement was significant (p-value = 0.003) by JAFROC analysis. (paper)
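Two of the steps named above are simple enough to sketch directly: collapsing a volume with a maximum intensity projection, then keeping high-CNR pixels as candidates. This toy uses a global mean and standard deviation for the CNR, which is far cruder than the adaptive thresholding in the paper:

```python
# Collapse a (slice, row, col) volume along the slice axis, then flag
# pixels whose contrast-to-noise ratio exceeds a threshold.
def max_intensity_projection(volume):
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(sl[r][c] for sl in volume) for c in range(cols)]
            for r in range(rows)]

def cnr_candidates(image, threshold):
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    std = (sum((v - mean) ** 2 for v in flat) / len(flat)) ** 0.5
    return [(r, c) for r, row in enumerate(image) for c, v in enumerate(row)
            if std > 0 and (v - mean) / std > threshold]

volume = [
    [[10, 10, 10], [10, 10, 10], [10, 10, 10]],
    [[10, 10, 10], [10, 80, 10], [10, 10, 10]],  # bright speck, middle slice
    [[10, 10, 10], [10, 10, 10], [10, 10, 10]],
]
ppj = max_intensity_projection(volume)
print(cnr_candidates(ppj, 2.0))  # [(1, 1)]: only the speck survives
```

Note how the projection reduces the 3-D search to 2-D while preserving the bright speck, which is exactly why the PPJ image shrinks the candidate search space.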

  3. New K-edge-balanced contrast phantom for image quality assurance in projection radiography

    Science.gov (United States)

    Cresens, Marc; Schaetzing, Ralph

    2003-06-01

X-ray-absorber step-wedge phantoms serve in projection radiography to assess a detection system's overall exposure-related signal-to-noise ratio performance and contrast response. Data derived from a phantom image, created by exposing a step-wedge onto the image receptor, are compared with predefined acceptance criteria during periodic image quality assurance (QA). For contrast-related measurements, in particular, the x-ray tube potential requires accurate setting and low ripple, since small deviations from the specified kVp, causing energy spectrum changes, lead to significant image signal variation at high contrast ratios. A K-edge-balanced, rare-earth-metal contrast phantom can generate signals that are significantly more robust to the spectral variability and instability of exposure equipment in the field. The image signals from a hafnium wedge, for example, are up to eight times less sensitive to spectral fluctuations than those of today's copper phantoms for a 200:1 signal ratio. At 120 kVp (RQA 9), the hafnium phantom still preserves 70% of the subject contrast present at 75 kVp (RQA 5). A copper wedge preserves only 7% of its contrast over the same spectral range. Spectral simulations and measurements on prototype systems, as well as potential uses of this new class of phantoms (e.g., QA, single-shot exposure response characterization) are described.

  4. Respiratory compensation in projection imaging using a magnification and displacement model

    International Nuclear Information System (INIS)

    Crawford, C.R.; King, K.F.; Ritchie, C.J.; Godwin, J.D.

    1996-01-01

Respiratory motion during the collection of computed tomography (CT) projections generates structured artifacts and a loss of resolution that can render the scans unusable. This motion is problematic in scans of those patients who cannot suspend respiration, such as the very young or intubated patients. In this paper, the authors present an algorithm that can be used to reduce motion artifacts in CT scans caused by respiration. An approximate model for the effect of respiration is that the object cross section under interrogation experiences time-varying magnification and displacement along two axes. Using this model an exact filtered backprojection algorithm is derived for the case of parallel projections. The result is extended to generate an approximate reconstruction formula for fan-beam projections. Computer simulations and scans of phantoms on a commercial CT scanner validate the new reconstruction algorithms for parallel and fan-beam projections. Significant reduction in respiratory artifacts is demonstrated clinically when the motion model is satisfied. The method can be applied to projection data used in CT, single photon emission computed tomography (SPECT), positron emission tomography (PET), and magnetic resonance imaging (MRI).

  5. GPU acceleration of 3D forward and backward projection using separable footprints for X-ray CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Meng; Fessler, Jeffrey A. [Michigan Univ., Ann Arbor, MI (United States). Dept. of Electrical Engineering and Computer Science

    2011-07-01

Iterative 3D image reconstruction methods can improve image quality over conventional filtered back projection (FBP) in X-ray computed tomography. However, high computational costs deter the routine use of iterative reconstruction clinically. The separable footprint method for forward and back-projection simplifies the integrals over a detector cell in a way that is quite accurate and also has a relatively efficient CPU implementation. In this project, we implemented the separable footprints method for both forward and backward projection on a graphics processing unit (GPU) with NVIDIA's parallel computing architecture (CUDA). This paper describes our GPU kernels for the separable footprint method and simulation results. (orig.)

  6. Methods of X-ray CT image reconstruction from few projections

    International Nuclear Information System (INIS)

    Wang, H.

    2011-01-01

To improve the safety (low dose) and the productivity (fast acquisition) of an X-ray CT system, we want to reconstruct a high quality image from a small number of projections. The classical reconstruction algorithms generally fail since the reconstruction procedure is unstable and suffers from artifacts. A new approach based on the recently developed 'Compressed Sensing' (CS) theory assumes that the unknown image is in some sense 'sparse' or 'compressible', and the reconstruction is formulated as a non-linear optimization problem (TV/l1 minimization) that enhances this sparsity. Using the pixel (or voxel in 3D) as basis, applying the CS framework in CT usually requires a 'sparsifying' transform, combined with the 'X-ray projector' which applies to the pixel image. In this thesis, we have adapted a 'CT-friendly' radial basis of the Gaussian family, called 'blobs', to the CS-CT framework. The blob has better space-frequency localization properties than the pixel, and many operations, such as the X-ray transform, can be evaluated analytically and are highly parallelizable (on a GPU platform). Compared to the classical Kaiser-Bessel blob, the new basis has a multi-scale structure: an image is the sum of dilated and translated radial Mexican hat functions. Typical medical objects are compressible under this basis, so the sparse representation system used in ordinary CS algorithms is no longer needed. 2D simulations show that the existing TV and l1 algorithms are more efficient and the reconstructions have better visual quality than the equivalent approach based on the pixel or wavelet basis. The new approach has also been validated on 2D experimental data, where we have observed that in general the number of projections can be reduced to about 50%, without compromising the image quality. (author)
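The sparsity-enhancing reconstruction can be caricatured by its central ingredient, iterative soft-thresholding (ISTA) for min (1/2)||Ax - b||^2 + lam*||x||_1, here on a tiny dense matrix standing in for the X-ray projector. Nothing in this sketch is specific to the blob basis of the thesis:

```python
def soft(v, t):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0

def ista(A, b, lam, step, iters):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Gradient step on (1/2)||Ax - b||^2, then shrink toward zero.
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(len(A))]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# Underdetermined toy system: 2 "projections", 3 unknowns.
A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
b = [2.0, 0.0]   # generated by the sparse signal x = (2, 0, 0)
x = ista(A, b, lam=0.1, step=0.5, iters=300)
print([round(v, 2) for v in x])
```

The l1 penalty recovers a sparse solution (the first component, slightly shrunk toward zero) even though the system has fewer equations than unknowns, which is the reason CS can reconstruct from fewer projections.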

  7. Simulations of Aperture Synthesis Imaging Radar for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, C.; Belyey, V.

    2012-12-01

    EISCAT_3D is a project to build the next generation of incoherent scatter radars endowed with multiple 3-dimensional capabilities that will replace the current EISCAT radars in Northern Scandinavia. Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. To demonstrate the feasibility of the antenna configurations and the imaging inversion algorithms a simulation of synthetic incoherent scattering data has been performed. The simulation algorithm incorporates the ability to control the background plasma parameters with non-homogeneous, non-stationary components over an extended 3-dimensional space. Control over the positions of a number of separated receiving antennas, their signal-to-noise-ratios and arriving phases allows realistic simulation of a multi-baseline interferometric imaging radar system. The resulting simulated data is fed into various inversion algorithms. This simulation package is a powerful tool to evaluate various antenna configurations and inversion algorithms. Results applied to realistic design alternatives of EISCAT_3D will be described.

  8. Development and comparison of projection and image space 3D nodule insertion techniques

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan

    2016-04-01

This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. 24 physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques. These techniques include projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to those of the real nodules (R2 values were all >0.97 for both insertion techniques). These data imply that these techniques can confidently be used as a means of inserting virtual nodules into CT datasets. These techniques can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.

  9. Engaging stakeholder communities as body image intervention partners: The Body Project as a case example.

    Science.gov (United States)

    Becker, Carolyn Black; Perez, Marisol; Kilpela, Lisa Smith; Diedrichs, Phillippa C; Trujillo, Eva; Stice, Eric

    2017-04-01

    Despite recent advances in developing evidence-based psychological interventions, substantial changes are needed in the current system of intervention delivery to impact mental health on a global scale (Kazdin & Blase, 2011). Prevention offers one avenue for reaching large populations because prevention interventions often are amenable to scaling-up strategies, such as task-shifting to lay providers, which further facilitate community stakeholder partnerships. This paper discusses the dissemination and implementation of the Body Project, an evidence-based body image prevention program, across 6 diverse stakeholder partnerships that span academic, non-profit and business sectors at national and international levels. The paper details key elements of the Body Project that facilitated partnership development, dissemination and implementation, including use of community-based participatory research methods and a blended train-the-trainer and task-shifting approach. We observed consistent themes across partnerships, including: sharing decision making with community partners, engaging of community leaders as gatekeepers, emphasizing strengths of community partners, working within the community's structure, optimizing non-traditional and/or private financial resources, placing value on cost-effectiveness and sustainability, marketing the program, and supporting flexibility and creativity in developing strategies for evolution within the community and in research. Ideally, lessons learned with the Body Project can be generalized to implementation of other body image and eating disorder prevention programs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. ABrIL - Advanced Brain Imaging Lab : a cloud based computation environment for cooperative neuroimaging projects.

    Science.gov (United States)

    Neves Tafula, Sérgio M; Moreira da Silva, Nádia; Rozanski, Verena E; Silva Cunha, João Paulo

    2014-01-01

    Neuroscience is an increasingly multidisciplinary and highly cooperative field where neuroimaging plays an important role. Neuroimaging rapid evolution is demanding for a growing number of computing resources and skills that need to be put in place at every lab. Typically each group tries to setup their own servers and workstations to support their neuroimaging needs, having to learn from Operating System management to specific neuroscience software tools details before any results can be obtained from each setup. This setup and learning process is replicated in every lab, even if a strong collaboration among several groups is going on. In this paper we present a new cloud service model - Brain Imaging Application as a Service (BiAaaS) - and one of its implementation - Advanced Brain Imaging Lab (ABrIL) - in the form of an ubiquitous virtual desktop remote infrastructure that offers a set of neuroimaging computational services in an interactive neuroscientist-friendly graphical user interface (GUI). This remote desktop has been used for several multi-institution cooperative projects with different neuroscience objectives that already achieved important results, such as the contribution to a high impact paper published in the January issue of the Neuroimage journal. The ABrIL system has shown its applicability in several neuroscience projects with a relatively low-cost, promoting truly collaborative actions and speeding up project results and their clinical applicability.

  11. Public financing of research projects in Poland – its image and consequences?

    Directory of Open Access Journals (Sweden)

    Feldy Marzena

    2016-12-01

Both the size of appropriations as well as their distribution have had a profound impact on the shape and activities of the science sector. The creation of a fair system of distribution of public resources for research that will also facilitate the effective implementation of the pursued scientific policy goals represents a major challenge. The issue of determining the right proportions of individual distribution channels remains critical. Although this task is the responsibility of the State, establishing cooperation in this respect with the scientific community is desirable. The implementation of solutions that raise the concerns of scientists leads to system instability and reduced effectiveness, which is manifest, among others, in a lower level of indicators of scientific excellence and innovation in the country. These observations are pertinent to Poland, where the manner in which scientific institutes operate was changed under the 2009–2011 reform. A neoliberal operating model based on competitiveness and the rewarding of top-rated scientific establishments and scientists was implemented. In light of these facts, the initiation of research that will provide information on how the implemented changes are perceived by the scientific community seems appropriate. The aim of this article is in particular to present how the project model of financing laid down under the reform is perceived and what kind of image it has acquired among Polish scientists. In order to gain a comprehensive picture of the situation, both the rational and the emotional image were subject to analysis. The conclusions regarding the perception of the project model were drawn on the basis of empirical materials collected in a qualitative study, the specifics of which are presented in the chapter on methodology. Prior to that, the author discusses the basic models for the distribution of state support for science and characterises the most salient features of the

  12. Implementation of a high-resolution workstation for primary diagnosis of projection radiography images

    Science.gov (United States)

    Good, Walter F.; Herron, John M.; Maitz, Glenn S.; Gur, David; Miller, Stephen L.; Straub, William H.; Fuhrman, Carl R.

    1990-08-01

We designed and implemented a high-resolution video workstation as the central hardware component in a comprehensive multi-project program comparing the use of digital and film modalities. The workstation utilizes a 1.8 GByte real-time disk (RCI) capable of storing 400 full-resolution images and two Tektronix (GMA251) display controllers with 19" monitors (GMA2O2). The display is configured in a portrait format with a resolution of 1536 x 2048 x 8 bit, and operates at 75 Hz in a noninterlaced mode. Transmission of data through a 12 to 8 bit lookup table into the display controllers occurs at 20 MBytes/second (.35 seconds per image). The workstation allows brightness (level) and contrast (window) to be manipulated easily with a trackball, and various processing options can be selected using push buttons. Display of any of the 400 images is also performed at 20 MBytes/sec (.35 sec/image). A separate text display provides for the automatic display of patient history data and for a scoring form through which readers can interact with the system by means of a computer mouse. In addition, the workstation provides for the randomization of cases and for the immediate entry of diagnostic responses into a master database. Over the past year this workstation has been used for over 10,000 readings in diagnostic studies related to 1) image resolution; 2) film vs. soft display; 3) incorporation of patient history data into the reading process; and 4) usefulness of image processing.

  13. A new EU-funded project for enhanced real-time imaging for radiotherapy

    CERN Multimedia

    KTT Life Sciences Unit

    2011-01-01

ENTERVISION (European training network for digital imaging for radiotherapy) is a new Marie Curie Initial Training Network coordinated by CERN, which brings together multidisciplinary researchers to carry out R&D in physics techniques for application in the clinical environment.   ENTERVISION was established in response to a critical need to reinforce research in online 3D digital imaging and to train professionals in order to deliver some of the key elements for early detection and more precise treatment of tumours. The main goal of the project is to train researchers who will help contribute to technical developments in an exciting multidisciplinary field, where expertise from physics, medicine, electronics, informatics, radiobiology and engineering merges and catalyses the advancement of cancer treatment. With this aim in mind, ENTERVISION brings together ten academic institutes and research centres, as well as the two leading European companies in particle therapy, IBA and Siemens.

  14. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
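One of the listed DSP stages, white balance, has a classic low-complexity formulation (gray-world) that is easy to sketch. This is a generic illustration of that stage, not the algorithm proposed in the paper:

```python
# Gray-world white balance: scale each channel so its mean matches the
# global mean, assuming the scene averages out to neutral gray.
def gray_world(pixels):
    # pixels: list of (r, g, b) tuples
    n = len(pixels)
    means = [sum(p[ch] for p in pixels) / n for ch in range(3)]
    target = sum(means) / 3
    gains = [target / m for m in means]
    return [tuple(min(255, round(v * g)) for v, g in zip(p, gains))
            for p in pixels]

# A uniformly reddish patch becomes neutral gray after balancing.
balanced = gray_world([(200, 100, 100)] * 4)
print(balanced[0])  # (133, 133, 133)
```

Because the correction is three per-channel multiplies, it maps naturally onto the kind of low-cost hardware pipeline the paper targets.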

  15. Grid Databases for Shared Image Analysis in the MammoGrid Project

    CERN Document Server

    Amendolia, S R; Hauer, T; Manset, D; McClatchey, R; Odeh, M; Reading, T; Rogulin, D; Schottlander, D; Solomonides, T

    2004-01-01

    The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation and b) the ability to service real-world clinician queries across a distributed and federated database. The MammoGrid project will prove the viability of the Grid by harnessing its power to enable radiologists from geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK

  16. THE EUROSDR PROJECT "RADIOMETRIC ASPECTS OF DIGITAL PHOTOGRAMMETRIC IMAGES" – RESULTS OF THE EMPIRICAL PHASE

    Directory of Open Access Journals (Sweden)

    E. Honkavaara

    2012-09-01

    Full Text Available This article presents the empirical research carried out in the context of the multi-site EuroSDR project "Radiometric aspects of digital photogrammetric images" and provides highlights of the results. The investigations have considered the vicarious radiometric and spatial resolution validation and calibration of the sensor system, radiometric processing of the image blocks either by performing relative radiometric block equalization or into absolutely reflectance calibrated products, and finally aspects of practical applications on NDVI layer generation and tree species classification. The data sets were provided by Leica Geosystems ADS40 and Intergraph DMC and the participants represented stakeholders in National Mapping Authorities, software development and research. The investigations proved the stability and quality of evaluated imaging systems with respect to radiometry and optical system. The first new-generation methods for reflectance calibration and equalization of photogrammetric image block data provided promising accuracy and were also functional from the productivity and usability points of view. The reflectance calibration methods provided up to 5% accuracy without any ground reference. Application oriented results indicated that automatic interpretation methods will benefit from the optimal use of radiometrically accurate multi-view photogrammetric imagery.

  17. Strain Imaging of Nanoscale Semiconductor Heterostructures with X-Ray Bragg Projection Ptychography

    Science.gov (United States)

    Holt, Martin V.; Hruszkewycz, Stephan O.; Murray, Conal E.; Holt, Judson R.; Paskiewicz, Deborah M.; Fuoss, Paul H.

    2014-04-01

    We report the imaging of nanoscale distributions of lattice strain and rotation in complementary components of lithographically engineered epitaxial thin film semiconductor heterostructures using synchrotron x-ray Bragg projection ptychography (BPP). We introduce a new analysis method that enables lattice rotation and out-of-plane strain to be determined independently from a single BPP phase reconstruction, and we apply it to two laterally adjacent, multiaxially stressed materials in a prototype channel device. These results quantitatively agree with mechanical modeling and demonstrate the ability of BPP to map out-of-plane lattice dilatation, a parameter critical to the performance of electronic materials.

  18. Improved superficial brain hemorrhage visualization in susceptibility weighted images by constrained minimum intensity projection

    Science.gov (United States)

    Castro, Marcelo A.; Pham, Dzung L.; Butman, John

    2016-03-01

    Minimum intensity projection is a technique commonly used to display magnetic resonance susceptibility weighted images, allowing the observer to better visualize hemorrhages and vasculature. The technique displays the minimum intensity in a given projection within a thick slab, allowing different connectivity patterns to be easily revealed. Unfortunately, the low signal intensity of the skull within the thick slab can mask superficial tissues near the skull base and other regions. Because superficial microhemorrhages are a common feature of traumatic brain injury, this effect limits the ability to properly diagnose and follow up patients. In order to overcome this limitation, we developed a method to allow minimum intensity projection to properly display superficial tissues adjacent to the skull. Our approach is based on two brain masks, the largest of which includes extracerebral voxels. The analysis of the rind within both masks containing the actual brain boundary allows reclassification of those voxels initially missed in the smaller mask. Morphological operations are applied to guarantee accuracy and topological correctness, and the mean intensity within the mask is assigned to all outer voxels. This prevents bone from dominating superficial regions in the projection, enabling superior visualization of cortical hemorrhages and vessels.
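The core idea of the final step, replacing out-of-mask voxels by the mean in-mask intensity before projecting, can be sketched in a few lines of NumPy (a toy illustration under assumed conventions, not the authors' implementation, which also builds the masks with morphological operations):

```python
import numpy as np

def constrained_minip(volume, brain_mask, axis=0):
    """Minimum intensity projection restricted to a brain mask.

    Voxels outside the mask (e.g. low-intensity skull) are replaced by
    the mean intensity inside the mask, so they cannot dominate the
    projection and hide superficial hemorrhages near the skull.
    """
    vol = volume.astype(float).copy()
    vol[~brain_mask] = vol[brain_mask].mean()
    return vol.min(axis=axis)

# Toy volume: a dark "skull" slab (0) above brighter "brain" (100)
# with one dark superficial hemorrhage voxel (20).
vol = np.full((3, 4, 4), 100.0)
vol[0] = 0.0                       # skull slab
vol[1, 1, 1] = 20.0                # superficial hemorrhage
mask = np.ones_like(vol, bool)
mask[0] = False                    # mask excludes the skull

naive = vol.min(axis=0)            # skull zeros mask everything
constrained = constrained_minip(vol, mask)
```

In the naive projection every pixel is 0 (skull), while the constrained projection keeps the 20-intensity hemorrhage visible against the brain background.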

  19. Some key techniques of SPOT-5 image processing in new national land and resources investigation project

    Science.gov (United States)

    Xue, Changsheng; Li, Qingquan; Li, Deren

    2004-02-01

    In 1988, detailed information on land resources was surveyed in China. Fourteen years later, much had changed, making a second detailed land resource investigation necessary. Against this background, the New National Land and Resources Investigation Project in China, planned to last 12 years, was started in 1999. The project is directly administered by the Ministry of Land and Resources (MLR) and was organized and implemented by the China Geological Survey, the China Land Surveying and Planning Institute (CLSPI) and the Information Center of the MLR. It is a major cross-century project supported by central government finance, based on state and public interests and strategic considerations. To date, eight subprojects have produced preliminary results: "Land Use Dynamic Monitoring by Remote Sensing," "Arable Land Resource Investigation," "Rural Collective Land Property Right Investigation," "Establishment of Public Consulting Standardization of Cadastral Information," "Land Resource Fundamental Maps and Data Updating," "Urban Land Price Investigation and Intensive Utilization Potential Capacity Evaluation," "Farmland Classification, Gradation, and Evaluation," and "Land Use Database Construction at City or County Level." In this project, SPOT-1/2/4 and Landsat-7 TM data have been the main data sources for monitoring land use change; IRS, CBERS-2, and IKONOS data were also tested in small areas. In 2002, SPOT-5 data, with 2.5-meter resolution in the panchromatic band and 10-meter resolution in the multispectral bands, were applied to update the land use base map at the 1:10000 scale in 26 Chinese cities. The purpose of this paper is to share the experience of SPOT-5 image processing with colleagues.

  20. Image restoration by the method of convex projections: part 1 theory.

    Science.gov (United States)

    Youla, D C; Webb, H

    1982-01-01

    A projection operator onto a closed convex set in Hilbert space is one of the few examples of a nonlinear map that can be defined in simple abstract terms. Moreover, it minimizes distance and is nonexpansive, and therefore shares two of the more important properties of ordinary linear orthogonal projections onto closed linear manifolds. In this paper, we exploit the properties of these operators to develop several iterative algorithms for image restoration from partial data which permit any number of nonlinear constraints of a certain type to be subsumed automatically. Their common conceptual basis is as follows. Every known property of an original image f is envisaged as restricting it to lie in a well-defined closed convex set. Thus, m such properties place f in the intersection E(0) = ∩ E(i) of the corresponding closed convex sets E(1), E(2), ..., E(m). Given only the projection operators PE(i) onto the individual E(i)'s, i = 1, ..., m, we restore f by recursive means. Clearly, in this approach, the realization of the P(i)'s in a Hilbert space setting is one of the major synthesis problems. Section I describes the geometrical significance of the three main theorems in considerable detail, and most of the underlying ideas are illustrated with the aid of simple diagrams. Section II presents rules for the numerical implementation of 11 specific projection operators which are found to occur frequently in many signal-processing applications, and the Appendix contains proofs of all the major results.
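The recursive restoration scheme the abstract describes, composing the projections onto each constraint set, can be illustrated with two simple convex sets: a box of pixel bounds and a hyperplane fixing the total intensity. This is a toy sketch of the POCS principle, not any of the paper's 11 operators:

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Projection onto the closed convex set {x : lo <= x_i <= hi}."""
    return np.clip(x, lo, hi)

def proj_hyperplane(x, s):
    """Projection onto the hyperplane {x : sum(x) = s}."""
    return x + (s - x.sum()) / x.size

def pocs(x0, s, iters=200):
    """Alternating projections onto two convex sets (POCS).

    Each known property of the image restricts it to a closed convex
    set; composing the projections drives the iterate toward the
    intersection of all the sets.
    """
    x = x0.copy()
    for _ in range(iters):
        x = proj_box(proj_hyperplane(x, s))
    return x

x = pocs(np.array([3.0, -2.0, 0.5, 0.2]), s=2.0)
```

The limit point satisfies both constraints; with more sets, the same loop simply chains more projection operators.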

  1. Imaging Seismic Source Variations Using Back-Projection Methods at El Tatio Geyser Field, Northern Chile

    Science.gov (United States)

    Kelly, C. L.; Lawrence, J. F.

    2014-12-01

    During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50x50m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers back to their sources, assuming linear source to receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapsed images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and
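The basic back-projection step, shifting each receiver's trace by the predicted travel time from a candidate grid point and stacking, can be sketched with a delay-and-sum toy example (assumed straight ray paths and a uniform velocity; the study's actual method also uses cross-correlated signals and polarization filters):

```python
import numpy as np

v = 1.5                                   # assumed velocity (km/s)
dt = 0.01                                 # sample interval (s)
rx = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)  # receiver positions (km)
src = np.array([0.3, 0.7])                # hidden source to recover

# Synthetic records: an impulse at each receiver's travel time.
n = 500
traces = np.zeros((len(rx), n))
for i, r in enumerate(rx):
    t = np.linalg.norm(src - r) / v
    traces[i, int(round(t / dt))] = 1.0

# Back-projection: stack trace amplitudes at the predicted delays
# for every candidate grid point; the stack peaks at the source.
xs = np.linspace(0, 1, 21)
stack = np.zeros((21, 21))
for ix, x in enumerate(xs):
    for iy, y in enumerate(xs):
        for i, r in enumerate(rx):
            t = np.hypot(x - r[0], y - r[1]) / v
            stack[ix, iy] += traces[i, int(round(t / dt))]

ix, iy = np.unravel_index(stack.argmax(), stack.shape)
best = (xs[ix], xs[iy])                   # lands at the true source (0.3, 0.7)
```

Repeating the stack over sliding time windows yields the 4D (space plus time) source images described in the abstract.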

  2. Automatically Generating a Distributed 3D Battlespace Using USMTF and XML-MTF Air Tasking Order, Extensible Markup Language (XML) and Virtual Reality Modeling Language (VRML)

    National Research Council Canada - National Science Library

    Murray, Mark

    2000-01-01

    .... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...

  3. Automatically Generating a Distributed 3D Virtual Battlespace Using USMTF and XML-MTF Air Tasking Orders, Extensible Markup Language (XML) and Virtual Reality Modeling Language (VRML)

    National Research Council Canada - National Science Library

    Murray, Mark

    2000-01-01

    .... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...

  4. Mammography with and without radiolucent positioning sheets: Comparison of projected breast area, pain experience, radiation dose and technical image quality

    NARCIS (Netherlands)

    Timmers, Janine; ten Voorde, Marloes; van Engen, Ruben E.; van Landsveld-Verhoeven, Cary; Pijnappel, Ruud; Droogh-de Greve, Kitty; den Heeten, Gerard J.; Broeders, Mireille J. M.

    2015-01-01

    To compare projected breast area, image quality, pain experience and radiation dose between mammography performed with and without radiolucent positioning sheets. 184 women screened in the Dutch breast screening programme (May-June 2012) provided written informed consent to have one additional image

  5. Comparison analysis between filtered back projection and algebraic reconstruction technique on microwave imaging

    Science.gov (United States)

    Ramadhan, Rifqi; Prabowo, Rian Gilang; Aprilliyani, Ria; Basari

    2018-02-01

    The number of cancer and tumor victims grows each year, and cancer has become one of the leading causes of death worldwide. Cancer or tumor cells grow abnormally, taking over and damaging the surrounding tissues. In their early stages, cancers or tumors show no definite symptoms and can attack tissues deep inside the body, where they cannot be identified by visual human observation. Therefore, an early detection system that is cheap, quick, simple, and portable is essential to anticipate the further development of a cancer or tumor. Among the available modalities, microwave imaging is considered a cheaper, simpler, and more portable method. There are at least two simple image reconstruction algorithms, i.e., Filtered Back Projection (FBP) and the Algebraic Reconstruction Technique (ART), which have been adopted in some common modalities. In this paper, both algorithms are compared by reconstructing the image of an artificial tissue model (i.e., a phantom) that has two different dielectric distributions. We addressed two performance comparisons, namely quantitative and qualitative analysis. Qualitative analysis covers the smoothness of the image and the success in distinguishing dielectric differences by observing the image with the human eye. Quantitative analysis comprises histogram, Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) calculations. As a result, the quantitative parameters of FBP might show better values than those of ART. However, ART is likely more capable of distinguishing two different dielectric values than FBP, owing to its higher contrast and wider grayscale distribution.
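ART can be illustrated on a tiny system: each measured ray sum defines a hyperplane, and the Kaczmarz iteration projects the image estimate onto each hyperplane in turn. The sketch below (a minimal textbook version with a hypothetical 2x2 phantom, not the paper's microwave setup) also shows the PSNR metric used in the quantitative comparison:

```python
import numpy as np

def art(A, b, iters=200, relax=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz row-action method).

    Each measurement row a_i . x = b_i defines a hyperplane; ART
    repeatedly projects the current image estimate onto each one.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a, bi in zip(A, b):
            x += relax * (bi - a @ x) / (a @ a) * a
    return x

def psnr(ref, img):
    """Peak signal-to-noise ratio used to compare reconstructions."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

# Tiny 2x2 "phantom" (flattened) with row, column and diagonal ray sums.
phantom = np.array([1.0, 0.0, 0.0, 2.0])
A = np.array([[1, 1, 0, 0],                  # row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],                  # column sums
              [0, 1, 0, 1],
              [1, 0, 0, 1]], float)          # main diagonal
b = A @ phantom
recon = art(A, b)
```

Because this system is consistent with a unique solution, the iteration recovers the phantom; with real noisy microwave data, the relaxation parameter and stopping rule matter much more.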

  6. Prevalence of incidental findings on magnetic resonance imaging: Cuban project to map the human brain

    International Nuclear Information System (INIS)

    Hernandez Gonzalez, Gertrudis de los Angeles; Alvarez Sanchez, Marilet; Jordan Gonzalez, Jose

    2010-01-01

    To determine the prevalence of incidental findings in healthy subjects of the Cuban Human Brain Mapping Project sample, a retrospective descriptive study was performed of the magnetic resonance imaging (MRI) scans obtained between 2006 and 2007 from the 394 healthy subjects that make up the project sample, with an age range of 18 to 68 years (mean 33.12), of whom 269 (68.27%) were men and 125 (31.73%) were women. It was shown that 40.36% had one or more anomalies on MRI. In total, the number of incidental findings was 188; 23.6% were brain findings and 24.11% were non-brain findings, among the latter sinusopathy with 20.81% and maxillary polyps with 3.30%. The most prevalent brain findings were intrasellar arachnoidocele (11.93%), followed by prominence of the pituitary gland (5.84%), ventricular asymmetry (1.77%) and bone defects (1.02%). Other brain abnormalities found with very low prevalence had no pathological significance, except for two cases with brain tumors, which were immediately referred to a specialist. Incidental findings on MRI are common in the general population (40.36%), with sinusopathy and intrasellar arachnoidocele the most common findings. Asymptomatic individuals who have any type of structural abnormality provide invaluable information on the prevalence of these abnormalities in a presumably healthy population, which may be used as a reference for epidemiological studies.

  7. Automatic tracking of implanted fiducial markers in cone beam CT projection images

    International Nuclear Information System (INIS)

    Marchant, T. E.; Skalski, A.; Matuszewski, B. J.

    2012-01-01

    Purpose: This paper describes a novel method for simultaneous intrafraction tracking of multiple fiducial markers. Although the proposed method is generic and can be adopted for a number of applications including fluoroscopy based patient position monitoring and gated radiotherapy, the tracking results presented in this paper are specific to tracking fiducial markers in a sequence of cone beam CT projection images. Methods: The proposed method is accurate and robust thanks to utilizing the mean shift and random sampling principles, respectively. The performance of the proposed method was evaluated with qualitative and quantitative methods, using data from two pancreatic and one prostate cancer patients and a moving phantom. The ground truth, for quantitative evaluation, was calculated based on manual tracking performed by three observers. Results: The average dispersion of marker position error calculated from the tracking results for pancreas data (six markers tracked over 640 frames, 3840 marker identifications) was 0.25 mm (at isocenter), compared with an average dispersion for the manual ground truth estimated at 0.22 mm. For prostate data (three markers tracked over 366 frames, 1098 marker identifications), the average error was 0.34 mm. The estimated tracking error in the pancreas data was < 1 mm (2 pixels) in 97.6% of cases where nearby image clutter was detected and in 100.0% of cases with no nearby image clutter. Conclusions: The proposed method has accuracy comparable to that of manual tracking and, in combination with the proposed batch postprocessing, superior robustness. Marker tracking in cone beam CT (CBCT) projections is useful for a variety of purposes, such as providing data for assessment of intrafraction motion, target tracking during rotational treatment delivery, motion correction of CBCT, and phase sorting for 4D CBCT.
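The mean shift principle the method relies on amounts to repeatedly moving a search window to the intensity-weighted centroid of the pixels it covers, so the window climbs onto the nearest bright blob (here, a marker). A minimal 2D sketch on a synthetic Gaussian "marker" (illustrative only; the paper's tracker adds random sampling for robustness to clutter):

```python
import numpy as np

def mean_shift(img, start, radius=3, iters=20):
    """Mean-shift localization: move a window to the intensity-weighted
    centroid of the pixels it covers, until it settles on a blob."""
    y, x = start
    for _ in range(iters):
        cy, cx = int(round(y)), int(round(x))
        ys = np.arange(max(0, cy - radius), min(img.shape[0], cy + radius + 1))
        xs = np.arange(max(0, cx - radius), min(img.shape[1], cx + radius + 1))
        win = img[np.ix_(ys, xs)]
        total = win.sum()
        if total == 0:
            break
        y = (win.sum(axis=1) @ ys) / total   # weighted row centroid
        x = (win.sum(axis=0) @ xs) / total   # weighted column centroid
    return y, x

# Synthetic projection frame: one Gaussian "fiducial marker" blob.
yy, xx = np.mgrid[0:40, 0:40]
frame = np.exp(-((yy - 25) ** 2 + (xx - 12) ** 2) / 8.0)
pos = mean_shift(frame, start=(20, 10))   # converges to the blob at (25, 12)
```

Seeding each frame's search at the previous frame's result turns this single-frame localizer into a tracker.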

  8. Macro optical projection tomography for large scale 3D imaging of plant structures and gene activity.

    Science.gov (United States)

    Lee, Karen J I; Calder, Grant M; Hindle, Christopher R; Newman, Jacob L; Robinson, Simon N; Avondo, Jerome J H Y; Coen, Enrico S

    2017-01-01

    Optical projection tomography (OPT) is a well-established method for visualising gene activity in plants and animals. However, a limitation of conventional OPT is its upper limit on specimen size, which precludes its application to larger structures. To address this problem we constructed a macro version called Macro OPT (M-OPT). We apply M-OPT to 3D live imaging of gene activity in growing whole plants and to visualise structural morphology in large optically cleared plant and insect specimens up to 60 mm tall and 45 mm deep. We also show how M-OPT can be used to image gene expression domains in 3D within fixed tissue and to visualise gene activity in 3D in clones of growing young whole Arabidopsis plants. A further application of M-OPT is to visualise plant-insect interactions. Thus M-OPT provides an effective 3D imaging platform that allows the study of gene activity, internal plant structures and plant-insect interactions at a macroscopic scale. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  9. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction.

    Science.gov (United States)

    Nikazad, T; Davidi, R; Herman, G T

    2012-03-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches with similar aims, is a speed-up of more than an order of magnitude, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data.

  10. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.

    Science.gov (United States)

    Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank Thomas

    2015-09-04

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.

  11. Histoimmunogenetics Markup Language 1.0: Reporting next generation sequencing-based HLA and KIR genotyping.

    Science.gov (United States)

    Milius, Robert P; Heuer, Michael; Valiga, Daniel; Doroschak, Kathryn J; Kennedy, Caleb J; Bolon, Yung-Tsi; Schneider, Joel; Pollack, Jane; Kim, Hwa Ran; Cereb, Nezih; Hollenbach, Jill A; Mack, Steven J; Maiers, Martin

    2015-12-01

    We present an electronic format for exchanging data for HLA and KIR genotyping with extensions for next-generation sequencing (NGS). This format addresses NGS data exchange by refining the Histoimmunogenetics Markup Language (HML) to conform to the proposed Minimum Information for Reporting Immunogenomic NGS Genotyping (MIRING) reporting guidelines (miring.immunogenomics.org). Our refinements of HML include two major additions. First, NGS is supported by new XML structures to capture additional NGS data and metadata required to produce a genotyping result, including analysis-dependent (dynamic) and method-dependent (static) components. A full genotype, consensus sequence, and the surrounding metadata are included directly, while the raw sequence reads and platform documentation are externally referenced. Second, genotype ambiguity is fully represented by integrating Genotype List Strings, which use a hierarchical set of delimiters to represent allele and genotype ambiguity in a complete and accurate fashion. HML also continues to enable the transmission of legacy methods (e.g. site-specific oligonucleotide, sequence-specific priming, and Sequence Based Typing (SBT)), adding features such as allowing multiple group-specific sequencing primers, and fully leveraging techniques that combine multiple methods to obtain a single result, such as SBT integrated with NGS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
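The Genotype List String integration mentioned above uses a hierarchy of delimiters to encode ambiguity. A minimal parser sketch (the delimiter semantics below, '^' joining loci, '|' separating alternative genotypes, '+' joining the two gene copies, '~' joining in-phase alleles, and '/' separating ambiguous alleles, follow the GL String literature and are stated here as an assumption; the example alleles are hypothetical):

```python
def parse_gl(gl):
    """Parse a Genotype List (GL) String into nested Python lists,
    splitting on the hierarchical delimiters from least to most binding:
    '^' (loci), '|' (genotype alternatives), '+' (gene copies),
    '~' (in-phase alleles), '/' (allele ambiguity)."""
    return [[[[hap.split('/') for hap in copy.split('~')]
              for copy in geno.split('+')]
             for geno in locus.split('|')]
            for locus in gl.split('^')]

# One locus, one genotype: first copy is ambiguous between two alleles.
g = parse_gl("HLA-A*01:01/HLA-A*01:02+HLA-A*24:02")

# Two loci joined by '^' parse into two locus blocks.
multi = parse_gl("HLA-A*01:01+HLA-A*02:01^HLA-B*08:01+HLA-B*44:02")
```

Because the ambiguity structure is explicit in the string, no information is lost when a genotyping result is exchanged between systems.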

  12. Coding practice of the Journal Article Tag Suite extensible markup language

    Directory of Open Access Journals (Sweden)

    Sun Huh

    2014-08-01

    Full Text Available In general, the Journal Article Tag Suite (JATS) extensible markup language (XML) coding is processed automatically by an XML filtering program. In this article, the basic tagging in JATS is explained in terms of coding practice. A text editor that supports UTF-8 encoding is necessary to input JATS XML data that works in every language. Any character representable in Unicode can be used in JATS XML, and commonly available web browsers can be used to view JATS XML files. JATS XML files can refer to document type definitions, extensible stylesheet language files, and cascading style sheets, but they must specify the locations of those files. Tools for validating JATS XML files are available via the web sites of PubMed Central and ScienceCentral. Once these files are uploaded to a web server, they can be accessed from all over the world by anyone with a browser. Encoding an example article in JATS XML may help editors in deciding on the adoption of JATS XML.
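The basic tagging the article describes nests article metadata under `front` and the text under `body`. A minimal JATS-like fragment parsed with Python's standard library (illustrative only: a real JATS file declares its document type definition and many more elements, and validation against the DTD is done with dedicated tools, not `ElementTree`):

```python
import xml.etree.ElementTree as ET

# Minimal JATS-like fragment; UTF-8 text in any language is valid content.
doc = """<article>
  <front>
    <article-meta>
      <title-group>
        <article-title>Coding practice of JATS XML</article-title>
      </title-group>
    </article-meta>
  </front>
  <body><p>Any Unicode character can appear here.</p></body>
</article>"""

root = ET.fromstring(doc)   # parsing fails loudly on malformed tagging
title = root.findtext(".//article-title")
```

Even this well-formedness check catches the most common tagging mistakes (unclosed or misnested elements) before a file is submitted to a full validator.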

  13. Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.

    Science.gov (United States)

    Watanabe, Leandro; Myers, Chris J

    2016-08-19

    The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.

  14. A two-way interface between limited Systems Biology Markup Language and R

    Directory of Open Access Journals (Sweden)

    Radivoyevitch Tomas

    2004-12-01

    Full Text Available Abstract Background Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. Results A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML(), which maps this R model structure to SBML level 2, and read.SBML(), which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. Conclusions List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.

  15. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core

    Directory of Open Access Journals (Sweden)

    Gauges Ralph

    2015-06-01

    Full Text Available Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections of the elements in the diagram), and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded.

  16. SBMLeditor: effective creation of models in the Systems Biology Markup language (SBML).

    Science.gov (United States)

    Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas

    2007-03-06

    The need to build a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way, that maintains the validity of the final SBML file. SBMLeditor is written in JAVA using JCompneur, a library providing interfaces to easily display an XML document as a tree. This dramatically decreases the development time for a new XML editor. The possibility to include custom dialogs for different tags allows a lot of freedom for the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows an easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors.

  17. A two-way interface between limited Systems Biology Markup Language and R.

    Science.gov (United States)

    Radivoyevitch, Tomas

    2004-12-07

    Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML() which maps this R model structure to SBML level 2, and read.SBML() which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.
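The write/read round trip at the heart of the interface, mapping an in-memory model structure to SBML-style XML and back, can be mimicked in a few lines. This is an analogous Python sketch with hypothetical helper names, not the R package's actual write.SBML()/read.SBML(), and it covers only a species list rather than real SBML level 2:

```python
import xml.etree.ElementTree as ET

def write_model(model):
    """Serialize a minimal model structure to SBML-like XML."""
    sbml = ET.Element("sbml", level="2")
    species_list = ET.SubElement(sbml, "listOfSpecies")
    for sid, conc in model["species"].items():
        ET.SubElement(species_list, "species",
                      id=sid, initialConcentration=str(conc))
    return ET.tostring(sbml, encoding="unicode")

def read_model(xml):
    """Map the XML back to the same in-memory structure."""
    root = ET.fromstring(xml)
    return {"species": {s.get("id"): float(s.get("initialConcentration"))
                        for s in root.iter("species")}}

model = {"species": {"ATP": 3.0, "GTP": 0.5}}
round_trip = read_model(write_model(model))
```

As in the paper's purine metabolism test, checking that the model survives the round trip unchanged is the natural correctness test for such an interface.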

  18. A new instrument to assess physician skill at thoracic ultrasound, including pleural effusion markup.

    Science.gov (United States)

    Salamonsen, Matthew; McGrath, David; Steiler, Geoff; Ware, Robert; Colt, Henri; Fielding, David

    2013-09-01

    To reduce complications and increase success, thoracic ultrasound is recommended to guide all chest drainage procedures. Despite this, no tools currently exist to assess proceduralist training or competence. This study aims to validate an instrument to assess physician skill at performing thoracic ultrasound, including effusion markup, and examine its validity. We developed an 11-domain, 100-point assessment sheet in line with British Thoracic Society guidelines: the Ultrasound-Guided Thoracentesis Skills and Tasks Assessment Test (UGSTAT). The test was used to assess 22 participants (eight novices, seven intermediates, seven advanced) on two occasions while performing thoracic ultrasound on a pleural effusion phantom. Each test was scored by two blinded expert examiners. Validity was examined by assessing the ability of the test to stratify participants according to expected skill level (analysis of variance) and demonstrating test-retest and intertester reproducibility by comparison of repeated scores (mean difference [95% CI] and paired t test) and the intraclass correlation coefficient. Mean scores for the novice, intermediate, and advanced groups were 49.3, 73.0, and 91.5 respectively, which were all significantly different (P < .0001). There were no significant differences between repeated scores. Procedural training on mannequins prior to unsupervised performance on patients is rapidly becoming the standard in medical education. This study has validated the UGSTAT, which can now be used to determine the adequacy of thoracic ultrasound training prior to clinical practice. It is likely that its role could be extended to live patients, providing a way to document ongoing procedural competence.

  19. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    Science.gov (United States)

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
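
    The document model described above, which retains the original report text while linking structured elements back to portions of it, can be sketched as follows (hypothetical element names and offset-based linking; the paper's actual DTD is not reproduced here):

```python
import xml.etree.ElementTree as ET

report = "No evidence of pneumonia. Mild cardiomegaly."

def to_document_model(text, findings):
    """Wrap the original report text in an XML document whose structured
    component points back into the source via character offsets
    (a sketch of the idea, not the paper's DTD)."""
    doc = ET.Element("report")
    ET.SubElement(doc, "text").text = text
    structured = ET.SubElement(doc, "structured")
    for term, certainty in findings:
        start = text.lower().find(term)
        ET.SubElement(structured, "finding",
                      term=term, certainty=certainty,
                      start=str(start), end=str(start + len(term)))
    return doc

doc = to_document_model(report, [("pneumonia", "negated"),
                                 ("cardiomegaly", "present")])
xml = ET.tostring(doc, encoding="unicode")
print(xml)
```

    Querying the structured component then retrieves documents by finding, while the offsets allow the salient spans to be highlighted in the original text.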

  20. Computerized tomographic simulation compared with clinical mark-up in palliative radiotherapy: A prospective study

    International Nuclear Information System (INIS)

    Haddad, Peiman; Cheung, Fred; Pond, Gregory; Easton, Debbie; Cops, Frederick; Bezjak, Andrea; McLean, Michael; Levin, Wilfred; Billingsley, Susan; Williams, Diane; Wong, Rebecca

    2006-01-01

    Purpose To evaluate the impact of computed tomographic (CT) planning in comparison to clinical mark-up (CM) for palliative radiation of chest wall metastases. Methods and Materials In patients treated with CM for chest wall bone metastases (without conventional simulation/fluoroscopy), two consecutive planning CT scans were acquired with and without an external marker to delineate the CM treatment field. The two sets of scans were fused for evaluation of clinical tumor volume (CTV) coverage by the CM technique. Under-coverage was defined as the proportion of CTV not covered by the CM 80% isodose. Results Twenty-one treatments (ribs 17, sternum 2, and scapula 2) formed the basis of our study. Due to technical reasons, comparable data between CM and CT plans were available for 19 treatments only. CM resulted in a mean CTV under-coverage of 36%. Eleven sites (58%) had an under-coverage of >20%. Mean volume of normal tissues receiving ≥80% of the dose was 5.4% in CM and 9.3% in CT plans (p = 0.017). Based on dose-volume histogram comparisons, CT planning resulted in a change of treatment technique from direct apposition to a tangential pair in 7 of 19 cases. Conclusions CT planning demonstrated a 36% under-coverage of CTV with CM of ribs and chest wall metastases
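
    The under-coverage measure defined above (proportion of CTV not covered by the 80% isodose) reduces to a simple voxel count; a toy sketch, not the study's planning data:

```python
def under_coverage(ctv_mask, isodose80_mask):
    """Fraction of CTV voxels not covered by the 80% isodose
    (flattened boolean voxel lists; illustrative only)."""
    ctv_voxels = sum(ctv_mask)
    missed = sum(1 for c, d in zip(ctv_mask, isodose80_mask) if c and not d)
    return missed / ctv_voxels

# 25 CTV voxels, 9 of them outside the 80% isodose -> 36% under-coverage,
# matching the mean value reported in the study.
ctv = [1] * 25
dose = [1] * 16 + [0] * 9
print(under_coverage(ctv, dose))  # 0.36
```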

  1. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model

    NARCIS (Netherlands)

    Lee, Sangyeol; Reinhardt, Joseph M.; Cattin, Philippe C.; Abramoff, M.D.

    2010-01-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image

  2. Femtosecond few- to single-electron point-projection microscopy for nanoscale dynamic imaging

    Directory of Open Access Journals (Sweden)

    A. R. Bainbridge

    2016-03-01

Femtosecond electron microscopy produces real-space images of matter in a series of ultrafast snapshots. Pulses of electrons self-disperse under space-charge broadening, so without compression, the ideal operation mode is a single electron per pulse. Here, we demonstrate femtosecond single-electron point projection microscopy (fs-ePPM) in a laser-pump fs-e-probe configuration. The electrons have an energy of only 150 eV and take tens of picoseconds to propagate to the object under study. Nonetheless, we achieve a temporal resolution with a standard deviation of 114 fs (equivalent to a full-width at half-maximum of 269 ± 40 fs) combined with a spatial resolution of 100 nm, applied to a localized region of charge at the apex of a nanoscale metal tip induced by 30 fs 800 nm laser pulses at 50 kHz. These observations demonstrate that real-space imaging of reversible processes, such as tracking charge distributions, is feasible whilst maintaining femtosecond resolution. Our findings could find application as a characterization method which, depending on geometry, could resolve tens of femtoseconds and tens of nanometres. Dynamically imaging electric and magnetic fields and charge distributions on sub-micron length scales opens new avenues of ultrafast dynamics. Furthermore, through the use of active compression, such pulses are an ideal seed for few-femtosecond to attosecond imaging applications which will access sub-optical-cycle processes in nanoplasmonics.

  3. X-ray CT core imaging of Oman Drilling Project on D/V CHIKYU

    Science.gov (United States)

    Michibayashi, K.; Okazaki, K.; Leong, J. A. M.; Kelemen, P. B.; Johnson, K. T. M.; Greenberger, R. N.; Manning, C. E.; Harris, M.; de Obeso, J. C.; Abe, N.; Hatakeyama, K.; Ildefonse, B.; Takazawa, E.; Teagle, D. A. H.; Coggon, J. A.

    2017-12-01

We obtained X-ray computed tomography (X-ray CT) images for all cores (GT1A, GT2A, GT3A and BT1A) in Oman Drilling Project Phase 1 (OmanDP cores), since X-ray CT scanning is a routine measurement of the IODP measurement plan onboard Chikyu, which enables the non-destructive observation of the internal structure of core samples. X-ray CT images provide information about chemical compositions and densities of the cores and are useful for assessing sample locations and the quality of the whole-round samples. The X-ray CT scanner (Discovery CT 750HD, GE Medical Systems) on Chikyu scans and reconstructs the image of a 1.4 m section in 10 minutes and produces a series of scan images, each 0.625 mm thick. The X-ray tube (as an X-ray source) and the X-ray detector are installed inside the gantry opposite each other. The core sample is scanned in the gantry at a scanning rate of 20 mm/sec. The distribution of attenuation values mapped to an individual slice comprises the raw data that are used for subsequent image processing. Successive two-dimensional (2-D) slices of 512 x 512 pixels yield a representation of attenuation values in three-dimensional (3-D) voxels of 512 x 512 by 1600 in length. Data generated for each core consist of core-axis-normal planes (XY planes) of X-ray attenuation values with dimensions of 512 × 512 pixels in 9 cm × 9 cm cross-section, meaning that, at the dimensions of a core section, the resolution is 0.176 mm/pixel. X-ray intensity varies as a function of X-ray path length and of the linear attenuation coefficient (LAC) of the target material, which is itself a function of the material's chemical composition and density. The basic measure of attenuation, or radiodensity, is the CT number given in Hounsfield units (HU). CT numbers of air and water are -1000 and 0, respectively. Our preliminary results show that CT numbers of OmanDP cores are well correlated to gamma ray attenuation density (GRA density) as a function of chemical
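
    The CT number scale used above follows the standard Hounsfield definition, which fixes air at -1000 HU and water at 0 HU; a minimal sketch (the water LAC value is an assumed illustrative number, not a calibration constant from the scanner):

```python
def ct_number(mu, mu_water):
    """CT number in Hounsfield units (HU) from linear attenuation
    coefficients: HU = 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

MU_WATER = 0.195  # assumed illustrative LAC of water, cm^-1

print(ct_number(0.0, MU_WATER))       # air   -> -1000.0 HU
print(ct_number(MU_WATER, MU_WATER))  # water ->  0.0 HU
```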

  4. Effect of the number of projections on inverse Radon transform based image reconstruction using filtered back-projection for parallel beam transmission tomography

    International Nuclear Information System (INIS)

    Qureshi, S.A.; Mirza, S.M.; Arif, M.

    2007-01-01

This paper presents the effect of the number of projections on inverse Radon transform (IRT) estimation using the filtered back-projection (FBP) technique for parallel beam transmission tomography. The head phantom and the lung phantom have been used in this work. The filters used in this study include the Ram-Lak, Shepp-Logan, Cosine, Hamming and Hanning filters. The slices have been reconstructed by increasing the number of projections through parallel beam transmission tomography, keeping the projections uniformly distributed. The Euclidean and mean squared errors and the peak signal-to-noise ratio (PSNR) have been analyzed for their sensitivity as functions of the number of projections. It has been found that image quality improves with the number of projections, but at the cost of computer time. The error has been minimized to get the best approximation of the IRT as the number of projections is enhanced. The value of PSNR has been found to increase from 8.20 to 24.53 dB as the number of projections is raised from 5 to 180 for the head phantom. (author)
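
    The PSNR figure of merit used above is the standard definition, 10·log10(peak²/MSE); a minimal sketch on toy pixel lists (not the phantom data):

```python
import math

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images,
    given as flattened pixel lists (illustrative, not the authors' code)."""
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstruction)) / len(reference)
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [1.0, 0.5, 0.0, 0.25]
rec = [0.9, 0.6, 0.1, 0.35]   # each pixel off by 0.1 -> MSE = 0.01
print(round(psnr(ref, rec), 1))  # 20.0
```

    As in the study, reconstructing from more projections lowers the MSE and so raises the PSNR.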

  5. The Age-ility Project (Phase 1): Structural and functional imaging and electrophysiological data repository.

    Science.gov (United States)

    Karayanidis, Frini; Keuken, Max C; Wong, Aaron; Rennie, Jaime L; de Hollander, Gilles; Cooper, Patrick S; Ross Fulham, W; Lenroot, Rhoshel; Parsons, Mark; Phillips, Natalie; Michie, Patricia T; Forstmann, Birte U

    2016-01-01

Our understanding of the complex interplay between structural and functional organisation of brain networks is being advanced by the development of novel multi-modal analysis approaches. The Age-ility Project (Phase 1) data repository offers open access to structural MRI, diffusion MRI, and resting-state fMRI scans, as well as resting-state EEG recorded from the same community participants (n=131, 15-35 y, 66 male). Raw imaging and electrophysiological data as well as essential demographics are made available via the NITRC website. All data have been reviewed for artifacts using a rigorous quality control protocol, and detailed case notes are provided. Copyright © 2015. Published by Elsevier Inc.

  6. Design of the PET–MR system for head imaging of the DREAM Project

    International Nuclear Information System (INIS)

    González, A.J.; Conde, P.; Hernández, L.; Herrero, V.; Moliner, L.; Monzó, J.M.; Orero, A.; Peiró, A.; Rodríguez-Álvarez, M.J.; Ros, A.; Sánchez, F.; Soriano, A.; Vidal, L.F.; Benlloch, J.M.

    2013-01-01

In this paper we describe the overall design of a PET–MR system for head imaging within the framework of the DREAM Project, as well as the first detector module tests. The PET system design consists of 4 rings of 16 detector modules each, and it is expected to be integrated in a head-dedicated radio frequency coil of an MR scanner. The PET modules are based on monolithic LYSO crystals coupled by means of optical devices to an array of 256 Silicon Photomultipliers. These crystals preserve the scintillation light distribution and thus allow the exact photon impact position to be recovered through proper characterization of that distribution. Every module contains 4 Application Specific Integrated Circuits (ASICs) which return detailed information on several statistical moments of the light distribution. The preliminary tests carried out on this design and controlled by means of the ASICs have shown promising results regarding the suitability of hybrid PET–MR systems

  7. A Photometric Stereo Using Re-Projected Images for Active Stereo Vision System

    Directory of Open Access Journals (Sweden)

    Keonhwa Jung

    2017-10-01

In optical 3D shape measurement, stereo vision with structured light can measure 3D scan data with high accuracy and is used in many applications, but fine surface detail is difficult to obtain. On the other hand, photometric stereo can capture surface details but has disadvantages, in that its 3D data accuracy drops and it requires multiple light sources. When the two measurement methods are combined, more accurate 3D scan data and detailed surface features can be obtained at the same time. In this paper, we present a 3D optical measurement technique that uses re-projection of images to implement photometric stereo without an external light source. 3D scan data are enhanced by combining normal vectors from this photometric stereo method, and the result is evaluated against the ground truth.
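
    Photometric stereo, as combined with stereo vision above, classically recovers a surface normal per pixel from intensities under known light directions by solving L·n = I for a Lambertian surface; a minimal three-light sketch with toy directions and intensities (not the paper's re-projection setup):

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_normal(lights, intensities):
    """Classic 3-light photometric stereo for one Lambertian pixel:
    solve L n = I by Cramer's rule, then normalize n."""
    d = det3(lights)
    n = []
    for col in range(3):
        m = [row[:] for row in lights]
        for row in range(3):
            m[row][col] = intensities[row]
        n.append(det3(m) / d)
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]

# Lights along the x, y, z axes; intensities then equal the normal's components.
lights = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(solve_normal(lights, [0.0, 0.6, 0.8]))  # ~ [0.0, 0.6, 0.8]
```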

  8. Diaphragm motion quantification in megavoltage cone-beam CT projection images.

    Science.gov (United States)

    Chen, Mingqing; Siochi, R Alfredo

    2010-05-01

To quantify diaphragm motion in megavoltage (MV) cone-beam computed tomography (CBCT) projections. User-identified ipsilateral hemidiaphragm apex (IHDA) positions in two full exhale and inhale frames were used to create bounding rectangles in all other frames of a CBCT scan. The bounding rectangle was enlarged to create a region of interest (ROI). ROI pixels were associated with a cost function: the product of image gradients and a gradient direction matching function for an ideal hemidiaphragm determined from 40 training sets. A dynamic Hough transform (DHT) models a hemidiaphragm as a contour made of two parabola segments with a common vertex (the IHDA). The images within the ROIs are transformed into Hough space, where a contour's Hough value is the sum of the cost function over all contour pixels. Dynamic programming finds the optimal trajectory of the common vertex in Hough space subject to motion constraints between frames, and an active contour model further refines the result. Interpolated ray tracing converts the positions to room coordinates. Root-mean-square (RMS) distances between these positions and those resulting from an expert's identification of the IHDA were determined for 21 Siemens MV CBCT scans. Computation time on a 2.66 GHz CPU was 30 s. The average craniocaudal RMS error was 1.38 +/- 0.67 mm. While much larger errors occurred in a few near-sagittal frames on one patient's scans, adjustments to algorithm constraints corrected them. The DHT-based algorithm can compute IHDA trajectories immediately prior to radiation therapy on a daily basis using localization MVCBCT projection data. This has potential for calibrating external motion surrogates against diaphragm motion.

  9. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-10-01

University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short project-based engineering course with MATLAB applications, taken by Bachelor's degree students in Multimedia Engineering. The principal goal of all course lectures and hands-on laboratory activities was for the students to not only acquire image-specific technical skills but also a general knowledge of data analysis so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used for image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new. This approach has been used in STEM learning in recent decades. But there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course.
The results described in this paper show that those students who took part

  10. Developing priority criteria for magnetic resonance imaging: results from the Western Canada Waiting List project

    International Nuclear Information System (INIS)

    Hadorn, D.C.

    2002-01-01

The Western Canada Waiting List (WCWL) Project is a federally funded partnership of 19 organizations, including medical associations, health authorities, ministries of health and research organizations, that was created to develop tools to assist in assessing the relative urgency and priority of patients on waiting lists. The WCWL panel on magnetic resonance imaging (MRI) was 1 of 5 panels constituted under this project. The panel developed and tested a set of standardized clinical criteria for setting priorities among patients awaiting MRI. The criteria were applied to 407 patients in the 4 western provinces. Regression analysis was used to determine the set of criteria weights that collectively best predicted clinicians' overall ratings of patients' urgency for MRI. Reliability was assessed using clinicians' ratings of 6 hypothetical paper cases. The resulting weighted criteria accounted for about two-fifths of the observed variance in overall urgency ratings (R² = 39.9%). The panel then modified the criteria on the basis of regression results and clinical judgment. Most of the revised criteria items showed poor inter-rater reliability, but test-retest reliability (over a 2-month interval) was relatively good. Criteria items requiring probability judgments were a challenge for clinicians. Further development and testing of the tool appears warranted, although considerable question remains concerning the utility of priority criteria for MRI and other diagnostic services. (author)

  11. A 3D Kinematic Measurement of Knee Prosthesis Using X-ray Projection Images

    Science.gov (United States)

    Hirokawa, Shunji; Ariyoshi, Shogo; Hossain, Mohammad Abrar

We have developed a technique for estimating the 3D motion of a knee prosthesis from its 2D perspective projections. Because Fourier descriptors were used for compact representation of library templates and of contours extracted from the prosthetic X-ray images, the entire silhouette contour of each prosthetic component was required. This caused a problem: our algorithm did not function when the silhouettes of the tibial and femoral components overlapped with each other. Here we propose a novel method to overcome this, which was processed in two steps. First, the part of the silhouette contour missing due to overlap was interpolated using a free-form curve such as a Bezier curve. Then the first-step position/orientation estimation was performed. In the next step, a clipping window was set in the projective coordinate system so as to separate the overlapped silhouettes drawn using the first-step estimates. After that, a localized library whose templates were clipped in shape was prepared and the second-step estimation was performed. Computer model simulation demonstrated sufficient position/orientation estimation accuracy even for overlapped silhouettes, equivalent to that obtained without overlap.
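
    Fourier descriptors, used above for compact contour representation, are the DFT coefficients of the contour sampled as complex points z = x + iy; a minimal stdlib sketch on a toy square contour (not a prosthesis silhouette, and not the authors' implementation):

```python
import cmath

def fourier_descriptors(points):
    """DFT of a closed contour represented as complex points;
    the coefficients compactly encode the silhouette shape."""
    n = len(points)
    z = [complex(x, y) for x, y in points]
    return [sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) / n
            for k in range(n)]

# Unit-square contour sampled at its four corners.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
fd = fourier_descriptors(square)
# The k = 0 coefficient is the contour centroid.
print(fd[0])  # (0.5+0.5j)
```

    Truncating the higher-order coefficients gives the compact template representation; matching can then compare a few coefficients instead of full pixel contours.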

  12. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    Science.gov (United States)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves processing large volumes of data and is often time-consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses interrogation window projections instead of the window's two-dimensional field of luminous intensity. This simplification accelerates ZNCC computation by up to 28.8 times compared to ZNCC calculated directly, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
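
    The core idea, correlating 1D projections (row and column sums) of the interrogation window instead of its full 2D intensity field, can be sketched as follows (toy windows; the authors' implementation details are not reproduced):

```python
import math

def zncc(a, b):
    """Zero-normalized cross-correlation of two 1D signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

def projections(window):
    """Row and column sums of a 2D interrogation window; correlating
    these 1D projections instead of the full 2D field is the speed-up
    idea described above."""
    rows = [sum(r) for r in window]
    cols = [sum(c) for c in zip(*window)]
    return rows, cols

w1 = [[1, 2, 0], [0, 3, 1], [2, 0, 1]]
w2 = [[2, 4, 0], [0, 6, 2], [4, 0, 2]]  # w1 scaled by 2
r1, c1 = projections(w1)
r2, c2 = projections(w2)
print(zncc(r1, r2), zncc(c1, c2))  # ~ 1.0 1.0 (ZNCC is invariant to gain)
```

    For an N×N window this replaces an O(N²) correlation sum with two O(N) sums, which is the source of the reported speed-up.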

  13. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    Science.gov (United States)

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and on real resting-state fMRI data. Both the simulated and the real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating the activation induced by each task as well as by both tasks.
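
    The linear projection at the heart of such an approach removes from a signal its component along a known interfering direction, x - (x·v / v·v)·v; a minimal sketch on toy vectors (not fMRI data, and not the authors' full pipeline):

```python
def project_out(x, v):
    """Remove from x its component along v: x - (x.v / v.v) v.
    The residual is orthogonal to v, so sources aligned with v
    no longer contaminate it."""
    dot = sum(a * b for a, b in zip(x, v))
    vv = sum(b * b for b in v)
    return [a - (dot / vv) * b for a, b in zip(x, v)]

x = [3.0, 4.0, 5.0]   # mixed signal
v = [1.0, 0.0, 0.0]   # known interfering direction
r = project_out(x, v)
print(r)  # [0.0, 4.0, 5.0]
print(sum(a * b for a, b in zip(r, v)))  # 0.0 (residual orthogonal to v)
```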

  14. Comparison of pure and hybrid iterative reconstruction techniques with conventional filtered back projection: Image quality assessment in the cervicothoracic region

    International Nuclear Information System (INIS)

    Katsura, Masaki; Sato, Jiro; Akahane, Masaaki; Matsuda, Izuru; Ishida, Masanori; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni

    2013-01-01

Objectives: To evaluate the impact on image quality of three different image reconstruction techniques in the cervicothoracic region: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Methods: Forty-four patients underwent unenhanced standard-of-care clinical computed tomography (CT) examinations which included the cervicothoracic region with a 64-row multidetector CT scanner. Images were reconstructed with FBP, 50% ASIR-FBP blending (ASIR50), and MBIR. Two radiologists assessed the cervicothoracic region in a blinded manner for streak artifacts, pixelated blotchy appearances, critical reproduction of visually sharp anatomical structures (thyroid gland, common carotid artery, and esophagus), and overall diagnostic acceptability. Objective image noise was measured in the internal jugular vein. Data were analyzed using the sign test and pair-wise Student's t-test. Results: MBIR images had significantly lower quantitative image noise (8.88 ± 1.32) compared to ASIR images (18.63 ± 4.19, P 0.9 for ASIR vs. FBP for both readers). MBIR images were all diagnostically acceptable. Unique features of MBIR images included pixelated blotchy appearances, which did not adversely affect diagnostic acceptability. Conclusions: MBIR significantly improves image noise and streak artifacts of the cervicothoracic region over ASIR and FBP. MBIR is expected to enhance the value of CT examinations for areas where image noise and streak artifacts are problematic

  15. Image quality of microcalcifications in digital breast tomosynthesis: Effects of projection-view distributions

    International Nuclear Information System (INIS)

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Goodsitt, Mitch; Carson, Paul L.; Hadjiiski, Lubomir; Schmitz, Andrea; Eberhard, Jeffrey W.; Claus, Bernhard E. H.

    2011-01-01

Purpose: To analyze the effects of projection-view (PV) distribution on the contrast and spatial blurring of microcalcifications on the tomosynthesized slices (X-Y plane) and along the depth (Z) direction for the same radiation dose in digital breast tomosynthesis (DBT). Methods: A GE GEN2 prototype DBT system was used for acquisition of DBT scans. The system acquires PV images from 21 angles in 3 deg. increments over a ±30 deg. range. From these acquired PV images, the authors selected six subsets of PV images to simulate DBT of different angular ranges and angular increments. The number of PV images in each subset was fixed at 11 to simulate a constant total dose. These different PV distributions were subjectively divided into three categories: uniform group, nonuniform central group, and nonuniform extreme group with different angular ranges and angular increments. The simultaneous algebraic reconstruction technique (SART) was applied to each subset to reconstruct the DBT slices. A selective diffusion regularization method was employed to suppress noise. The image quality of microcalcifications in the reconstructed DBTs with different PV distributions was compared using the DBT scans of an American College of Radiology phantom and three human subjects. The contrast-to-noise ratio (CNR) and the full width at half maximum (FWHM) of the line profiles of microcalcifications within their in-focus DBT slices (parallel to detector plane) and the FWHMs of the interplane artifact spread function (ASF) in the Z-direction (perpendicular to detector plane) were used as image quality measures. Results: The results indicate that DBT acquired with a large angular range or, for an equal angular range, with a large fraction of PVs at large angles yielded superior ASF with smaller FWHM in the Z-direction. PV distributions with a narrow angular range or a large fraction of PVs at small angles had stronger interplane artifacts. In the X-Y focal planes, the effect of PV

  16. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    Science.gov (United States)

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models. Copyright 2010 Elsevier B.V. All rights reserved.

  17. Use of the geometric mean of opposing planar projections in pre-reconstruction restoration of SPECT images

    International Nuclear Information System (INIS)

    Boulfelfel, D.; Rangayyan, R.M.; Hahn, L.J.; Kloiber, R.

    1992-01-01

    This paper presents a restoration scheme for single photon emission computed tomography (SPECT) images that performs restoration before reconstruction (pre-reconstruction restoration) from planar (projection) images. In this scheme, the pixel-by-pixel geometric mean of each pair of opposing (conjugate) planar projections is computed prior to the reconstruction process. The averaging process is shown to help in making the degradation phenomenon less dependent on the distance of each point of the object from the camera. The restoration filters investigated are the Wiener and power spectrum equalization filters. (author)
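
    The pre-reconstruction averaging step described above is a pixel-by-pixel geometric mean of the two conjugate views; a minimal sketch on toy projection values (not SPECT data):

```python
import math

def conjugate_geometric_mean(anterior, posterior):
    """Pixel-by-pixel geometric mean of two opposing planar projections,
    computed before restoration and reconstruction."""
    return [math.sqrt(a * p) for a, p in zip(anterior, posterior)]

# Toy conjugate views of the same pixels, attenuated differently by depth;
# the geometric mean reduces the depth dependence of the combined counts.
print(conjugate_geometric_mean([4.0, 16.0, 25.0], [9.0, 4.0, 1.0]))
# [6.0, 8.0, 5.0]
```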

  18. Reconstruction of computed tomographic image from a few x-ray projections by means of accelerative gradient method

    International Nuclear Information System (INIS)

    Kobayashi, Fujio; Yamaguchi, Shoichiro

    1982-01-01

A method for the reconstruction of computed tomographic images was proposed to reduce the X-ray exposure dose. The method reconstructs images from a small number of X-ray projections by means of an accelerative gradient method. The procedures of computation are described. The algorithm of these procedures is simple, the convergence of the computation is fast, and the required memory capacity is small. Numerical simulation was carried out to confirm the validity of this method. A sample of simple shape was considered, projection data were given, and the images were reconstructed from 6 views. Good results were obtained, and the method is considered to be useful. (Kato, T.)

  19. Exact fan-beam image reconstruction algorithm for truncated projection data acquired from an asymmetric half-size detector

    International Nuclear Information System (INIS)

    Leng Shuai; Zhuang Tingliang; Nett, Brian E; Chen Guanghong

    2005-01-01

In this paper, we present a new algorithm designed for a specific data truncation problem in fan-beam CT. We consider a scanning configuration in which the fan-beam projection data are acquired from an asymmetrically positioned half-sized detector. Namely, the asymmetric detector only covers one half of the scanning field of view. Thus, the acquired fan-beam projection data are truncated at every view angle. If an explicit data rebinning process is not invoked, this data acquisition configuration will wreak havoc on many known fan-beam image reconstruction schemes including the standard filtered backprojection (FBP) algorithm and the super-short-scan FBP reconstruction algorithms. However, we demonstrate that a recently developed fan-beam image reconstruction algorithm which reconstructs an image via filtering a backprojection image of differentiated projection data (FBPD) survives the above fan-beam data truncation problem. Namely, we may exactly reconstruct the whole image object using the truncated data acquired in a full scan mode (2π angular range). We may also exactly reconstruct a small region of interest (ROI) using the truncated projection data acquired in a short-scan mode (less than 2π angular range). The most important characteristic of the proposed reconstruction scheme is that an explicit data rebinning process is not introduced. Numerical simulations were conducted to validate the new reconstruction algorithm

  20. The price of surgery: markup of operative procedures in the United States.

    Science.gov (United States)

    Gani, Faiz; Makary, Martin A; Pawlik, Timothy M

    2017-02-01

    Despite cost containment efforts, the price for surgery is not subject to any regulations. We sought to characterize and compare variability in pricing for commonly performed major surgical procedures across the United States. Medicare claims corresponding to eight major surgical procedures (aortic aneurysm repair, aortic valvuloplasty, carotid endarterectomy, coronary artery bypass grafting, esophagectomy, pancreatectomy, liver resection, and colectomy) were identified using the Medicare Provider Utilization and Payment Data Physician and Other Supplier Public Use File for 2013. For each procedure, total charges, Medicare-allowable costs, and total payments were recorded. A procedure-specific markup ratio (MR; ratio of total charges to Medicare-allowable costs) was calculated and compared between procedures and across states. Variation in MR was compared using a coefficient of variation (CoV). Among all providers, the median MR was 3.5 (interquartile range: 3.1-4.0). MR was noted to vary by procedure, ranging from 3.0 following colectomy to 6.0 following carotid endarterectomy (P < 0.001). MR also varied for the same procedure, varying the least after liver resection (CoV = 0.24), while coronary artery bypass grafting pricing demonstrated the greatest variation in MR (CoV = 0.53). Compared with the national average, MR varied by 36% between states, ranging from 1.8 to 13.0. Variation in MR was also noted within the same state, varying by 15% within the state of Arkansas (CoV = 0.15) compared with 51% within the state of Wisconsin (CoV = 0.51). Significant variation was noted in the price of surgery by procedure as well as between and within different geographical regions. Greater scrutiny and transparency in the price of surgery is required to promote cost containment.
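The two statistics used throughout the study are straightforward to compute: the markup ratio is total charges divided by Medicare-allowable costs, and the coefficient of variation is the standard deviation of the MRs divided by their mean. A minimal sketch with made-up charge and cost figures, not the study's data:

```python
import statistics

# Markup ratio (MR) = total charges / Medicare-allowable costs.
# Coefficient of variation (CoV) = stdev(MR) / mean(MR).
# The dollar figures below are invented for illustration only.
charges = [42_000.0, 55_000.0, 38_500.0, 61_250.0]
costs   = [12_000.0, 13_750.0, 11_000.0, 12_250.0]

mrs = [charge / cost for charge, cost in zip(charges, costs)]
cov = statistics.stdev(mrs) / statistics.mean(mrs)

print([round(m, 2) for m in mrs], round(cov, 3))
```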

  1. Projectables

    DEFF Research Database (Denmark)

    Rasmussen, Troels A.; Merritt, Timothy R.

    2017-01-01

    CNC cutting machines have become essential tools for designers and architects enabling rapid prototyping, model-building and production of high quality components. Designers often cut from new materials, discarding the irregularly shaped remains. We introduce ProjecTables, a visual augmented reality system for interactive packing of model parts onto sheet materials. ProjecTables enables designers to (re)use scrap materials for CNC cutting that would have been previously thrown away, at the same time supporting aesthetic choices related to wood grain, avoiding surface blemishes, and other relevant material properties. We conducted evaluations of ProjecTables with design students from Aarhus School of Architecture, demonstrating that participants could quickly and easily place and orient model parts reducing material waste. Contextual interviews and ideation sessions led to a deeper...

  2. A 3D imaging system integrating photoacoustic and fluorescence orthogonal projections for anatomical, functional and molecular assessment of rodent models

    Science.gov (United States)

    Brecht, Hans P.; Ivanov, Vassili; Dumani, Diego S.; Emelianov, Stanislav Y.; Anastasio, Mark A.; Ermilov, Sergey A.

    2018-03-01

    We have developed a preclinical 3D imaging instrument integrating photoacoustic tomography and fluorescence (PAFT) addressing known deficiencies in sensitivity and spatial resolution of the individual imaging components. PAFT is designed for simultaneous acquisition of photoacoustic and fluorescence orthogonal projections at each rotational position of a biological object, enabling direct registration of the two imaging modalities. Orthogonal photoacoustic projections are utilized to reconstruct large (21 cm³) volumes showing vascularized anatomical structures and regions of induced optical contrast with spatial resolution exceeding 100 µm. The major advantage of orthogonal fluorescence projections is significant reduction of background noise associated with transmitted or backscattered photons. The fluorescence imaging component of PAFT is used to boost detection sensitivity by providing low-resolution spatial constraint for the fluorescent biomarkers. PAFT performance characteristics were assessed by imaging optical and fluorescent contrast agents in tissue mimicking phantoms and in vivo. The proposed PAFT technology will enable functional and molecular volumetric imaging using fluorescent biomarkers, nanoparticles, and other photosensitive constructs mapped with high fidelity over robust anatomical structures, such as skin, central and peripheral vasculature, and internal organs.

  3. Multi-detector and systematic imaging system designed and developed within the New AGLAE project

    International Nuclear Information System (INIS)

    Pichon, L.; Pacheco, C.; Moignard, B.; Lemasson, Q.; Guillou, T.; Walter, Ph

    2013-01-01

    Full text: The New AGLAE project aims to establish a world-class facility for non-invasive analysis of Cultural Heritage materials. One of the objectives of the New AGLAE project is to increase the x-ray detection efficiency, making it possible to reduce the beam intensity, and thus the interaction with sensitive artworks, by a factor of ten. Multidisciplinary, the New AGLAE project will provide an exceptional, multipurpose beam line whose spatial resolution, beam stability, and multi-particle detection capability are much higher than those of the previous facility. The New AGLAE will give fundamental elements for understanding the structure of materials, their composition, properties, and change over time. One of the objectives of this project is to design and set up a new data acquisition system. To that end, the surface area and the number of PIXE detectors have been increased: a 10 mm² and a 30 mm³ Si(Li) detector, dedicated respectively to low- and high-energy measurements, were replaced by a cluster of five 50 mm² S.D.D. detectors. While this multi-detector arrangement enables the intensity of the incident beam to be decreased by one order of magnitude, implying less irradiation during analysis, it can also provide large and/or fast maps. To digitize the preamplifier pulses obtained from the detectors, a custom Digital X-ray Processor provides both digital data and control signals compatible with a multiparameter, multichannel system. This multiparameter system saves each event from the x-ray, gamma, and particle detectors, together with the X, Y position of the beam on the sample, as a list file. Furthermore, to draw several-cm-sized maps with a 20/40 µm resolution, the scanning of the area combines a fast vertical magnetic deflection of the beam with a mechanical movement of the target.
    To process the data, several pieces of in-house software have been developed or updated to rebuild any matrix of spectra, to re-bin maps, and to process a series of single spectra.

  4. Multidetector CT evaluation of central airways stenoses: Comparison of virtual bronchoscopy, minimal-intensity projection, and multiplanar reformatted images

    Directory of Open Access Journals (Sweden)

    Dinesh K Sundarakumar

    2011-01-01

    Aims: To evaluate the diagnostic utility of virtual bronchoscopy, multiplanar reformatted images, and minimal-intensity projection in assessing airway stenoses. Settings and Design: It was a prospective study involving 150 patients with symptoms of major airway disease. Materials and Methods: Fifty-six patients were selected for analysis based on the detection of major airway lesions on fiber-optic bronchoscopy (FB) or routine axial images. Comparisons were made between axial images, virtual bronchoscopy (VB), minimal-intensity projection (minIP), and multiplanar reformatted (MPR) images using FB as the gold standard. Lesions were evaluated in terms of degree of airway narrowing, distance from carina, length of the narrowed segment, and visualization of the airway distal to the lesion. Results: MPR images had the highest degree of agreement with FB (Κ = 0.76) in the depiction of degree of narrowing. minIP had the least degree of agreement with FB (Κ = 0.51) in this regard. Distal visualization was best on MPR images (84.2%), followed by axial images (80.7%), whereas FB could visualize the lesions in only 45.4% of the cases. VB had the best agreement with FB in assessing the segment length (Κ = 0.62). Overall there were no statistically significant differences in the measurement of the distance from the carina in the axial, minIP, and MPR images. MPR images had the highest overall degree of confidence, namely, 70.17% (n = 40). Conclusion: Three-dimensional reconstruction techniques were found to improve lesion evaluation compared with axial images alone. MPR images were the most useful for lesion evaluation and provided additional information useful for surgical and airway interventions in tracheobronchial stenosis. minIP was useful in the overall depiction of airway anatomy.

  5. Do state minimum markup/price laws work? Evidence from retail scanner data and TUS-CPS.

    Science.gov (United States)

    Huang, Jidong; Chriqui, Jamie F; DeLong, Hillary; Mirza, Maryam; Diaz, Megan C; Chaloupka, Frank J

    2016-10-01

    Minimum markup/price laws (MPLs) have been proposed as an alternative non-tax pricing strategy to reduce tobacco use and access. However, the empirical evidence on the effectiveness of MPLs in increasing cigarette prices is very limited. This study aims to fill this critical gap by examining the association between MPLs and cigarette prices. State MPLs were compiled from primary legal research databases and were linked to cigarette prices constructed from the Nielsen retail scanner data and the self-reported cigarette prices from the Tobacco Use Supplement to the Current Population Survey. Multivariate regression analyses were conducted to examine the association between MPLs and the major components of MPLs and cigarette prices. The presence of MPLs was associated with higher cigarette prices. In addition, cigarette prices were higher, above and beyond the higher prices resulting from MPLs, in states that prohibit below-cost combination sales; do not allow any distributing party to use trade discounts to reduce the base cost of cigarettes; prohibit distributing parties from meeting the price of a competitor, and prohibit distributing below-cost coupons to the consumer. Moreover, states that had total markup rates >24% were associated with significantly higher cigarette prices. MPLs are an effective way to increase cigarette prices. The impact of MPLs can be further strengthened by imposing greater markup rates and by prohibiting coupon distribution, competitor price matching, and use of below-cost combination sales and trade discounts.

  6. An Attempt to Construct a Database of Photographic Data of Radiolarian Fossils with the Hypertext Markup Language

    OpenAIRE

    磯貝, 芳徳; 水谷, 伸治郎; Yoshinori, Isogai; Shinjiro, Mizutani

    1998-01-01

    A collection of scanning electron micrographs of radiolarian fossils was made into a database using the Hypertext Markup Language. The database currently holds about one thousand photographs of radiolarian fossils, which can be searched from various viewpoints such as fossil name, geological age, and excavation site. The construction of this database demonstrates that the Hypertext Markup Language is effective when ordinary researchers, without special expertise in computers or databases, wish to build their own databases by themselves. Furthermore, a notable feature of a database built with the Hypertext Markup Language is that anyone can use it via the Internet. The construction process is described and the current status is reported, followed by a discussion of the ideas behind the database and its remaining problems.

  7. Physics meets fine arts: a project-based learning path on infrared imaging

    Science.gov (United States)

    Bonanno, A.; Bozzo, G.; Sapia, P.

    2018-03-01

    Infrared imaging represents a noninvasive tool for cultural heritage diagnostics, based on the capability of IR radiation to penetrate the most external layers of different objects (for example, paintings), revealing hidden features of artworks. From an educational viewpoint, this diagnostic technique offers teachers the opportunity to address manifold topics pertaining to the physics and technology of electromagnetic radiation, with particular emphasis on the nature of color and its physical correlates. Moreover, the topic provides interesting interdisciplinary bridges towards the human sciences. In this framework, we present a hands-on learning sequence, suitable for both high school students and university freshmen, inspired by the project-based learning (PBL) paradigm, designed and implemented in the context of an Italian national project aimed at offering students the opportunity to participate in educational activities within a real working context. In a preliminary test we involved a group of 23 high school students while they were working as apprentices in the Laboratory of Applied Physics for Cultural Heritage (ArcheoLab) at the University of Calabria. Consistently with the PBL paradigm, students were given well-defined practical goals to be achieved. As final goals they were asked (i) to construct and to test a low-cost device (based on a disused commercial camera) appropriate for performing educational-grade IR investigations on paintings, and (ii) to prepare a device working as a simple spectrometer (recycling the optical components of a disused video projector), suitable for characterizing various light sources in order to identify the most appropriate for infrared imaging. In the preliminary test, the proposed learning path proved effective in fostering students’ interest towards physics and its technological applications, especially because pupils perceived the context (i.e. physics applied to the protection and restoration of cultural

  8. High Resolution Mineral Mapping of the Oman Drilling Project Cores with Imaging Spectroscopy: Preliminary Results

    Science.gov (United States)

    Greenberger, R. N.; Ehlmann, B. L.; Kelemen, P. B.; Manning, C. E.; Teagle, D. A. H.; Harris, M.; Michibayashi, K.; Takazawa, E.

    2017-12-01

    The Oman Drilling Project provides an unprecedented opportunity to study the formation and alteration of oceanic crust and peridotite. Key to answering the main questions of the project are a characterization of the primary and secondary minerals present within the drill core and their spatial relationships. To that end, we used the Caltech imaging spectrometer system to scan the entire 1.5-km archive half of the core from all four gabbro and listvenite boreholes (GT1A, GT2A, GT3A, and BT1B) at 250 µm/pixel aboard the JAMSTEC Drilling Vessel Chikyu during the ChikyuOman core description campaign. The instrument measures the visible and shortwave infrared reflectance spectra of the rocks as a function of wavelength from 0.4 to 2.6 µm. This wavelength range is sensitive to many mineral groups, including hydrated minerals (phyllosilicates, zeolites, amorphous silica polytypes), carbonates, sulfates, and transition metals, most commonly iron-bearing mineralogies. To complete the measurements, the core was illuminated with a halogen light source and moved below the spectrometer at 1 cm/s by the Chikyu's Geotek track. Data are corrected and processed to reflectance using measurements of dark current and a spectralon calibration panel. The data provide a unique view of the mineralogy at high spatial resolution. Analysis of the images for complete downhole trends is ongoing. Thus far, a variety of minerals have been identified within their petrologic contexts, including but not limited to magnesite, dolomite, calcite, quartz (through an Si-OH absorption due to minor H2O), serpentine, chlorite, epidote, zeolites, mica (fuchsite), kaolinite, prehnite, gypsum, amphibole, and iron oxides. Further analysis will likely identify more minerals. Results include rapidly distinguishing the cations present within carbonate minerals and identifying minerals of volumetrically-low abundance within the matrix and veins of core samples. This technique, for example, accurately identifies

  9. Comparison of pure and hybrid iterative reconstruction techniques with conventional filtered back projection: Image quality assessment in the cervicothoracic region

    Energy Technology Data Exchange (ETDEWEB)

    Katsura, Masaki, E-mail: mkatsura-tky@umin.ac.jp [Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655 (Japan); Sato, Jiro; Akahane, Masaaki; Matsuda, Izuru; Ishida, Masanori; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni [Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655 (Japan)

    2013-02-15

    Objectives: To evaluate the impact on image quality of three different image reconstruction techniques in the cervicothoracic region: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Methods: Forty-four patients underwent unenhanced standard-of-care clinical computed tomography (CT) examinations which included the cervicothoracic region with a 64-row multidetector CT scanner. Images were reconstructed with FBP, 50% ASIR-FBP blending (ASIR50), and MBIR. Two radiologists assessed the cervicothoracic region in a blinded manner for streak artifacts, pixelated blotchy appearances, critical reproduction of visually sharp anatomical structures (thyroid gland, common carotid artery, and esophagus), and overall diagnostic acceptability. Objective image noise was measured in the internal jugular vein. Data were analyzed using the sign test and pair-wise Student's t-test. Results: MBIR images had significantly lower quantitative image noise (8.88 ± 1.32) compared to ASIR images (18.63 ± 4.19, P < 0.01) and FBP images (26.52 ± 5.8, P < 0.01). Significant improvements in streak artifacts of the cervicothoracic region were observed with the use of MBIR (P < 0.001 each for MBIR vs. the other two image data sets for both readers), while no significant difference was observed between ASIR and FBP (P > 0.9 for ASIR vs. FBP for both readers). MBIR images were all diagnostically acceptable. Unique features of MBIR images included pixelated blotchy appearances, which did not adversely affect diagnostic acceptability. Conclusions: MBIR significantly improves image noise and streak artifacts of the cervicothoracic region over ASIR and FBP. MBIR is expected to enhance the value of CT examinations for areas where image noise and streak artifacts are problematic.

  10. GEOREFERENCED IMAGE SYSTEM WITH DRONES

    Directory of Open Access Journals (Sweden)

    Héctor A. Pérez-Sánchez

    2017-07-01

    The general purpose of this paper is the development and implementation of a system that generates flight routes for a drone, acquires geographic location (GPS) information during the flight, and takes photographs of points of interest to create georeferenced images; these images are then used to generate KML (Keyhole Markup Language) files representing the geographic data in three dimensions for display in the Google Earth tool.
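The KML-generation step can be sketched with Python's standard library by writing one Placemark per georeferenced photograph. The photo names and coordinates below are invented example data, not output of the actual system:

```python
import xml.etree.ElementTree as ET

# Minimal KML (Keyhole Markup Language) writer for georeferenced photos,
# suitable for display in Google Earth. Example data is made up.
KML_NS = "http://www.opengis.net/kml/2.2"
photos = [
    ("photo_001.jpg", -99.1332, 19.4326, 2250.0),  # name, lon, lat, alt (m)
    ("photo_002.jpg", -99.1340, 19.4330, 2251.5),
]

ET.register_namespace("", KML_NS)
kml = ET.Element(f"{{{KML_NS}}}kml")
doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
for name, lon, lat, alt in photos:
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    coords = ET.SubElement(point, f"{{{KML_NS}}}coordinates")
    coords.text = f"{lon},{lat},{alt}"      # KML coordinate order is lon,lat,alt

xml_text = ET.tostring(kml, encoding="unicode")
print(xml_text[:60])
```

Note that KML stores coordinates in longitude, latitude, altitude order, the reverse of the lat/lon order most GPS receivers report.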

  11. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics, and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the Extensible Markup Language (XML) that facilitates integration into current information systems or coding software, taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.
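The kind of hierarchical XML representation described can be illustrated with a small hypothetical fragment; the element and attribute names here are invented for this sketch and are not the actual CEN/TC 251 or ICD-10 schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML fragment for a hierarchical coding system, in the
# spirit of the electronic ICD-10 representation described in the text.
SOURCE = """
<classification name="ICD-10" version="2005" language="en">
  <class code="J00-J06" title="Acute upper respiratory infections">
    <class code="J00" title="Acute nasopharyngitis [common cold]"/>
    <class code="J02" title="Acute pharyngitis">
      <class code="J02.9" title="Acute pharyngitis, unspecified"/>
    </class>
  </class>
</classification>
"""

root = ET.fromstring(SOURCE)
# Build a code -> title lookup by walking every <class> in the hierarchy.
index = {c.get("code"): c.get("title") for c in root.iter("class")}
print(index["J02.9"])
```

Because the hierarchy is ordinary XML, standard tools (XPath, XSLT, schema validation) can express coding rules and multi-language variants without custom parsers, which is the interoperability argument the abstract makes.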

  12. Data dictionary services in XNAT and the Human Connectome Project

    Science.gov (United States)

    Herrick, Rick; McKay, Michael; Olsen, Timothy; Horton, William; Florida, Mark; Moore, Charles J.; Marcus, Daniel S.

    2014-01-01

    The XNAT informatics platform is an open source data management tool used by biomedical imaging researchers around the world. An important feature of XNAT is its highly extensible architecture: users of XNAT can add new data types to the system to capture the imaging and phenotypic data generated in their studies. Until recently, XNAT has had limited capacity to broadcast the meaning of these data extensions to users, other XNAT installations, and other software. We have implemented a data dictionary service for XNAT, which is currently being used on ConnectomeDB, the Human Connectome Project (HCP) public data sharing website. The data dictionary service provides a framework to define key relationships between data elements and structures across the XNAT installation. This includes not just core data representing medical imaging data or subject or patient evaluations, but also taxonomical structures, security relationships, subject groups, and research protocols. The data dictionary allows users to define metadata for data structures and their properties, such as value types (e.g., textual, integers, floats) and valid value templates, ranges, or field lists. The service provides compatibility and integration with other research data management services by enabling easy migration of XNAT data to standards-based formats such as the Resource Description Framework (RDF), JavaScript Object Notation (JSON), and Extensible Markup Language (XML). It also facilitates the conversion of XNAT's native data schema into standard neuroimaging vocabularies and structures. PMID:25071542
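The standards-based migration the data dictionary service enables can be sketched by exporting a single entry to both JSON and XML. The field names below are hypothetical, not XNAT's actual schema:

```python
import json
import xml.etree.ElementTree as ET

# Sketch: serialize one (hypothetical) data-dictionary entry to JSON and
# XML, the kind of export the service provides for interoperability.
entry = {
    "element": "scanQuality",
    "valueType": "text",
    "allowedValues": ["usable", "questionable", "unusable"],
}

as_json = json.dumps(entry, sort_keys=True)

elem = ET.Element("dataElement", name=entry["element"], type=entry["valueType"])
for v in entry["allowedValues"]:
    ET.SubElement(elem, "allowedValue").text = v
as_xml = ET.tostring(elem, encoding="unicode")

print(as_json)
print(as_xml)
```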

  13. Mammography with and without radiolucent positioning sheets : Comparison of projected breast area, pain experience, radiation dose and technical image quality

    NARCIS (Netherlands)

    Timmers, Janine; ten Voorde, Marloes; van Engen, Ruben E.; van Landsveld-Verhoeven, Cary; Pijnappel, Ruud; Droogh-de Greve, Kitty; den Heeten, Gerard J.; Broeders, Mireille J. M.

    2015-01-01

    Purpose: To compare projected breast area, image quality, pain experience and radiation dose between mammography performed with and without radiolucent positioning sheets. Methods: 184 women screened in the Dutch breast screening programme (May-June 2012) provided written informed consent to have

  14. The Pilot Project 'Optical Image Correlation' of the ESA Geohazards Thematic Exploitation Platform (GTEP)

    Science.gov (United States)

    Stumpf, André; Malet, Jean-Philippe

    2016-04-01

    For more than 20 years, "Earth Observation" (EO) satellites developed or operated by ESA have provided a wealth of data. In the coming years, the Sentinel missions, along with the Copernicus Contributing Missions as well as Earth Explorers and other Third Party missions, will provide routine monitoring of our environment at the global scale, thereby delivering an unprecedented amount of data. While the availability of the growing volume of environmental data from space represents a unique opportunity for science, general R&D, and applications, it also poses major challenges to fully exploit the potential of archived and daily incoming datasets. Those challenges comprise not only the discovery, access, processing, and visualization of large data volumes but also an increasing diversity of data sources and end users from different fields (e.g. EO, in-situ monitoring, and modeling). In this context, the GTEP (Geohazards Thematic Exploitation Platform) initiative aims to build an operational distributed processing platform to maximize the exploitation of EO data from past and future satellite missions for the detection and monitoring of natural hazards. This presentation focuses on the "Optical Image Correlation" Pilot Project (funded by ESA within the GTEP platform), whose objectives are to develop an easy-to-use, flexible and distributed processing chain for: 1) the automated reconstruction of surface Digital Elevation Models from stereo (and tristereo) pairs of Spot 6/7 and Pléiades satellite imagery, 2) the creation of ortho-images (panchromatic and multi-spectral) of Landsat 8, Sentinel-2, Spot 6/7 and Pléiades scenes, 3) the calculation of horizontal (E-N) displacement vectors based on sub-pixel image correlation. The processing chain is being implemented on the GEP cloud-based (Hadoop, MapReduce) environment and designed for analysis of surface displacements at local to regional scale (10-1000 km2), targeting in particular co-seismic displacement and slow
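The displacement-measurement step can be illustrated with phase correlation, a standard FFT-based image correlation technique; in practice a parabolic fit around the correlation peak refines the result to sub-pixel precision, which is omitted here for brevity. This is a generic sketch on synthetic data, not the GTEP processing chain's actual implementation:

```python
import numpy as np

# Phase correlation: recover the (row, col) offset between two images
# from the normalized cross-power spectrum, whose inverse FFT is a
# delta function at the displacement.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
true_shift = (3, 5)                            # rows, cols
mov = np.roll(ref, true_shift, axis=(0, 1))    # cyclically shifted copy

F1, F2 = np.fft.fft2(ref), np.fft.fft2(mov)
cross = F2 * np.conj(F1)                       # peak locates mov relative to ref
cross /= np.abs(cross) + 1e-12                 # keep phase only
corr = np.abs(np.fft.ifft2(cross))
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Peaks past the midpoint wrap around to negative shifts.
shift = tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
print(shift)
```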

  15. Did Caravaggio employ optical projections? An image analysis of the parity in the artist's paintings

    Science.gov (United States)

    Stork, David G.

    2011-03-01

    We examine one class of evidence put forth in support of the recent claim that the Italian Baroque master Caravaggio secretly employed optical projectors as a direct drawing aid. Specifically, we test the claims that there is an "abnormal number" of left-handed figures in his works and, more specifically, that "During the Del Monte period he had too many left-handed models." We also test whether there was a reversal in the handedness of specific models in different paintings. Such evidence would be consistent with the claim that Caravaggio switched from using a convex-lens projector to using a concave-mirror projector and would support, but not prove, the claim that Caravaggio used optical projections. We estimate the parity (+ or -) of each of Caravaggio's 76 appropriate oil paintings based on the handedness of figures, the orientation of asymmetric objects, placement of scabbards, depicted text, and so on, and search for statistically significant changes in handedness in figures. We also track the direction of the illumination over time in the artist's œuvre. We discuss some historical evidence as it relates to the question of his possible use of optics. We find the proportion of left-handed figures lower than that in the general population (not higher), and no significant change in estimated handedness even of individual models. Optical proponents have argued that Bacchus (1597) portrays a left-handed figure, but we give visual and cultural evidence showing that this figure is instead right-handed, thereby rebutting the claim that the painting was executed using optical projections. Moreover, scholars recently re-discovered the image of the artist with easel and canvas reflected in the carafe of wine at the front left in the tableau in Bacchus, showing that this painting was almost surely executed using traditional (non-optical) easel methods. 
    We conclude that there is 1) no statistically significant abnormally high number of left-handed figures in Caravaggio's œuvre, including
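The statistical question at the heart of this parity analysis, whether an observed count of left-handed figures is abnormally high given the general-population rate of roughly 10%, reduces to a one-sided binomial test. The counts below are illustrative, not the paper's actual tallies:

```python
from math import comb

# Exact one-sided binomial test: probability of seeing at least k
# left-handed figures among n, if handedness follows the base rate p.
# n matches the 76 paintings mentioned; k and p are assumed values.
n, k, p = 76, 5, 0.10

# P(X >= k) for X ~ Binomial(n, p)
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(round(p_value, 4))
```

A large p-value here means the observed count is entirely unremarkable under the base rate, which is the direction of the paper's finding (fewer left-handed figures than the population rate, not more).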

  16. Improving image quality in Electrical Impedance Tomography (EIT) using Projection Error Propagation-based Regularization (PEPR) technique: A simulation study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-03-01

    A Projection Error Propagation-based Regularization (PEPR) method is proposed that improves reconstructed image quality in Electrical Impedance Tomography (EIT). A projection error is produced by the misfit between the calculated and measured data in the reconstruction process. The variation of the projection error is integrated with the response matrix at each iteration, and the reconstruction is carried out in EIDORS. The PEPR method is studied with simulated boundary data for different inhomogeneity geometries. Simulated results demonstrate that the PEPR technique improves image reconstruction precision in EIDORS and hence can be successfully implemented to increase reconstruction accuracy in EIT. doi:10.5617/jeb.158 J Electr Bioimp, vol. 2, pp. 2-12, 2011
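For context, regularized EIT-style reconstruction has this general shape: a linearized inverse problem solved iteratively, with the data misfit (the "projection error") driving each update. The sketch below is a standard Tikhonov-regularized Gauss-Newton step on a random toy problem, not the PEPR algorithm itself:

```python
import numpy as np

# Generic regularized reconstruction for a linearized inverse problem:
# minimize ||v_meas - J x||^2 + lam * ||x||^2 by Newton-type updates.
rng = np.random.default_rng(2)
m, n = 40, 20
J = rng.standard_normal((m, n))          # sensitivity (Jacobian) matrix
x_true = rng.standard_normal(n)
v_meas = J @ x_true + 0.01 * rng.standard_normal(m)   # noisy boundary data

lam = 0.1                                # regularization parameter
x = np.zeros(n)
for _ in range(20):
    misfit = v_meas - J @ x              # "projection error" at this iterate
    dx = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ misfit - lam * x)
    x += dx

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(round(rel_err, 3))
```

Methods like PEPR differ in how the regularization term is built; here it is a fixed identity penalty, whereas PEPR feeds the evolving projection error back into the regularization.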

  17. Sliding thin slab, minimum intensity projection imaging for objective analysis of emphysema

    International Nuclear Information System (INIS)

    Satoh, Shiro; Ohdama, Shinichi; Shibuya, Hitoshi

    2006-01-01

    The aim of this study was to determine whether sliding thin-slab minimum intensity projection (STS-MinIP) imaging is more advantageous than thin-section computed tomography (CT) for detecting and assessing emphysema. Objective quantification of emphysema by STS-MinIP and thin-section CT was defined as the percentage of area lower than the threshold in the lung section at the level of the aortic arch, the tracheal carina, and 5 cm below the carina. Quantitative analysis in 100 subjects was performed and compared with pulmonary function test results. The ratio of the low attenuation area in the lung measured by STS-MinIP was significantly higher than that found by thin-section CT (P<0.01). The difference between STS-MinIP and thin-section CT was statistically evident even for mild emphysema and increased as the low attenuation area in the lung increased. Moreover, STS-MinIP showed a stronger regression relation with pulmonary function results than did thin-section CT (P<0.01). STS-MinIP can be recommended as a new morphometric method for detecting and assessing the severity of emphysema. (author)
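The STS-MinIP operation itself is compact: for each output slice, take the voxel-wise minimum over a thin slab of neighbouring slices. A sketch on a synthetic volume (slab thickness chosen arbitrarily for illustration):

```python
import numpy as np

# Sliding thin-slab minimum intensity projection (STS-MinIP):
# each output slice is the voxel-wise minimum over `slab` input slices.
vol = np.arange(5 * 4 * 4, dtype=float).reshape(5, 4, 4)   # (slices, y, x)
slab = 3                                                   # slab thickness

minip = np.stack([vol[i:i + slab].min(axis=0)
                  for i in range(vol.shape[0] - slab + 1)])
print(minip.shape)
```

Because emphysematous lung is low-attenuation, taking the minimum over a slab emphasizes those voxels, which is why the low-attenuation ratio measured on MinIP comes out higher than on single thin sections.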

  18. The Yosemite Extreme Panoramic Imaging Project: Monitoring Rockfall in Yosemite Valley with High-Resolution, Three-Dimensional Imagery

    Science.gov (United States)

    Stock, G. M.; Hansen, E.; Downing, G.

    2008-12-01

    Yosemite Valley experiences numerous rockfalls each year, with over 600 rockfall events documented since 1850. However, monitoring rockfall activity has proved challenging without high-resolution "basemap" imagery of the Valley walls. The Yosemite Extreme Panoramic Imaging Project, a partnership between the National Park Service and xRez Studio, has created an unprecedented image of Yosemite Valley's walls by utilizing gigapixel panoramic photography, LiDAR-based digital terrain modeling, and three-dimensional computer rendering. Photographic capture was accomplished by 20 separate teams shooting from key overlapping locations throughout Yosemite Valley. The shots were taken simultaneously in order to ensure uniform lighting, with each team taking over 500 overlapping shots from each vantage point. Each team's shots were then assembled into 20 gigapixel panoramas. In addition, all 20 gigapixel panoramas were projected onto a 1 meter resolution digital terrain model in three-dimensional rendering software, unifying Yosemite Valley's walls into a vertical orthographic view. The resulting image reveals the geologic complexity of Yosemite Valley in high resolution and represents one of the world's largest photographic captures of a single area. Several rockfalls have already occurred since image capture, and repeat photography of these areas clearly delineates rockfall source areas and failure dynamics. Thus, the imagery has already proven to be a valuable tool for monitoring and understanding rockfall in Yosemite Valley. It also sets a new benchmark for the quality of information a photographic image, enabled with powerful new imaging technology, can provide for the earth sciences.

  19. Multi-detector and systematic imaging system designed and developed within the New AGLAE project

    Energy Technology Data Exchange (ETDEWEB)

    Pichon, L.; Pacheco, C.; Moignard, B.; Lemasson, Q. [C2RMF - Palais du Louvre 14 quai F Mitterrand 75001, Paris (France); FR3605 - MCC/CNRS/UPMC (France); Guillou, T.; Walter, Ph [FR3605 - CC/CNRS/UPMC (France); LAMS - UMR 8220 - CNRS/UPMC - Seine, Paris (France)

    2013-07-01

    Full text: The New AGLAE project aims to establish a world-class facility for non-invasive analysis of Cultural Heritage materials. One of its objectives is to increase x-ray detection efficiency, making it possible to reduce the beam intensity, and hence the interaction with sensitive artworks, by a factor of ten. The multidisciplinary New AGLAE project will provide an exceptional, multipurpose beamline whose spatial resolution, beam stability and multi-particle detection capability are much higher than those of the previous facility. The New AGLAE will provide fundamental elements for understanding the structure of materials, their composition, their properties, and their change over time. One objective of this project is to design and set up a new data acquisition system. To that end, the surface area and the number of PIXE detectors have been increased: the 10 mm{sup 2} and 30 mm{sup 3} Si(Li) detectors, dedicated to low- and high-energy measurements respectively, were replaced by a cluster of five 50 mm{sup 2} S.D.D. detectors. While this multi-detector arrangement makes it possible to decrease the incident beam intensity by one order of magnitude, meaning less irradiation during analysis, it can also provide large and/or fast maps. To digitise the preamplifier pulses from the detectors, a custom Digital X-ray Processor provides both digital data and control signals compatible with a multiparameter, multichannel system. This multiparameter system saves each event from the x-ray, gamma and particle detectors, together with the X, Y position of the beam on the sample, as a list file. Furthermore, to draw several-cm-sized maps with 20/40 μm resolution, the scanning of the area combines a fast vertical magnetic deflection of the beam with a mechanical movement of the target. To process the data, several pieces of in-house software have been developed or updated to rebuild any matrix of spectra, to re-bin maps, and to process a series of

  20. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary is becoming increasingly important with the stepwise implementation of diagnosis-related groups (DRGs), for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
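    As a sketch of the document-oriented approach described above, the fragment below encodes a small piece of the ICD-10 hierarchy, with a cross-link to a semantically associated code, and queries it with Python's standard library. The element and attribute names are illustrative assumptions, not the CEN/TC 251 schema or the authors' actual representation:

```python
import xml.etree.ElementTree as ET

# Hypothetical hierarchical ICD-10 fragment; tag/attribute names are
# illustrative, not the actual CEN/TC 251 or authors' schema.
ICD_FRAGMENT = """
<classification system="ICD-10" lang="en">
  <category code="E10-E14" title="Diabetes mellitus">
    <category code="E10" title="Insulin-dependent diabetes mellitus">
      <category code="E10.2" title="With renal complications">
        <!-- link to a semantically associated code (topic-map style) -->
        <link rel="see-also" target="N08.3"/>
      </category>
    </category>
  </category>
</classification>
"""

def find_code(root, code):
    """Return the title of the category with the given code, or None."""
    for cat in root.iter("category"):
        if cat.get("code") == code:
            return cat.get("title")
    return None

def associated_codes(root, code):
    """Collect codes cross-linked from the given category."""
    for cat in root.iter("category"):
        if cat.get("code") == code:
            return [link.get("target") for link in cat.findall("link")]
    return []

root = ET.fromstring(ICD_FRAGMENT)
```

    Because the hierarchy is plain XML, the same document can be traversed, transformed, or linked to other sources with any standard XML tooling.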

  1. Image quality and dose in mammography in 17 countries in Africa, Asia and Eastern Europe: Results from IAEA projects

    International Nuclear Information System (INIS)

    Ciraj-Bjelac, Olivera; Avramova-Cholakova, Simona; Beganovic, Adnan; Economides, Sotirios; Faj, Dario; Gershan, Vesna; Grupetta, Edward; Kharita, M.H.; Milakovic, Milomir; Milu, Constantin; Muhogora, Wilbroad E.; Muthuvelu, Pirunthavany; Oola, Samuel; Setayeshi, Saeid

    2012-01-01

    Purpose: The objective is to study mammography practice from an optimisation point of view by assessing the impact of simple and immediately implementable corrective actions on image quality. Materials and methods: This prospective multinational study included 54 mammography units in 17 countries. More than 21,000 mammography images were evaluated using a three-level image quality scoring system. Following the initial assessment, appropriate corrective actions were implemented and image quality was re-assessed in 24 units. Results: The fraction of images that were considered acceptable without any remark in the first phase (before the implementation of corrective actions) was 70% and 75% for cranio-caudal and medio-lateral oblique projections, respectively. The main causes of poor image quality before corrective actions were related to film processing, damaged or scratched image receptors, film-screen combinations that were not spectrally matched, inappropriate radiographic techniques and lack of training. The average glandular dose to a standard breast was 1.5 mGy (range 0.59–3.2 mGy). After optimisation the frequency of poor-quality images decreased, but the relative contributions of the various causes remained similar. Image quality improvements following appropriate corrective actions were up to 50 percentage points in some facilities. Conclusions: Poor image quality is a major source of unnecessary radiation dose to the breast. An increased awareness of good-quality mammograms is of particular importance for countries that are moving towards the introduction of population-based screening programmes. The study demonstrated how simple and low-cost measures can be a valuable tool in improving image quality in mammography.

  2. Nonintrusive iris image acquisition system based on a pan-tilt-zoom camera and light stripe projection

    Science.gov (United States)

    Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie

    2009-03-01

    Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications. This is mainly due to user inconvenience during the image acquisition phase. Specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom of movement while standing, and capture of good-quality iris images in an acceptable time. The proposed system makes the following three contributions compared with previous works: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection, (2) the iris location in the large capture volume is found quickly thanks to 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast, using the 3-D position of the face estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user.

  3. SU-C-209-05: Monte Carlo Model of a Prototype Backscatter X-Ray (BSX) Imager for Projective and Selective Object-Plane Imaging

    International Nuclear Information System (INIS)

    Rolison, L; Samant, S; Baciak, J; Jordan, K

    2016-01-01

    Purpose: To develop a Monte Carlo N-Particle (MCNP) model for the validation of a prototype backscatter x-ray (BSX) imager, and for optimization of BSX technology for medical applications, including selective object-plane imaging. Methods: BSX is an emerging technology that represents an alternative to conventional computed tomography (CT) and projective digital radiography (DR). It employs detectors located on the same side as the incident x-ray source, making use of backscatter and avoiding a ring geometry enclosing the imaged object. Current BSX imagers suffer from low spatial resolution. An MCNP model was designed to replicate a BSX prototype used for flaw detection in industrial materials. This prototype consisted of a 1.5mm diameter 60kVp pencil beam surrounded by a ring of four 5.0cm diameter NaI scintillation detectors. The imaging phantom consisted of a 2.9cm thick aluminum plate with five 0.6cm diameter holes drilled halfway through. The experimental image was created using a raster scanning motion (in 1.5mm increments). Results: A qualitative comparison between the physical and simulated images showed very good agreement, with 1.5mm spatial resolution in the plane perpendicular to the incident x-ray beam. The MCNP model developed the concept of radiography by selective plane detection (RSPD) for BSX, whereby specific object planes can be imaged by varying kVp. 10keV increments in mean x-ray energy yielded 4mm thick slice resolution in the phantom. Image resolution in the MCNP model can be further increased by increasing the number of detectors and decreasing the raster step size. Conclusion: MCNP modelling was used to validate a prototype BSX imager and introduce the RSPD concept, allowing for selective object-plane imaging. There was very good visual agreement between the experimental and MCNP imaging. Beyond optimizing system parameters for the existing prototype, new geometries can be investigated for volumetric image acquisition in medical applications. This material is

  4. SU-C-209-05: Monte Carlo Model of a Prototype Backscatter X-Ray (BSX) Imager for Projective and Selective Object-Plane Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Rolison, L; Samant, S; Baciak, J; Jordan, K [University of Florida, Gainesville, FL (United States)

    2016-06-15

    Purpose: To develop a Monte Carlo N-Particle (MCNP) model for the validation of a prototype backscatter x-ray (BSX) imager, and for optimization of BSX technology for medical applications, including selective object-plane imaging. Methods: BSX is an emerging technology that represents an alternative to conventional computed tomography (CT) and projective digital radiography (DR). It employs detectors located on the same side as the incident x-ray source, making use of backscatter and avoiding a ring geometry enclosing the imaged object. Current BSX imagers suffer from low spatial resolution. An MCNP model was designed to replicate a BSX prototype used for flaw detection in industrial materials. This prototype consisted of a 1.5mm diameter 60kVp pencil beam surrounded by a ring of four 5.0cm diameter NaI scintillation detectors. The imaging phantom consisted of a 2.9cm thick aluminum plate with five 0.6cm diameter holes drilled halfway through. The experimental image was created using a raster scanning motion (in 1.5mm increments). Results: A qualitative comparison between the physical and simulated images showed very good agreement, with 1.5mm spatial resolution in the plane perpendicular to the incident x-ray beam. The MCNP model developed the concept of radiography by selective plane detection (RSPD) for BSX, whereby specific object planes can be imaged by varying kVp. 10keV increments in mean x-ray energy yielded 4mm thick slice resolution in the phantom. Image resolution in the MCNP model can be further increased by increasing the number of detectors and decreasing the raster step size. Conclusion: MCNP modelling was used to validate a prototype BSX imager and introduce the RSPD concept, allowing for selective object-plane imaging. There was very good visual agreement between the experimental and MCNP imaging. Beyond optimizing system parameters for the existing prototype, new geometries can be investigated for volumetric image acquisition in medical applications. This material is

  5. Diagnosing and mapping pulmonary emphysema on X-ray projection images: incremental value of grating-based X-ray dark-field imaging.

    Science.gov (United States)

    Meinel, Felix G; Schwab, Felix; Schleede, Simone; Bech, Martin; Herzen, Julia; Achterhold, Klaus; Auweter, Sigrid; Bamberg, Fabian; Yildirim, Ali Ö; Bohla, Alexander; Eickelberg, Oliver; Loewen, Rod; Gifford, Martin; Ruth, Ronald; Reiser, Maximilian F; Pfeiffer, Franz; Nikolaou, Konstantin

    2013-01-01

    To assess whether grating-based X-ray dark-field imaging can increase the sensitivity of X-ray projection images in the diagnosis of pulmonary emphysema and allow a more accurate assessment of emphysema distribution. Lungs from three mice with pulmonary emphysema and three healthy mice were imaged ex vivo using a laser-driven compact synchrotron X-ray source. Median signal intensities of transmission (T), dark-field (V) and a combined parameter (normalized scatter) were compared between the emphysema and control groups. To determine the diagnostic value of each parameter in differentiating between healthy and emphysematous lung tissue, a receiver-operating-characteristic (ROC) curve analysis was performed on both a per-pixel and a per-individual basis. Parametric maps of emphysema distribution were generated using the transmission, dark-field and normalized scatter signals and correlated with histopathology. Transmission values relative to water were higher for emphysematous lungs than for control lungs (1.11 vs. 1.06). Parametric mapping of emphysema provides color-coded maps, which show the best correlation with histopathology. In a murine model, the complementary information provided by X-ray transmission and dark-field images adds incremental diagnostic value in detecting pulmonary emphysema and visualizing its regional distribution as compared to conventional X-ray projections.

  6. PLÉIADES PROJECT: ASSESSMENT OF GEOREFERENCING ACCURACY, IMAGE QUALITY, PANSHARPENING PERFORMANCE AND DSM/DTM QUALITY

    Directory of Open Access Journals (Sweden)

    H. Topan

    2016-06-01

    Full Text Available Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence, by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating satisfying image quality. The SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by most common

  7. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, filtered back-projection (FBP) is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step in suppressing artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, in this paper an improved wavelet denoising method combined with the parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms, and different wavelet bases combined with three filters, were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation measures, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the improved FBP based on the db2 wavelet and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
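    The filtering step of FBP mentioned above can be illustrated with a minimal, pure-Python sketch of the Ram-Lak (ramp) filter's frequency response and a Hann-windowed variant that suppresses the noise-amplifying high frequencies. The function names and the frequency-sampling convention are illustrative assumptions, not the paper's code:

```python
import math

def ram_lak(n):
    """Ram-Lak (ramp) filter frequency response |f|, with n frequency
    bins sampled on [-0.5, 0.5); bin n//2 is the zero frequency."""
    return [abs((k - n // 2) / n) for k in range(n)]

def hann_windowed(n):
    """Ramp filter apodised with a Hann window: high frequencies, where
    projection noise dominates, are attenuated toward zero."""
    ramp = ram_lak(n)
    return [r * 0.5 * (1 + math.cos(2 * math.pi * (k - n // 2) / n))
            for k, r in enumerate(ramp)]
```

    In a full FBP implementation, each projection's FFT would be multiplied by one of these responses before back-projection; the window choice trades noise suppression against spatial resolution.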

  8. Assessing natural hazard risk using images and data

    Science.gov (United States)

    Mccullough, H. L.; Dunbar, P. K.; Varner, J. D.; Mungov, G.

    2012-12-01

    Photographs and other visual media provide valuable pre- and post-event data for natural hazard assessment. Scientific research, mitigation, and forecasting rely on visual data for risk analysis, inundation mapping and historic records. Instrumental data reveal only a portion of the whole story; photographs explicitly illustrate the physical and societal impacts of an event. The volume of visual data is rapidly increasing as portable high-resolution cameras and video recorders become more widely available. Incorporating these data into archives ensures a more complete historical account of events. Integrating natural hazards data (such as tsunami, earthquake and volcanic eruption events), socio-economic information, and tsunami deposits and runups, along with images and photographs, enhances event comprehension. Global historic databases at NOAA's National Geophysical Data Center (NGDC) consolidate these data, providing the user with easy access to a network of information. NGDC's Natural Hazards Image Database (ngdc.noaa.gov/hazardimages) was recently improved to provide a more efficient and dynamic user interface. It uses the Google Maps API and Keyhole Markup Language (KML) to provide geographic context for the images and events. Descriptive tags, or keywords, have been applied to each image, enabling easier navigation and discovery. In addition, the Natural Hazards Map Viewer (maps.ngdc.noaa.gov/viewers/hazards) provides the ability to search and browse data layers on a Mercator-projection globe with a variety of map backgrounds. This combination of features creates a simple and effective way to enhance our understanding of hazard events and risks using imagery.
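    Since KML is itself an XML dialect, a placemark like those used to geolocate hazard images can be generated with nothing but the Python standard library. The function below is a hypothetical sketch (NGDC's actual KML output is not shown in the abstract); note that KML orders coordinates longitude-first:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def hazard_placemark(name, lon, lat, description=""):
    """Build a minimal KML Placemark for a geolocated hazard image.
    KML coordinate strings are ordered longitude,latitude."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    pm = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    ET.SubElement(pm, f"{{{KML_NS}}}description").text = description
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

# Illustrative example data, not an actual database record.
doc = hazard_placemark("Post-event survey photograph", 141.0, 38.3,
                       "Hypothetical geolocated hazard image")
```

    A document of such placemarks can be loaded directly into Google Maps or other KML-aware viewers to give images geographic context.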

  9. Image and diagnosis quality of X-ray image transmission via cell phone camera: a project study evaluating quality and reliability.

    Directory of Open Access Journals (Sweden)

    Hans Goost

    Full Text Available INTRODUCTION: Developments in telemedicine have not produced any relevant benefits for orthopedics and trauma surgery to date. For the present project study, several parameters were examined during the assessment of x-ray images which had been photographed and transmitted via cell phone. MATERIALS AND METHODS: A total of 100 x-ray images of various body regions were photographed with a Nokia cell phone and transmitted via email or MMS. Next, the transmitted photographs were reviewed on a laptop computer by five medical specialists and assessed regarding quality and diagnosis. RESULTS: Due to their poor quality, the transmitted MMS images could not be evaluated, and this path of transmission was therefore excluded. The mean size of the transmitted x-ray email images was 394 kB (range: 265-590 kB, SD ± 59); the average transmission time was 3.29 min ± 8 (CI 95%: 1.7-4.9). Applying a score from 1-10 (very poor to excellent), mean image quality was 5.8. In 83.2 ± 4% (mean value ± SD) of cases (median 82; 80-89%), there was agreement between the final diagnosis and the assessment by the five medical experts who had received the images. However, there was a markedly low concurrence ratio in the thoracic area and in pediatric injuries. DISCUSSION: While the rate of accurate diagnosis and indication for surgery was high, with a concurrence ratio of 83%, considerable differences existed between the assessed regions, with the lowest values for thoracic images. Teleradiology is a cost-effective, rapid method which can be applied wherever wireless cell phone reception is available. In our opinion, this method is in principle suitable for clinical use, enabling the physician on duty to agree on appropriate measures with colleagues located elsewhere via x-ray image transmission on a cell phone.

  10. Fundamental remote science research program. Part 2: Status report of the mathematical pattern recognition and image analysis project

    Science.gov (United States)

    Heydorn, R. P.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for the purpose of making selected inferences about the Earth. This report summarizes the progress that has been made toward this program goal by each of the principal investigators in the MPRIA Program.

  11. Digital tomosynthesis parallel imaging computational analysis with shift and add and back projection reconstruction algorithms.

    Science.gov (United States)

    Chen, Ying; Balla, Apuroop; Rayford II, Cleveland E; Zhou, Weihua; Fang, Jian; Cong, Linlin

    2010-01-01

    Digital tomosynthesis is a novel technology that has been developed for various clinical applications. Parallel imaging configurations are utilised in a few tomosynthesis imaging areas, such as digital chest tomosynthesis. Recently, parallel imaging configurations for breast tomosynthesis have begun to appear as well. In this paper, we present a computational analysis of impulse response characterisation as the starting point of our research efforts to optimise parallel imaging configurations. Results suggest that impulse response computational analysis is an effective method to compare and optimise imaging configurations.
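    The shift-and-add reconstruction named in the record's title can be sketched in a minimal 1-D form: each projection is shifted by a plane-specific offset and the results are averaged, so structures in the selected plane reinforce while out-of-plane structures blur. This is an illustrative toy under assumed integer-pixel shifts and sign convention, not the authors' implementation:

```python
def shift_and_add(projections, shifts):
    """Reconstruct one focal plane from 1-D projections by reading each
    projection at an offset position (its plane-specific shift, in
    detector pixels) and averaging. In-plane structures align and
    reinforce; out-of-plane structures spread out and blur."""
    n = len(projections[0])
    plane = [0.0] * n
    for proj, s in zip(projections, shifts):
        for i in range(n):
            j = i + s          # assumed sign convention for the shift
            if 0 <= j < n:
                plane[i] += proj[j]
    return [v / len(projections) for v in plane]
```

    Repeating this for a set of shift vectors, one per depth, yields a stack of focal planes from a single projection set, which is the basic appeal of tomosynthesis over a single radiograph.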

  12. Fast, accurate, and robust automatic marker detection for motion correction based on oblique kV or MV projection image pairs

    International Nuclear Information System (INIS)

    Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den

    2010-01-01

    Purpose: A robust and accurate method is proposed that allows the automatic detection of fiducial markers in MV and kV projection image pairs. The method allows automatic correction of inter- and intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker-enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images, and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning are assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground truth and on clinical data sets of 16 patients using manual marker annotations as ground truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4±1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images
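    The dynamic-programming idea in the abstract, choosing one candidate per marker so that the sum of per-candidate (intensity) costs and pairwise geometric costs is globally minimal, can be sketched as a chain-structured optimization. This is a hypothetical simplification: the paper's real cost function combines 2D intensity with full 3D shape information, whereas here `pair_cost` only couples consecutive markers:

```python
def best_configuration(candidates, intensity_cost, pair_cost):
    """Pick one candidate per marker minimizing the total cost:
    sum of intensity_cost(marker, candidate) plus pair_cost between the
    candidates chosen for consecutive markers. Dynamic programming over
    the marker chain guarantees the globally optimal configuration."""
    # dp[c] = best cost so far with the current marker assigned candidate c
    dp = [intensity_cost(0, c) for c in range(len(candidates[0]))]
    back = []
    for i in range(1, len(candidates)):
        row, brow = [], []
        for c in range(len(candidates[i])):
            best, arg = min(
                (dp[p] + pair_cost(candidates[i - 1][p], candidates[i][c]), p)
                for p in range(len(candidates[i - 1])))
            row.append(best + intensity_cost(i, c))
            brow.append(arg)
        dp = row
        back.append(brow)
    # Backtrack from the cheapest final state to recover the assignment.
    c = min(range(len(dp)), key=lambda k: dp[k])
    picks = [c]
    for brow in reversed(back):
        c = brow[c]
        picks.append(c)
    return list(reversed(picks))
```

    The chain structure keeps the search polynomial in the number of candidates, rather than exponential in the number of markers, which is why the globally optimal configuration can always be found.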

  13. CT image reconstruction of steel pipe section from few projections using the method of rotating polar-coordinate

    International Nuclear Information System (INIS)

    Peng Shuaijun; Wu Zhifang

    2008-01-01

    Fast online inspection in steel pipe production is a big challenge. Radiographic CT imaging, a high-performance non-destructive testing method, is quite appropriate for the inspection and quality control of steel pipes. The method of rotating polar coordinates is used to reconstruct the steel pipe cross-section from few projections for the purpose of online inspection. It reduces the number of projections needed and the data-collection time, and markedly accelerates the reconstruction algorithm, saving inspection time. The results of simulation experiments and actual experiments indicate that the image quality and reconstruction time of the rotating polar-coordinate method essentially meet the requirements of online inspection of the steel pipe cross-section. The study is of theoretical significance and the method is expected to be widely used in practice. (authors)

  14. THERMAL IMAGING OF Si, GaAs AND GaN -BASED DEVICES WITHIN THE MICROTHERM PROJECT

    OpenAIRE

    Pavageau , S.; Tessier , G.; Filloy , C.; Jerosolimski , G.; Fournier , D.; Polignano , M.-L.; Mica , I.; Cassette , S.; Aubry , R.; Durand , O.

    2005-01-01

    Submitted on behalf of EDA Publishing Association (http://irevues.inist.fr/handle/2042/5920); International audience; Within the European project Microtherm, we have developed a CCD-based thermoreflectance system which delivers thermal images of working integrated circuits with high spatial and thermal resolutions (down to 350 nm and 0.1 K, respectively). We illustrate the performance of this set-up on several classes of semiconductor devices including high power transistors and transistor ar...

  15. A Visual Database System for Image Analysis on Parallel Computers and its Application to the EOS Amazon Project

    Science.gov (United States)

    Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.

    1996-01-01

    The goal of this task was to create a design and prototype implementation of a database environment that is particularly suited to handling the image, vision, and scientific data associated with NASA's EOS Amazon project. The focus was on a data model and query facilities designed to execute efficiently on parallel computers. A key feature of the environment is an interface which allows a scientist to specify high-level directives about how query execution should occur.

  16. Final report on LDRD project : single-photon-sensitive imaging detector arrays at 1600 nm

    International Nuclear Information System (INIS)

    Childs, Kenton David; Serkland, Darwin Keith; Geib, Kent Martin; Hawkins, Samuel D.; Carroll, Malcolm S.; Klem, John Frederick; Sheng, Josephine Juin-Jye; Patel, Rupal K.; Bolles, Desta; Bauer, Tom M.; Koudelka, Robert

    2006-01-01

    The key need that this project has addressed is a short-wave infrared light detection and ranging (LIDAR) imaging detector operating at temperatures greater than 100K, as desired for nonproliferation work and for other customers. Several novel device structures to improve avalanche photodiodes (APDs) were fabricated to achieve the desired APD performance. A primary challenge to achieving high-sensitivity APDs at 1550 nm is that the small band-gap materials (e.g., InGaAs or Ge) necessary to detect low-energy photons exhibit higher dark counts and higher multiplication noise compared to materials like silicon. To overcome these historical problems, APDs were designed and fabricated using separate absorption and multiplication (SAM) regions. The absorption regions used InGaAs or Ge to leverage these materials' 1550 nm sensitivity. Geiger-mode detection was chosen to circumvent gain-noise issues in the III-V and Ge multiplication regions, while a novel Ge/Si device was built to examine the utility of transferring photoelectrons into a silicon multiplication region. Silicon is known to have very good analog and GM multiplication properties. The proposed devices represented a high-risk, high-reward approach. Therefore one primary goal of this work was to experimentally resolve uncertainty about the novel APD structures. This work specifically examined three different designs. An InGaAs/InAlAs Geiger-mode (GM) structure was proposed for the superior multiplication properties of the InAlAs. The hypothesis to be tested in this structure was whether InAlAs really presented an advantage in GM. A Ge/Si SAM was proposed, representing the best possible multiplication material (i.e., silicon); however, significant uncertainty existed about both the Ge material quality and the ability to transfer photoelectrons across the Ge/Si interface. Finally, a third, pure-germanium GM structure was proposed because bulk germanium has been reported to have better dark count properties. However, significant

  17. Final report on LDRD project : single-photon-sensitive imaging detector arrays at 1600 nm.

    Energy Technology Data Exchange (ETDEWEB)

    Childs, Kenton David; Serkland, Darwin Keith; Geib, Kent Martin; Hawkins, Samuel D.; Carroll, Malcolm S.; Klem, John Frederick; Sheng, Josephine Juin-Jye; Patel, Rupal K.; Bolles, Desta; Bauer, Tom M.; Koudelka, Robert

    2006-11-01

    The key need that this project has addressed is a short-wave infrared light detection and ranging (LIDAR) imaging detector operating at temperatures greater than 100K, as desired for nonproliferation work and for other customers. Several novel device structures to improve avalanche photodiodes (APDs) were fabricated to achieve the desired APD performance. A primary challenge to achieving high-sensitivity APDs at 1550 nm is that the small band-gap materials (e.g., InGaAs or Ge) necessary to detect low-energy photons exhibit higher dark counts and higher multiplication noise compared to materials like silicon. To overcome these historical problems, APDs were designed and fabricated using separate absorption and multiplication (SAM) regions. The absorption regions used InGaAs or Ge to leverage these materials' 1550 nm sensitivity. Geiger-mode detection was chosen to circumvent gain-noise issues in the III-V and Ge multiplication regions, while a novel Ge/Si device was built to examine the utility of transferring photoelectrons into a silicon multiplication region. Silicon is known to have very good analog and GM multiplication properties. The proposed devices represented a high-risk, high-reward approach. Therefore one primary goal of this work was to experimentally resolve uncertainty about the novel APD structures. This work specifically examined three different designs. An InGaAs/InAlAs Geiger-mode (GM) structure was proposed for the superior multiplication properties of the InAlAs. The hypothesis to be tested in this structure was whether InAlAs really presented an advantage in GM. A Ge/Si SAM was proposed, representing the best possible multiplication material (i.e., silicon); however, significant uncertainty existed about both the Ge material quality and the ability to transfer photoelectrons across the Ge/Si interface. Finally, a third, pure-germanium GM structure was proposed because bulk germanium has been reported to have better dark count properties. However, significant

  18. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to significantly higher radiation doses and longer scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reducing reconstructed image quality. This research is focused on combining gradient-based Douglas-Rachford splitting with discrete wavelet packet shrinkage image denoising to design an algorithm for the reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged at the Biomedical Imaging and Therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.
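The abstract describes a sparsity-promoting proximal-splitting reconstruction. As a rough illustration of that family of methods (not the authors' algorithm, which combines gradient-based Douglas-Rachford splitting with wavelet packet shrinkage), the sketch below recovers a sparse signal from undersampled linear measurements using plain proximal-gradient (ISTA) iterations; the sizes, operator, and parameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined system: 40 measurements of a 100-sample sparse "image".
n, m = 100, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the projection operator
x_true = np.zeros(n)
x_true[[5, 37, 62, 90]] = [1.0, -0.8, 0.5, 1.2]
b = A @ x_true

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink coefficients toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.01, step=0.1, iters=800):
    # Alternate a gradient step on the data-fit term with sparsity shrinkage.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + step * A.T @ (b - A @ x), step * lam)
    return x

x_hat = ista(A, b)
print(np.linalg.norm(x_hat - x_true))
```

The shrinkage plays the role of the wavelet-domain denoising step in the paper: with far fewer measurements than unknowns, the sparsity prior is what makes the reconstruction well-posed.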

  19. Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.

    Science.gov (United States)

    Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal

    2011-06-01

    This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.

  20. MR-guided PET motion correction in LOR space using generic projection data for image reconstruction with PRESTO

    International Nuclear Information System (INIS)

    Scheins, J.; Ullisch, M.; Tellmann, L.; Weirich, C.; Rota Kops, E.; Herzog, H.; Shah, N.J.

    2013-01-01

    The BrainPET scanner from Siemens, designed as a hybrid MR/PET system for simultaneous acquisition of both modalities, provides high-resolution PET images with an optimum resolution of 3 mm. However, significant head motion often compromises the achievable image quality, e.g. in neuroreceptor studies of the human brain. This limitation can be overcome by tracking the head motion and accurately correcting the measured Lines-of-Response (LORs). For this purpose, we present a novel method which advantageously combines MR-guided motion tracking with the capabilities of the reconstruction software PRESTO (PET Reconstruction Software Toolkit) to convert motion-corrected LORs into highly accurate generic projection data. In this way, the high-resolution PET images achievable with PRESTO can also be obtained in the presence of severe head motion.

  1. Reflections from a Creative Community-Based Participatory Research Project Exploring Health and Body Image with First Nations Girls

    Directory of Open Access Journals (Sweden)

    Jennifer M. Shea PhD

    2013-02-01

    In Canada, Aboriginal peoples often experience a multitude of inequalities when compared with the general population, particularly in relation to health (e.g., increased incidence of diabetes). These inequalities are rooted in a negative history of colonization. Decolonizing methodologies recognize these realities and aim to shift communities from being the objects of research to being collaborative partners in the research process. This article describes a qualitative community-based participatory research project focused on health and body image with First Nations girls in a Tribal Council region in Western Canada. We discuss our project design and the incorporation of creative methods (e.g., photovoice) to foster integration and collaboration in keeping with decolonizing methodology principles. This article is both descriptive and reflective: it summarizes our project and discusses lessons learned from the process, integrating evaluations from the participating girls as well as our reflections as researchers.

  2. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.

    2013-05-24

    Background: Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper the generation of semantically rich data from the NWChem computational chemistry software is discussed within the Chemical Markup Language (CML) framework. Results: The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML-compliant XML data files. The FoX library was expanded to represent the lexical input files used by the computational chemistry software. Conclusions: The production of CML-compliant XML files for the computational chemistry software NWChem can be accomplished relatively easily using the FoX library. A unified computational chemistry (CompChem) convention and dictionary needs to be developed through a community-based effort. The long-term goal is to enable a researcher to do Google-style chemistry and physics searches.
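For readers unfamiliar with CML output, the sketch below emits a minimal CML-style XML fragment with Python's standard library. This is illustrative only: NWChem's actual writer is Fortran code using the FoX library, and the exact element and attribute set is governed by the CML schema and the proposed CompChem convention, not by this toy:

```python
import xml.etree.ElementTree as ET

CML = "http://www.xml-cml.org/schema"  # CML namespace
ET.register_namespace("cml", CML)

# A water molecule as a molecule/atomArray/atom hierarchy (CML-style names).
mol = ET.Element(f"{{{CML}}}molecule", id="m1")
atoms = ET.SubElement(mol, f"{{{CML}}}atomArray")
for aid, elem in [("a1", "O"), ("a2", "H"), ("a3", "H")]:
    ET.SubElement(atoms, f"{{{CML}}}atom", id=aid, elementType=elem)

xml_text = ET.tostring(mol, encoding="unicode")
print("atomArray" in xml_text)  # → True
```

The value of such markup is that a parser can look up `elementType` or `atomArray` semantically via the schema and dictionary, rather than scraping a program-specific log file.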

  3. The Biological Connection Markup Language: a SBGN-compliant format for visualization, filtering and analysis of biological pathways.

    Science.gov (United States)

    Beltrame, Luca; Calura, Enrica; Popovici, Razvan R; Rizzetto, Lisa; Guedez, Damariz Rivero; Donato, Michele; Romualdi, Chiara; Draghici, Sorin; Cavalieri, Duccio

    2011-08-01

    Many models and analyses of signaling pathways have been proposed. However, none of them takes into account that a biological pathway is not a fixed system; instead, it depends on the organism, tissue and cell type, as well as on physiological, pathological and experimental conditions. The Biological Connection Markup Language (BCML) is a format to describe, annotate and visualize pathways. BCML is able to store multiple layers of information, permitting a selective view of the pathway as it exists and/or behaves in specific organisms, tissues and cells. Furthermore, BCML can be automatically converted into data formats suitable for analysis and into a fully SBGN-compliant graphical representation, making it an important tool that can be used by both computational biologists and 'wet lab' scientists. The XML schema and the BCML software suite are freely available under the LGPL for download at http://bcml.dc-atlas.net. They are implemented in Java and supported on MS Windows, Linux and OS X.

  4. Restructuring an EHR system and the Medical Markup Language (MML) standard to improve interoperability by archetype technology.

    Science.gov (United States)

    Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki

    2015-01-01

    In 2001, we developed an EHR system for regional healthcare information exchange and to provide individual patient data to patients. This system was adopted in three regions in Japan. We also developed the Medical Markup Language (MML) standard for inter- and intra-hospital communications. The system was built on a legacy platform, however, and had not been appropriately maintained or updated to meet clinical requirements. To reduce future maintenance costs, we reconstructed the EHR system using archetype technology on the Ruby on Rails platform, and generated MML-equivalent forms from archetypes. The system was deployed as a cloud-based system for preliminary use as a regional EHR. The system now has the capability to keep up with new requirements while maintaining semantic interoperability through archetype technology. It is also more flexible than the legacy EHR system.

  5. The utility of three-dimensional optical projection tomography in nerve injection injury imaging

    Czech Academy of Sciences Publication Activity Database

    Cvetko, E.; Čapek, Martin; Damjanovska, M.; Reina, M. A.; Eržen, I.; Stopar-Pintarič, T.

    2015-01-01

    Roč. 70, č. 8 (2015), s. 939-947 ISSN 0003-2409 R&D Projects: GA ČR(CZ) GA13-12412S; GA MŠk(CZ) LH13028 Institutional support: RVO:67985823 Keywords : optical projection tomography * 3D nerve visualization * nerve disruption Subject RIV: EA - Cell Biology Impact factor: 3.794, year: 2015

  6. Implementation of a dedicated digital projectional radiographic system in thoracic imaging

    International Nuclear Information System (INIS)

    Aberle, D.R.; Batra, P.; Hayrapetian, A.S.; Brown, K.; Morioka, C.A.; Steckel, R.J.

    1988-01-01

    An integrated digital radiographic system was evaluated with respect to image quality and impact on diagnosis relative to conventional chest radiographs for a variety of focal and diffuse lung processes. Digital images were acquired with a stimulable phosphor plate detector that was scanned by a semiconductor laser for immediate digitization to a 2,048 × 2,464 × 10-bit image. Digital images were displayed on a 2,048-line monitor and printed on 14 × 17-inch film using a laser film printer (Kodak). Preliminary results with this system, including the effects of user interaction with the display monitor, inverse intensity display, and regional magnification techniques, indicate that it may be successfully implemented for thoracic imaging.

  7. Descriptive Analysis on the Impacts of Universal Zero-Markup Drug Policy on a Chinese Urban Tertiary Hospital.

    Directory of Open Access Journals (Sweden)

    Wei Tian

    Universal Zero-Markup Drug Policy (UZMDP) mandates no price mark-ups on any drug dispensed by a healthcare institution, and covers the medicines not included in China's National Essential Medicine System. Five tertiary hospitals in Beijing, China implemented UZMDP in 2012. Its impacts on these hospitals are unknown. We described the effects of UZMDP on a participating hospital, Jishuitan Hospital, Beijing, China (JST). This retrospective longitudinal study examined the hospital-level data of JST and city-level data of tertiary hospitals of Beijing, China (BJT) 2009-2015. Rank-sum tests and join-point regression analyses were used to assess absolute changes and differences in trends, respectively. In absolute terms, after the UZMDP implementation, there were increased annual patient-visits and decreased ratios of medicine-to-healthcare-charges (RMOH) in JST outpatient and inpatient services; however, in the outpatient service, physician work-days decreased and physician workload and inflation-adjusted per-visit healthcare charges increased, while inpatient physician work-days increased and the inpatient mortality-rate fell. Interestingly, the decreasing trend in inpatient mortality-rate was neutralized after UZMDP implementation. Compared with BJT and under the influence of UZMDP, JST outpatient and inpatient services both had increasing trends in annual patient-visits (annual percentage changes [APC] = 8.1% and 6.5%, respectively) and decreasing trends in RMOH (APC = -4.3% and -5.4%, respectively), while the JST outpatient service had an increasing trend in inflation-adjusted per-visit healthcare charges (APC = 3.4%) and the JST inpatient service had a decreasing trend in inflation-adjusted per-visit medicine-charges (APC = -5.2%). Implementation of UZMDP seems to increase annual patient-visits, reduce RMOH and have different impacts on outpatient and inpatient services in a Chinese urban tertiary hospital.

  8. Descriptive Analysis on the Impacts of Universal Zero-Markup Drug Policy on a Chinese Urban Tertiary Hospital.

    Science.gov (United States)

    Tian, Wei; Yuan, Jiangfan; Yang, Dong; Zhang, Lanjing

    2016-01-01

    Universal Zero-Markup Drug Policy (UZMDP) mandates no price mark-ups on any drug dispensed by a healthcare institution, and covers the medicines not included in China's National Essential Medicine System. Five tertiary hospitals in Beijing, China implemented UZMDP in 2012. Its impacts on these hospitals are unknown. We described the effects of UZMDP on a participating hospital, Jishuitan Hospital, Beijing, China (JST). This retrospective longitudinal study examined the hospital-level data of JST and city-level data of tertiary hospitals of Beijing, China (BJT) 2009-2015. Rank-sum tests and join-point regression analyses were used to assess absolute changes and differences in trends, respectively. In absolute terms, after the UZMDP implementation, there were increased annual patient-visits and decreased ratios of medicine-to-healthcare-charges (RMOH) in JST outpatient and inpatient services; however, in the outpatient service, physician work-days decreased and physician workload and inflation-adjusted per-visit healthcare charges increased, while inpatient physician work-days increased and the inpatient mortality-rate fell. Interestingly, the decreasing trend in inpatient mortality-rate was neutralized after UZMDP implementation. Compared with BJT and under the influence of UZMDP, JST outpatient and inpatient services both had increasing trends in annual patient-visits (annual percentage changes [APC] = 8.1% and 6.5%, respectively) and decreasing trends in RMOH (APC = -4.3% and -5.4%, respectively), while the JST outpatient service had an increasing trend in inflation-adjusted per-visit healthcare charges (APC = 3.4%) and the JST inpatient service had a decreasing trend in inflation-adjusted per-visit medicine-charges (APC = -5.2%). Implementation of UZMDP seems to increase annual patient-visits, reduce RMOH and have different impacts on outpatient and inpatient services in a Chinese urban tertiary hospital.
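The annual percentage change (APC) figures reported above come from join-point (log-linear segmented) regression, where the APC of a segment follows from the fitted slope b on the log scale as APC = 100·(e^b − 1). A minimal sketch with invented counts constructed to match the reported 8.1% outpatient trend:

```python
import math

# Synthetic annual patient-visit counts growing at exactly 8.1% per year.
years = list(range(2009, 2016))
visits = [100000 * (1.081 ** (y - 2009)) for y in years]

# Ordinary least-squares slope of log(visits) against year (pure Python).
xs = [y - 2009 for y in years]
ys = [math.log(v) for v in visits]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)

apc = 100 * (math.exp(b) - 1)  # annual percentage change for the segment
print(round(apc, 1))  # → 8.1
```

Join-point software additionally searches for change-points where the slope b (and hence the APC) shifts; the single-segment fit above is the building block.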

  9. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, G., E-mail: giuliana.rizzo@pi.infn.it [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Batignani, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Benkechkache, M.A. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); University Constantine 1, Department of Electronics in the Science and Technology Faculty, I-25017, Constantine (Algeria); Bettarini, S.; Casarosa, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Comotti, D. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Dalla Betta, G.-F. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); TIFPA INFN, I-38123 Trento (Italy); Fabris, L. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); Forti, F. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Grassi, M.; Lodola, L.; Malcovati, P. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Manghisoni, M. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); and others

    2016-07-11

    The INFN PixFEL project is developing the fundamental building blocks for a large-area X-ray imaging camera to be deployed at next-generation free electron laser (FEL) facilities with unprecedented intensity. Improvements in performance beyond the state of the art in imaging instrumentation will be explored by adopting advanced technologies such as active-edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large-area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine-pitch active-edge thick sensor is being optimized to cope with a very high-intensity photon flux, up to 10⁴ photons per pixel, in the range from 1 to 10 keV. A low-noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with low-power 10-bit analog-to-digital conversion up to 5 MHz, has been realized at a 110 μm pitch in a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high-density memories. In the long run, the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation both in burst mode, as at the European XFEL, and in continuous mode at the high frame rates anticipated for future FEL facilities.

  10. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    International Nuclear Information System (INIS)

    Rizzo, G.; Batignani, G.; Benkechkache, M.A.; Bettarini, S.; Casarosa, G.; Comotti, D.; Dalla Betta, G.-F.; Fabris, L.; Forti, F.; Grassi, M.; Lodola, L.; Malcovati, P.; Manghisoni, M.

    2016-01-01

    The INFN PixFEL project is developing the fundamental building blocks for a large-area X-ray imaging camera to be deployed at next-generation free electron laser (FEL) facilities with unprecedented intensity. Improvements in performance beyond the state of the art in imaging instrumentation will be explored by adopting advanced technologies such as active-edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large-area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine-pitch active-edge thick sensor is being optimized to cope with a very high-intensity photon flux, up to 10⁴ photons per pixel, in the range from 1 to 10 keV. A low-noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with low-power 10-bit analog-to-digital conversion up to 5 MHz, has been realized at a 110 μm pitch in a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high-density memories. In the long run, the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation both in burst mode, as at the European XFEL, and in continuous mode at the high frame rates anticipated for future FEL facilities.

  11. Identification of retinal ganglion cells and their projections involved in central transmission of information about upward and downward image motion.

    Directory of Open Access Journals (Sweden)

    Keisuke Yonehara

    The direction of image motion is coded by direction-selective (DS) ganglion cells in the retina. In particular, the ON DS ganglion cells project their axons specifically to terminal nuclei of the accessory optic system (AOS) responsible for the optokinetic reflex (OKR). We recently generated a knock-in mouse in which SPIG1 (SPARC-related protein containing immunoglobulin domains 1)-expressing cells are visualized with GFP, and found that retinal ganglion cells projecting to the medial terminal nucleus (MTN), the principal nucleus of the AOS, are comprised of SPIG1+ and SPIG1− ganglion cells distributed in distinct mosaic patterns in the retina. Here we examined light responses of these two subtypes of MTN-projecting cells by targeted electrophysiological recordings. SPIG1+ and SPIG1− ganglion cells respond preferentially to upward motion and downward motion, respectively, in the visual field. The direction selectivity of SPIG1+ ganglion cells develops normally in dark-reared mice. The MTN neurons are activated only by optokinetic stimuli of vertical motion, as shown by Fos expression analysis. A combination of genetic labeling and conventional retrograde labeling revealed that axons of SPIG1+ and SPIG1− ganglion cells project to the MTN via different pathways. The axon terminals of the two subtypes are organized into discrete clusters in the MTN. These results suggest that information about upward and downward image motion transmitted by distinct ON DS cells is separately, if not independently, processed in the MTN. Our findings provide insights into the neural mechanisms of the OKR and how information about the direction of image motion is deciphered by the AOS.

  12. Deployment of a Prototype Plant GFP Imager at the Arthur Clarke Mars Greenhouse of the Haughton Mars Project

    Directory of Open Access Journals (Sweden)

    Robert J. Ferl

    2008-04-01

    The use of engineered plants as biosensors has made elegant strides in the past decades, providing keen insights into the health of plants in general and particularly into the nature and cellular location of stress responses. However, most of the analytical procedures involve laboratory examination of the biosensor plants. With the advent of green fluorescent protein (GFP) as a biosensor molecule, it became at least theoretically possible for analyses of gene expression to occur telemetrically, with the gene expression information of the plant delivered to the investigator over large distances simply as properly processed fluorescence images. Spaceflight and other extraterrestrial environments provide unique challenges to plant life, challenges that often require changes at the gene expression level to accommodate adaptation and survival. Having previously deployed transgenic plant biosensors to evaluate responses to orbital spaceflight, we wished to develop the plants and especially the imaging devices required to conduct such experiments robotically, without operator intervention, within extraterrestrial environments. This requires the development of an autonomous and remotely operated plant GFP imaging system and concomitant development of the communications infrastructure to manage dataflow from the imaging device. Here we report the results of deploying a prototype GFP imaging system within the Arthur Clarke Mars Greenhouse (ACMG), an autonomously operated greenhouse located within the Haughton Mars Project in the Canadian High Arctic. The results both demonstrate the applicability of the fundamental GFP biosensor technology and highlight the difficulties in collecting and managing telemetric data from challenging deployment environments.

  13. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    Science.gov (United States)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2010-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object, such as an anatomical feature. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns at the anatomical feature; capturing an image reflected from the surface; and recovering pattern information from the reflected image for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern is distinct from each of the other signal waveforms used for the modulation of the other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping may be utilized in a host of applications, for example: displaying a 3-D view of the object; a virtual-reality user-interaction interface with a computerized device; face (or other animal feature or inanimate object) recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
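The core idea, recovering each structured light pattern from the composite by demodulating against its own carrier, can be sketched in one dimension. This is an illustrative toy, not the patent's implementation: the pixel values, carrier frequencies, and averaging low-pass below are all invented for the example, and rely on the carriers being mutually uncorrelated, as the abstract requires:

```python
import math

# Two "pattern" values at one pixel, modulated onto distinct cosine carriers
# and summed into a single composite signal.
N = 1000
t = [k / N for k in range(N)]
p1, p2 = 0.7, -0.3          # pattern intensities to recover
f1, f2 = 10, 25             # distinct (orthogonal) carrier frequencies
composite = [p1 * math.cos(2 * math.pi * f1 * u)
             + p2 * math.cos(2 * math.pi * f2 * u) for u in t]

def demodulate(sig, f):
    # Synchronous demodulation: multiply by the carrier, then average
    # (the average acts as the low-pass filter rejecting the other carrier).
    return 2.0 / N * sum(s * math.cos(2 * math.pi * f * u)
                         for s, u in zip(sig, t))

print(abs(demodulate(composite, f1) - p1) < 1e-6)  # → True
```

Because the carriers are orthogonal over the sampling window, each demodulation returns its own pattern value exactly and the cross-term averages to zero, which is why the patterns must be modulated with mutually uncorrelated waveforms.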

  14. 3D palmprint and hand imaging system based on full-field composite color sinusoidal fringe projection technique.

    Science.gov (United States)

    Zhang, Zonghua; Huang, Shujun; Xu, Yongjia; Chen, Chao; Zhao, Yan; Gao, Nan; Xiao, Yanjun

    2013-09-01

    Palmprint and hand shape, as two kinds of important biometric characteristics, have been widely studied and applied to human identity recognition. The existing research is based mainly on 2D images, which lose depth information. The biological features extracted from 2D images are distorted by pressure and rolling, so the subsequent feature matching and recognition are inaccurate. This paper presents a method to acquire accurate 3D shapes of the palmprint and hand, along with the corresponding color texture information, by projecting full-field composite color sinusoidal fringe patterns. A 3D imaging system is designed to capture and process the full-field composite color fringe patterns on the hand surface. Composite color fringe patterns having the optimum three fringe numbers are generated by software and projected onto the surface of the human hand by a digital light processing projector. From another viewpoint, a color CCD camera captures the deformed fringe patterns and saves them for postprocessing. After compensating for the cross talk and chromatic aberration between color channels, three fringe patterns are extracted from the three color channels of a captured composite color image. Wrapped phase information can be calculated from the sinusoidal fringe patterns with high precision, and the absolute phase of each pixel is determined by the optimum three-fringe selection method. After building up the relationship between the absolute phase map and 3D shape data, the 3D palmprint and hand shape are obtained. Color texture information can be directly captured or demodulated from the captured composite fringe pattern images. Experimental results show that the proposed method and system can yield accurate 3D shape and color texture information of the palmprint and hand.
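As background for the phase-calculation step, the standard three-step phase-shifting formula (phase shifts of 2π/3) recovers the wrapped phase at a pixel from three fringe intensities; the paper's absolute-phase recovery via the optimum three-fringe selection method is a separate step not shown here. A minimal sketch on a single simulated pixel:

```python
import math

def wrapped_phase(i1, i2, i3):
    # Three-step phase-shifting formula for shifts of 2*pi/3:
    # tan(phi) = sqrt(3) * (I2 - I3) / (2*I1 - I2 - I3)
    return math.atan2(math.sqrt(3.0) * (i2 - i3), 2.0 * i1 - i2 - i3)

# Simulate one pixel: I_k = A + B * cos(phi - k * 2*pi/3), k = 0, 1, 2
A, B, phi = 0.5, 0.4, 1.2
I = [A + B * math.cos(phi - k * 2 * math.pi / 3) for k in range(3)]

print(abs(wrapped_phase(*I) - phi) < 1e-9)  # → True
```

The recovered value is wrapped into (−π, π]; combining wrapped phase maps from fringe patterns with different (optimally chosen) fringe numbers is what lets the method assign an unambiguous absolute phase, and hence a depth, to each pixel.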

  15. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    International Nuclear Information System (INIS)

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR
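Definitions of SNR and contrast-to-noise ratio vary between studies; a common region-of-interest formulation, sketched here on synthetic data rather than with this paper's exact definitions, is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "reconstruction": uniform object (value 1) on a zero background,
# plus Gaussian noise standing in for reconstruction noise/streaking.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img += rng.normal(0.0, 0.1, img.shape)

roi = img[20:44, 20:44]   # patch inside the object
bg = img[0:12, 0:12]      # patch in the background

snr = roi.mean() / roi.std()                 # signal-to-noise ratio
cnr = (roi.mean() - bg.mean()) / bg.std()    # contrast-to-noise ratio
print(snr > 5 and cnr > 5)  # → True
```

Edge-response width, the paper's blur metric, is measured differently: by fitting the intensity profile across an object boundary and reporting the distance over which it rises.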

  16. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    Energy Technology Data Exchange (ETDEWEB)

    Shieh, Chun-Chien [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006, Australia and Institute of Medical Physics, School of Physics, University of Sydney, NSW 2006 (Australia); Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Kuncic, Zdenka [Institute of Medical Physics, School of Physics, University of Sydney, NSW 2006 (Australia)

    2014-04-15

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  17. Thin Silicon Detector Technology for Use in Imaging Solar ENAs Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Top Level Objective: To enable capabilities for imaging and spectral measurements of energetic neutral hydrogen atoms (ENAs) produced with energies ∼1MeV/nuc in...

  18. High Spectral Resolution, High Cadence, Imaging X-ray Microcalorimeters for Solar Physics - Phase 2 Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Microcalorimeter x-ray instruments are non-dispersive, high spectral resolution, broad-band, high cadence imaging spectrometers. We have been developing these...

  19. US Participation in the Solar Orbiter Multi Element Telescope for Imaging and Spectroscopy (METIS) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Multi Element Telescope for Imaging and Spectroscopy, METIS, investigation has been conceived to perform off-limb and near-Sun coronagraphy and is motivated by...

  20. Basic methodology of tomographic imaging by filtered back projection at a turbo-pump. Project report

    Energy Technology Data Exchange (ETDEWEB)

    Hoppe, D.

    2000-11-01

    A two-phase medium, a liquid containing gas, is transported in an axial turbo-pump by a propeller-like impeller, perpendicular to the impeller's axis of rotation. The interaction between the gaseous phase and the impeller is to be examined by gamma-ray tomography. The image of the object is to be reconstructed by filtered back projection. The methodology of using this principle under the given geometric and measuring conditions is explained. (orig./CB)
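The filtered back projection principle referred to above is compact enough to sketch. The following is a minimal parallel-beam version in Python/NumPy; the plain ramp filter and nearest-neighbour backprojection are simplifying assumptions, not details of the report:

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Minimal parallel-beam filtered back projection.
    sinogram: (n_angles, n_det); returns an (n_det, n_det) image."""
    n_angles, n_det = sinogram.shape
    # 1) Filter each projection with a ramp (Ram-Lak) filter in Fourier space
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # 2) Back-project: smear each filtered projection across the image grid
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = np.round(X * np.cos(theta) + Y * np.sin(theta)).astype(int) + mid
        inside = (t >= 0) & (t < n_det)
        image[inside] += proj[t[inside]]
    return image * np.pi / n_angles

# Demo: reconstruct a centred disk (radius 10) from 60 analytic projections,
# using the known projection of a disk, 2*sqrt(r^2 - t^2)
t_axis = np.arange(65) - 32.0
sino = np.tile(2.0 * np.sqrt(np.clip(100.0 - t_axis**2, 0.0, None)), (60, 1))
img = fbp(sino, np.linspace(0.0, 180.0, 60, endpoint=False))
```

The reconstructed values inside the disk come out well above the background, which is the qualitative behaviour the report relies on.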

  1. Virtual autopsy using imaging: bridging radiologic and forensic sciences. A review of the Virtopsy and similar projects

    International Nuclear Information System (INIS)

    Bolliger, Stephan A.; Thali, Michael J.; Ross, Steffen; Buck, Ursula; Naether, Silvio; Vock, Peter

    2008-01-01

    The transdisciplinary research project Virtopsy is dedicated to implementing modern imaging techniques into forensic medicine and pathology in order to augment current examination techniques or even to offer alternative methods. Our project relies on three pillars: three-dimensional (3D) surface scanning for the documentation of body surfaces, and both multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) to visualise the internal body. Three-dimensional surface scanning has delivered remarkable results in the past in the 3D documentation of patterned injuries and of objects of forensic interest as well as whole crime scenes. Imaging of the interior of corpses is performed using MSCT and/or MRI. MRI, in addition, is also well suited to the examination of surviving victims of assault, especially choking, and helps visualise internal injuries not seen at external examination of the victim. Apart from the accuracy and three-dimensionality that conventional documentation lacks, these techniques allow for the re-examination of the corpse and the crime scene even decades later, after burial of the corpse and release of the crime scene. We believe that this virtual, non-invasive or minimally invasive approach will improve forensic medicine in the near future. (orig.)

  2. The Mammographic Head Demonstrator Developed in the Framework of the “IMI” Project:. First Imaging Tests Results

    Science.gov (United States)

    Bisogni, Maria Giuseppina

    2006-04-01

    In this paper we report on the performance and the first imaging test results of a digital mammographic demonstrator based on GaAs pixel detectors. The heart of this prototype is the X-ray detection unit, a GaAs pixel sensor read out by the PCC/MEDIPIXI circuit. Since the active area of the sensor is 1 cm2, 18 detectors have been organized in two staggered rows of nine chips each. To cover the typical mammographic format (18 × 24 cm2), a linear scan is performed by means of a stepper motor. The system is integrated in mammographic equipment comprising the X-ray tube, the bias and data acquisition systems, and the PC-based control system. The prototype has been developed in the framework of the Integrated Mammographic Imaging (IMI) project, an industrial research activity aiming to develop innovative instrumentation for morphologic and functional imaging. The project has been supported by the Italian Ministry of Education, University and Research (MIUR) and by five Italian high-tech companies in collaboration with the universities of Ferrara, Roma “La Sapienza”, Pisa and the INFN.

  3. Examination of imaging detectors for combined radiography procedures in the ACCIS joint project. Automatic cargo container inspection system. Final report

    International Nuclear Information System (INIS)

    Dangendorf, Volker

    2014-01-01

    Currently used air cargo screening systems are based on X-rays from bremsstrahlung generators. This makes it difficult to distinguish different substances composed of light elements of approximately the same density; e.g., the image contrast between explosives or drugs and harmless organic substances, such as plastic parts or foodstuffs, is low and requires extensive follow-up investigations. On the other hand, with X-ray methods the image contrast is also low for heavy elements, e.g., Special Nuclear Materials (SNM) such as Pu and U, which can moreover be camouflaged in a lead container and mixed with goods made of other heavy metals, making detection very difficult. Within the framework of the ACCIS Collaborative Project, a new inspection system for airfreight based on neutron and gamma irradiation was researched. Within this framework, the PTB subproject covered the following tasks: 1. Research and development of laboratory prototypes of imaging radiation detectors; 2. Development of a measuring station for the evaluation of the screening method at the PTB accelerator system; 3. Cooperation in the development of a concept for a pulsed radiation source, in particular design and investigation of the beam-producing target; 4. Determination of the physical and dosimetric parameters relevant to radiation protection; 5. Examination of the conditions of application, requirements for operational facilities, end-user contacts; 6. Coordination of the German partners, in particular organization of the project meetings of the German and Israeli partners. [de

  4. Reduce blurring and distortion in a projection type virtual image display using integrated small optics

    Science.gov (United States)

    Hasegawa, Tatsuya; Yendo, Tomohiro

    2015-03-01

    Head-up displays (HUDs) are being applied to automobiles. A HUD presents information as a distant virtual image on the windshield. Existing HUDs usually display planar information. If imagery registered to the road scene, as in augmented reality (AR), were displayed on the HUD, the driver could obtain information more efficiently. To realize this, a HUD covering a large viewing field is needed, which existing HUDs cannot provide. We have therefore proposed a system consisting of a projector and many small-diameter convex lenses. However, the observed virtual image suffers from blurring and distortion. In this paper, we propose two methods to reduce them. First, to reduce blurring, the distance between the screen and each lens in the lens array is adjusted; since lenses farther from the center of the array produce more blur, we infer that the cause of blurring is the curvature of field of the lenses in the array. Second, to avoid distortion, each lens in the array is curved spherically; since lenses farther from the center of the array produce more distortion, we infer that the cause of distortion is the incidence angle of the rays. We confirmed the effectiveness of both methods.

  5. Reconstruction of tomographic images from projections of a small number of views by means of mathematical programming

    International Nuclear Information System (INIS)

    Kobayashi, Fujio; Yamaguchi, Shoichiro

    1985-01-01

    Fundamental studies have been made on the application of mathematical programming to the reconstruction of tomographic images from projections of a small number of views, without requiring any circular symmetry or periodicity. Linear programming and quadratic programming were applied to minimize the quadratic sum of the residuals and finally obtain optimized reconstructed images. The mathematical algorithms were verified by computer simulation, and the relationship between the number of picture elements and the number of iterations necessary for convergence was also investigated. The methods of linear programming and quadratic programming require fairly simple mathematical procedures, and strict solutions can be obtained within a finite number of iterations. Their only drawback is that they require a large amount of computer memory, but this problem will be resolved by the advent of large, fast memory devices in the near future. (Aoki, K.)
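As a minimal illustration of reconstruction by minimizing the quadratic sum of the residuals, the sketch below solves a tiny two-view problem by least squares. The paper used linear and quadratic programming solvers; `np.linalg.lstsq` and the 2x2 toy geometry are stand-ins for illustration only:

```python
import numpy as np

# Unknown 2x2 image flattened as x = (x00, x01, x10, x11); two "views":
# horizontal ray sums (rows) and vertical ray sums (columns).
A = np.array([[1, 1, 0, 0],    # row 0
              [0, 0, 1, 1],    # row 1
              [1, 0, 1, 0],    # column 0
              [0, 1, 0, 1]],   # column 1
             dtype=float)
true_image = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ true_image

# Minimize the quadratic sum of residuals ||Ax - b||^2. With so few views
# the system is rank-deficient, so lstsq returns the minimum-norm solution
# among all images that fit the projections exactly.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = float(np.linalg.norm(A @ x - b))
```

The rank deficiency is exactly the "small number of views" problem: several images fit the data, and the optimization picks one by an explicit criterion.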

  6. MR tractography; Visualization of structure of nerve fiber system from diffusion weighted images with maximum intensity projection method

    Energy Technology Data Exchange (ETDEWEB)

    Kinosada, Yasutomi; Okuda, Yasuyuki (Mie Univ., Tsu (Japan). School of Medicine); Ono, Mototsugu (and others)

    1993-02-01

    We developed a new noninvasive technique to visualize the anatomical structure of the nerve fiber system in vivo, and named this technique magnetic resonance (MR) tractography and the acquired image an MR tractogram. MR tractography has two steps. One is to obtain diffusion-weighted images sensitized along axes appropriate for depicting the intended nerve fibers with anisotropic water diffusion MR imaging. The other is to extract the anatomical structure of the nerve fiber system from a series of diffusion-weighted images by the maximum intensity projection method. To examine the clinical usefulness of the proposed technique, many contiguous, thin (3 mm) coronal two-dimensional sections of the brain were acquired sequentially in normal volunteers and selected patients with paralyses, on a 1.5 Tesla MR system (Signa, GE) with an ECG-gated Stejskal-Tanner pulse sequence. The structure of the nerve fiber system of normal volunteers was almost the same as the anatomy. The tractograms of patients with paralyses clearly showed the degeneration of nerve fibers and were correlated with clinical symptoms. MR tractography showed great promise for the study of neuroanatomy and neuroradiology. (author).
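The maximum intensity projection step used to extract the fiber structure is essentially a one-liner over the stack of diffusion-weighted images; a toy sketch (array shapes and values are illustrative):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of an image stack along one axis."""
    return volume.max(axis=axis)

# Toy stack of 4 diffusion-weighted slices with one bright "fiber" voxel
stack = np.zeros((4, 5, 5))
stack[2, 1, 3] = 9.0
projection = mip(stack, axis=0)
```

Voxels that are bright in any slice survive into the projection, which is why hyperintense fiber tracts remain visible after the stack is collapsed.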

  7. The Stellar Imager (SI) Project: Resolving Stellar Surfaces, Interiors, and Magnetic Activity

    Science.gov (United States)

    Carpenter, Kenneth G.; Schrijver, K.; Karovska, M.

    2007-01-01

    The Stellar Imager (SI) is a UV/optical, space-based interferometer designed to enable 0.1 milli-arcsec (mas) spectral imaging of stellar surfaces and, via asteroseismology, of stellar interiors, and of the Universe in general. The ultra-sharp images of SI will revolutionize our view of many dynamic astrophysical processes by transforming point sources into extended sources, and snapshots into evolving views. The science of SI focuses on the role of magnetism in the Universe, particularly on magnetic activity on the surfaces of stars like the Sun. Its prime goal is to enable long-term forecasting of solar activity and the space weather that it drives. SI will also revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magneto-hydrodynamically controlled processes in the Universe. In this paper we discuss the science goals, technology needs, and baseline design of the SI mission.

  8. The Age-ility Project (Phase 1): Structural and functional imaging and electrophysiological data repository

    NARCIS (Netherlands)

    Karayanidis, F.; Keuken, M.C.; Wong, A.; Rennie, J.L.; de Hollander, G.; Cooper, P.S.; Fulham, W.R.; Lenroot, R.; Parsons, M.; Philips, N.; Michie, P.T.; Forstmann, B.U.

    2015-01-01

    Our understanding of the complex interplay between structural and functional organisation of brain networks is being advanced by the development of novel multi-modal analyses approaches. The Age-ility Project (Phase 1) data repository offers open access to structural MRI, diffusion MRI, and

  9. Seismic calibration shots conducted in 2009 in the Imperial Valley, southern California, for the Salton Seismic Imaging Project (SSIP)

    Science.gov (United States)

    Murphy, Janice; Goldman, Mark; Fuis, Gary; Rymer, Michael; Sickler, Robert; Miller, Summer; Butcher, Lesley; Ricketts, Jason; Criley, Coyn; Stock, Joann; Hole, John; Chavez, Greg

    2011-01-01

    Rupture of the southern section of the San Andreas Fault, from the Coachella Valley to the Mojave Desert, is believed to be the greatest natural hazard facing California in the near future. With an estimated magnitude between 7.2 and 8.1, such an event would result in violent shaking, loss of life, and disruption of lifelines (freeways, aqueducts, power, petroleum, and communication lines) that would bring much of southern California to a standstill. As part of the Nation's efforts to prevent a catastrophe of this magnitude, a number of projects are underway to increase our knowledge of Earth processes in the area and to mitigate the effects of such an event. One such project is the Salton Seismic Imaging Project (SSIP), which is a collaborative venture between the United States Geological Survey (USGS), California Institute of Technology (Caltech), and Virginia Polytechnic Institute and State University (Virginia Tech). This project will generate and record seismic waves that travel through the crust and upper mantle of the Salton Trough. With these data, we will construct seismic images of the subsurface, both reflection and tomographic images. These images will contribute to the earthquake-hazard assessment in southern California by helping to constrain fault locations, sedimentary basin thickness and geometry, and sedimentary seismic velocity distributions. Data acquisition is currently scheduled for winter and spring of 2011. The design and goals of SSIP resemble those of the Los Angeles Region Seismic Experiment (LARSE) of the 1990's. LARSE focused on examining the San Andreas Fault system and associated thrust-fault systems of the Transverse Ranges. LARSE was successful in constraining the geometry of the San Andreas Fault at depth and in relating this geometry to mid-crustal, flower-structure-like decollements in the Transverse Ranges that splay upward into the network of hazardous thrust faults that caused the 1971 M 6.7 San Fernando and 1987 M 5

  10. Neutron imaging: A non-destructive tool for materials testing. Report of a coordinated research project 2003-2006

    International Nuclear Information System (INIS)

    2008-09-01

    Enhancing the utilization of research reactors is one of the major objectives of the IAEA's project on Effective Utilization of Research Reactors; in particular, the improvement of existing installations for neutron imaging and the effective utilization of such facilities are intended. From the experience of Type A facilities, it is obvious that some investment is required to move from simple neutron imaging methods (film, track-etch foils) to more advanced ones. Relative to the installation and operation of the whole reactor system, the investment for an imaging device is minor. Compared to installations for neutron scattering research, neutron imaging systems are also relatively cheap, yet very efficient in their use of neutrons. Therefore, one of the aims of the CRP was to look for solutions adapted to the individual reactor installation and beam line. Specific research objectives: to optimize the neutron beams for imaging purposes using modern simulation techniques; to enhance the beam intensity using modern layout principles and neutron optics, such as focusing, beam guides, and filters; to develop a standardized, low-cost neutron image grabber and analyzer for efficient data collection that can be used with low-intensity sources; and to improve signal processing techniques used in neutron imaging applications. Expected research outputs: neutron radiography is used at research reactor centres in many Member States, but the facilities are not optimized for attractive potential applications, a fact brought out at various discussion meetings. The CRP is aimed at improving the design of beam lines in terms of neutron collimation and intensity. Improvements in resolution are normally achieved at a cost in intensity, so an instrument exhibiting good resolution needs a fast counting system; it is proposed to work along these lines to develop an optimised detection system. Many facilities, at present, have small CCD

  11. MO-FG-204-08: Optimization-Based Image Reconstruction From Unevenly Distributed Sparse Projection Views

    International Nuclear Information System (INIS)

    Xie, Huiqiao; Yang, Yi; Tang, Xiangyang; Niu, Tianye; Ren, Yi

    2015-01-01

    Purpose: Optimization-based reconstruction has been proposed and investigated for reconstructing CT images from sparse views, as the radiation dose can thereby be substantially reduced while maintaining acceptable image quality. The investigation has so far focused on reconstruction from evenly distributed sparse views. Recognizing the clinical situations wherein only unevenly distributed sparse views are available, e.g., image guided radiation therapy, CT perfusion and multi-cycle cardiovascular imaging, we investigate the performance of optimization-based image reconstruction from unevenly distributed projection views in this work. Methods: The investigation is carried out using the FORBILD phantom and an anthropomorphic head phantom. In the study, 82 views, evenly selected from a full (360°) axial CT scan consisting of 984 views, form sub-scan I. Another 82 views are selected in a similar manner to form sub-scan II. As such, a CT scan with sparse (164) views at a 1:6 ratio is formed. By shifting the two sub-scans relative to each other in view angulation, a CT scan with unevenly distributed sparse (164) views at a 1:6 ratio is formed. An optimization-based method is implemented to reconstruct images from the unevenly distributed views. Taking the FBP reconstruction from the full scan (984 views) as the reference, the root mean square (RMS) difference between the reference and the optimization-based reconstruction is used to evaluate the performance quantitatively. Results: On visual inspection, the optimization-based method substantially outperforms FBP in the reconstruction from unevenly distributed views, as is quantitatively verified by the RMS gauged globally and in ROIs in both the FORBILD and anthropomorphic head phantoms. The RMS increases with increasing severity of the uneven angular distribution, especially in the case of the anthropomorphic head phantom. 
Conclusion: The optimization-based image reconstruction can save radiation dose up to 12-fold while providing acceptable image quality
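A toy flavour of an optimization-based reconstruction with convex constraints can be sketched by alternating projections onto the measurement hyperplanes (Kaczmarz/ART steps) and onto a nonnegativity set; this POCS-style loop is an illustrative stand-in, not the algorithm evaluated in the abstract:

```python
import numpy as np

def pocs_reconstruct(A, b, n_iter=300):
    """Toy POCS-style reconstruction: sweep Kaczmarz (ART) projections onto
    each measurement hyperplane, then project onto the nonnegativity set."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i   # hyperplane projection
        x = np.maximum(x, 0.0)                         # convex constraint
    return x

# Demo: 6 "rays" through 4 unknowns, consistent nonnegative ground truth
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))
truth = np.array([0.5, 0.0, 1.0, 2.0])
x_hat = pocs_reconstruct(A, A @ truth)
```

Because the constraint sets share a common point (the true image), the alternating projections converge to it; at CT scale the same structure is applied to millions of unknowns.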

  12. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    OpenAIRE

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An inv...

  13. Induced Polarization with Electromagnetic Coupling: 3D Spectral Imaging Theory, EMSP Project No. 73836

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, F. Dale; Sogade, John

    2004-12-14

    This project was designed as a broad foundational study of spectral induced polarization (SIP) for characterization of contaminated sites. It encompassed laboratory studies of the effects of chemistry on induced polarization, development of 3D forward modeling and inversion codes, and investigations of inductive and capacitive coupling problems. In the laboratory part of the project, a physico-chemical model developed in this project was used to invert laboratory IP spectra for the grain size and the effective grain size distribution of the sedimentary rocks as well as the formation factor, porosity, specific surface area, and the apparent fractal dimension. Furthermore, it was established that the IP response changed with the solution chemistry, the concentration of a given solution chemistry, the valence of the constituent ions, and the ionic radius. In the field part of the project, a 3D complex forward and inverse model was developed. It was used to process data acquired at two frequencies (1/16 Hz and 1/4 Hz) in a cross-borehole configuration at the A-14 outfall area of the Savannah River Site (SRS) during March 2003 and June 2004. The chosen SRS site was contaminated with tetrachloroethylene (PCE) and trichloroethylene (TCE), which were disposed of in this area for several decades until the 1980s. The imaginary conductivity produced from the inverted 2003 data correlated very well with the log10(PCE) concentration derived from point sampling at 1 ft spacing in five ground-truth boreholes drilled after the data acquisition. The equivalent result for the 2004 data revealed that there were significant contaminant movements between March 2003 and June 2004, probably related to ground-truth activities and nearby remediation activities. Therefore, SIP was successfully used to develop conceptual models of volume distributions of PCE/TCE contamination. In addition, the project developed non-polarizing electrodes that can be deployed in boreholes for years. 
A total of 28

  14. Digital image management project for dermatological health care environments: a new dedicated software and review of the literature.

    Science.gov (United States)

    Rubegni, Pietro; Nami, Niccolò; Poggiali, Sara; Tataranno, Domenico; Fimiani, M

    2009-05-01

    Because the skin is the only organ completely accessible to visual examination, digital technology has attracted the attention of dermatologists for documenting, monitoring, measuring and classifying morphological manifestations. Our aims were to describe a digital image management system dedicated to dermatological health care environments and to compare it with other existing software for digital image storage. We designed a reliable hardware structure that could ensure future scaling, because storage needs tend to grow exponentially. For the software, we chose a client-web server application based on a relational database and with a 'minimalist' user interface. We developed software with a ready-made, adaptable index of skin pathologies. It facilitates classification by pathology, patient and visit, with an advanced search option allowing access to all images according to personalized criteria. The software also offers the possibility of comparing two or more digital images (follow-up). The fact that archives of years of digital photos acquired and saved on PCs can easily be imported into the program distinguishes it from the others on the market; this option is fundamental for accessing all the photos taken in years of practice without entering them one by one. The program is available to any user connected to the local Intranet, and the system may in the future be made directly available from the Internet. All clinics and surgeries, especially those that rely on digital images, are obliged to keep up with technological advances. It is therefore hoped that our project will become a model for medical structures intending to rationalise digital and other data according to statutory requirements.
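A hypothetical minimal relational schema for the pathology/patient/visit classification described above might look like the following SQLite sketch; all table and column names are invented for illustration, not taken from the system:

```python
import sqlite3

# In-memory database with the three classification axes the abstract names:
# pathology, patient, and visit, each image tied to a visit and a pathology.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE pathology (id INTEGER PRIMARY KEY, label TEXT UNIQUE);
CREATE TABLE visit     (id INTEGER PRIMARY KEY,
                        patient_id INTEGER REFERENCES patient(id),
                        visit_date TEXT);
CREATE TABLE image     (id INTEGER PRIMARY KEY,
                        visit_id INTEGER REFERENCES visit(id),
                        pathology_id INTEGER REFERENCES pathology(id),
                        path TEXT);
""")
conn.execute("INSERT INTO patient VALUES (1, 'Demo Patient')")
conn.execute("INSERT INTO pathology VALUES (1, 'melanocytic nevus')")
conn.execute("INSERT INTO visit VALUES (1, 1, '2009-01-15')")
conn.execute("INSERT INTO visit VALUES (2, 1, '2009-04-20')")
conn.execute("INSERT INTO image VALUES (1, 1, 1, 'img/0001.jpg')")
conn.execute("INSERT INTO image VALUES (2, 2, 1, 'img/0002.jpg')")

# Follow-up query: every image of one pathology for one patient, in visit order
rows = conn.execute("""
    SELECT image.path FROM image
    JOIN visit ON visit.id = image.visit_id
    JOIN pathology ON pathology.id = image.pathology_id
    WHERE visit.patient_id = 1 AND pathology.label = 'melanocytic nevus'
    ORDER BY visit.visit_date
""").fetchall()
```

Ordering by visit date is what makes the follow-up comparison of two or more images of the same lesion a single query.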

  15. A semi-symmetric image encryption scheme based on the function projective synchronization of two hyperchaotic systems.

    Directory of Open Access Journals (Sweden)

    Xiaoqiang Di

    Both symmetric and asymmetric color image encryption have advantages and disadvantages. In order to combine their advantages and try to overcome their disadvantages, chaos synchronization is used to avoid key transmission in the proposed semi-symmetric image encryption scheme. Our scheme is a hybrid chaotic encryption algorithm consisting of a scrambling stage and a diffusion stage. The control law and the update rule for function projective synchronization between the 3-cell quantum cellular neural network (QCNN) response system and the 6th-order cellular neural network (CNN) drive system are formulated. Since function projective synchronization is used to synchronize the response and drive systems, Alice and Bob obtain the key independently from two different chaotic systems, avoiding key transmission over extra security links, which prevents security key leakage during transmission. Both numerical simulations and security analyses, such as information entropy analysis and differential attack analysis, are conducted to verify the feasibility, security, and efficiency of the proposed scheme.
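The scrambling-plus-diffusion structure can be illustrated with a toy stand-in: here a logistic map replaces the synchronized QCNN/CNN hyperchaotic systems, a keystream-derived permutation does the scrambling, and an XOR mask stands in for the diffusion stage (the real scheme's diffusion chains pixels; everything here is a simplified assumption):

```python
import numpy as np

def logistic_stream(seed, n, r=3.99):
    """Toy keystream from a logistic map, standing in for the synchronized
    QCNN/CNN hyperchaotic trajectories of the actual scheme."""
    x, out = seed, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def encrypt(img, seed):
    flat = img.ravel()
    ks = logistic_stream(seed, flat.size)
    perm = np.argsort(ks)                 # scrambling stage: permute pixel order
    key = (ks * 256).astype(np.uint8)
    return (flat[perm] ^ key).reshape(img.shape)  # mask (stand-in for diffusion)

def decrypt(cipher, seed):
    flat = cipher.ravel()
    ks = logistic_stream(seed, flat.size)  # both sides regenerate the keystream
    perm = np.argsort(ks)
    key = (ks * 256).astype(np.uint8)
    scrambled = flat ^ key
    out = np.empty_like(scrambled)
    out[perm] = scrambled                  # invert the permutation
    return out.reshape(cipher.shape)

# Demo on a tiny 4x4 8-bit "image"
img = (np.arange(16, dtype=np.uint8) * 13).reshape(4, 4)
cipher = encrypt(img, 0.3141)
restored = decrypt(cipher, 0.3141)
```

Because both sides derive the permutation and the mask from the same chaotic trajectory, no key material ever has to be transmitted, which is the point of the synchronization-based design.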

  16. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-02-01

    To address the inaccuracy of estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation from low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality; therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important in order to estimate the PSF of the HR image from multiple LR images. In this study, a linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which improves the accuracy of the PSF estimation for LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image in the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality reconstructed images than the blind SR method and the bicubic interpolation method.
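The knife-edge idea behind the PSF estimation can be sketched in one dimension: differentiating the measured edge-spread function yields the line-spread function, whose width estimates the PSF width. The paper's slanted-edge refinement for sub-pixel sampling is omitted in this sketch:

```python
import math
import numpy as np

def lsf_from_edge(esf):
    """Differentiate the edge-spread function (ESF) to get the line-spread
    function (LSF), a 1D slice of the PSF - the core of knife-edge PSF
    estimation."""
    return np.diff(esf)

# Toy edge: an ideal step blurred by a Gaussian PSF with sigma = 2 px,
# so the ESF is the Gaussian's cumulative distribution (an erf curve)
x = np.arange(-20, 21, dtype=float)
sigma = 2.0
esf = np.array([0.5 * (1.0 + math.erf(v / (sigma * math.sqrt(2.0)))) for v in x])
lsf = lsf_from_edge(esf)

# Second moment of the LSF recovers the blur width (up to discretization)
centers = x[:-1] + 0.5
est_sigma = math.sqrt(np.sum(lsf * centers**2) / np.sum(lsf))
```

The recovered width is close to the true sigma; slanting the edge, as in the paper, super-samples the ESF and reduces the residual discretization bias.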

  17. Methods of X-ray CT image reconstruction from few projections

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H.

    2011-10-24

    To improve the safety (low dose) and the productivity (fast acquisition) of an X-ray CT system, we want to reconstruct a high quality image from a small number of projections. The classical reconstruction algorithms generally fail since the reconstruction procedure is unstable and suffers from artifacts. A new approach based on the recently developed 'Compressed Sensing' (CS) theory assumes that the unknown image is in some sense 'sparse' or 'compressible', and the reconstruction is formulated through a non-linear optimization problem (TV/l1 minimization) that enhances the sparsity. Using the pixel (or voxel in 3D) as basis, to apply the CS framework in CT one usually needs a 'sparsifying' transform, combined with the 'X-ray projector' which applies to the pixel image. In this thesis, we have adapted a 'CT-friendly' radial basis of the Gaussian family, called 'blob', to the CS-CT framework. The blob has better space-frequency localization properties than the pixel, and many operations, such as the X-ray transform, can be evaluated analytically and are highly parallelizable (on a GPU platform). Compared to the classical Kaiser-Bessel blob, the new basis has a multi-scale structure: an image is the sum of dilated and translated radial Mexican hat functions. Typical medical objects are compressible under this basis, so the sparse representation system used in ordinary CS algorithms is no longer needed. 2D simulations show that the existing TV and l1 algorithms are more efficient and the reconstructions have better visual quality than the equivalent approach based on the pixel or wavelet basis. The new approach has also been validated on 2D experimental data, where we have observed that in general the number of projections can be reduced to about 50% without compromising the image quality. (author)
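The sparsity-enhancing l1 minimization at the core of the CS formulation can be illustrated with plain iterative soft-thresholding (ISTA) on a random system; this sketch uses an identity sparsifying basis rather than the thesis's TV or blob representations, and the sizes and penalty are arbitrary:

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1,
    the kind of l1 minimization that CS reconstruction relies on."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# Demo: recover a 3-sparse signal from 40 random "projections" of 80 unknowns
rng = np.random.default_rng(7)
A = rng.normal(size=(40, 80)) / np.sqrt(40)
truth = np.zeros(80)
truth[[5, 33, 60]] = [1.5, -2.0, 1.0]
x_hat = ista(A, A @ truth)
```

Even with half as many measurements as unknowns, the shrinkage step drives the non-support coefficients to zero, which is the same mechanism that lets the thesis halve the number of CT projections.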

  18. Magnetic resonance imaging: project planning and management of a superconductive M.R.I. installation

    International Nuclear Information System (INIS)

    Condon, P.M.; Robertson, A.R.

    1989-01-01

    The planning and installation of a Superconductive Magnetic Resonance Imaging installation at the Royal Adelaide Hospital, Adelaide, South Australia is described. Tender specification, assessment of offers via criteria weighted analysis of technical and economic factors and the final recommendation for a 1.0 Tesla unit are discussed. Building and installation considerations are noted including fringe field effects, magnetic shielding, radiofrequency shielding, cryogens, metallic screening and specific considerations in the Magnet room. 9 refs., 7 figs

  19. MAD ADAPTIVE OPTICS IMAGING OF HIGH-LUMINOSITY QUASARS: A PILOT PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    Liuzzo, E. [Osservatorio di Radioastronomia, INAF, via Gobetti 101, I-40129 Bologna (Italy); Falomo, R.; Paiano, S.; Baruffolo, A.; Farinato, J.; Moretti, A.; Ragazzoni, R. [Osservatorio Astronomico di Padova, INAF, vicolo dell’Osservatorio 5, I-35122 Padova (Italy); Treves, A. [Università dell’Insubria (Como) (Italy); Uslenghi, M. [INAF-IASF, via E. Bassini 15, I-20133 Milano (Italy); Arcidiacono, C.; Diolaiti, E.; Lombini, M. [Osservatorio Astronomico di Bologna, INAF, Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Brast, R. [Dipartimento di Fisica e Astronomia, Università di Bologna, Via Irnerio, 46, I-40126, Bologna (Italy); Donaldson, R.; Kolb, J.; Marchetti, E.; Tordo, S., E-mail: liuzzo@ira.inaf.it [European Southern Observatory, Karl-Schwarschild-Strasse 2, D-85748 Garching bei München (Germany)

    2016-08-01

    We present near-IR images of five luminous quasars at z ∼ 2 and one at z ∼ 4 obtained with an experimental adaptive optics (AO) instrument at the European Southern Observatory Very Large Telescope. The observations are part of a program aimed at demonstrating the capabilities of multi-conjugated adaptive optics imaging combined with the use of natural guide stars for high spatial resolution studies on large telescopes. The observations were mostly obtained under poor seeing conditions but in two cases. In spite of these nonoptimal conditions, the resulting images of point sources have cores of FWHM ∼ 0.2 arcsec. We are able to characterize the host galaxy properties for two sources and set stringent upper limits to the galaxy luminosity for the others. We also report on the expected capabilities for investigating the host galaxies of distant quasars with AO systems coupled with future Extremely Large Telescopes. Detailed simulations show that it will be possible to characterize compact (2–3 kpc) quasar host galaxies for quasi-stellar objects at z = 2 with nucleus K -magnitude spanning from 15 to 20 (corresponding to absolute magnitude −31 to −26) and host galaxies that are 4 mag fainter than their nuclei.

  20. Specific NIST projects in support of the NIJ Concealed Weapon Detection and Imaging Program

    Science.gov (United States)

    Paulter, Nicholas G.

    1998-12-01

    The Electricity Division of the National Institute of Standards and Technology is developing revised performance standards for hand-held (HH) and walk-through (WT) metal weapon detectors, test procedures and systems for these detectors, and a detection/imaging system for finding concealed weapons. The revised standards will replace the existing National Institute of Justice (NIJ) standards for HH and WT devices and will include detection performance specifications as well as system specifications (environmental conditions, mechanical strength and safety, response reproducibility and repeatability, quality assurance, test reporting, etc.). These system requirements were obtained from the Law Enforcement and Corrections Technology Advisory Council, an advisory council for the NIJ. Reproducible and repeatable test procedures and appropriate measurement systems will be developed for evaluating HH and WT detection performance. A guide to the technology and application of non-eddy-current-based detection/imaging methods (such as acoustic, passive millimeter-wave and microwave, active millimeter-wave and terahertz-wave, x-ray, etc.) will be developed. The Electricity Division is also researching the development of a high-frequency/high-speed (300 GHz to 1 THz) pulse-illuminated, stand-off, video-rate, concealed weapons/contraband imaging system.

  1. MAD Adaptive Optics Imaging of High-luminosity Quasars: A Pilot Project

    Science.gov (United States)

    Liuzzo, E.; Falomo, R.; Paiano, S.; Treves, A.; Uslenghi, M.; Arcidiacono, C.; Baruffolo, A.; Diolaiti, E.; Farinato, J.; Lombini, M.; Moretti, A.; Ragazzoni, R.; Brast, R.; Donaldson, R.; Kolb, J.; Marchetti, E.; Tordo, S.

    2016-08-01

    We present near-IR images of five luminous quasars at z ˜ 2 and one at z ˜ 4 obtained with an experimental adaptive optics (AO) instrument at the European Southern Observatory Very Large Telescope. The observations are part of a program aimed at demonstrating the capabilities of multi-conjugated adaptive optics imaging combined with the use of natural guide stars for high spatial resolution studies on large telescopes. The observations were mostly obtained under poor seeing conditions, except in two cases. In spite of these nonoptimal conditions, the resulting images of point sources have cores of FWHM ˜ 0.2 arcsec. We are able to characterize the host galaxy properties for two sources and set stringent upper limits to the galaxy luminosity for the others. We also report on the expected capabilities for investigating the host galaxies of distant quasars with AO systems coupled with future Extremely Large Telescopes. Detailed simulations show that it will be possible to characterize compact (2-3 kpc) quasar host galaxies for quasi-stellar objects at z = 2 with nucleus K-magnitude spanning from 15 to 20 (corresponding to absolute magnitude -31 to -26) and host galaxies that are 4 mag fainter than their nuclei.

  2. A short study on imaging new towers within the city. Students projects

    Directory of Open Access Journals (Sweden)

    Ştefan Mihăilescu

    2014-06-01

    Full Text Available The present article discusses student project proposals on the theme of new towers in the city, from a teaching point of view in architecture. The debate regarding high office buildings is released from its financial constraints mainly by a theoretical process focused on a conceptual approach to the urban integration of the design, in order to better address the relation between the new object and the city. Tutoring a complex architecture project involves lectures and interdisciplinary debates on the themes of constituted urban landscape and morphology, culture, identity, history, memory, place and people – all of these being important for the project's inception. Sustainable urban management and increased density can be strong arguments motivating the analysis of the city's tendencies, evolution, nature and structure. These are exercises which synthesize a wide range of knowledge from different domains, the reading of the dedicated site, and the best answer to a specific brief within a very complex context, with a future sustainable approach as the suitable attitude toward the city and its built environment, as well as the necessary skills and methods to stimulate creativity and research by design.

  3. Synchrotron microCT imaging of soft tissue in juvenile zebrafish reveals retinotectal projections

    Science.gov (United States)

    Xin, Xuying; Clark, Darin; Ang, Khai Chung; van Rossum, Damian B.; Copper, Jean; Xiao, Xianghui; La Riviere, Patrick J.; Cheng, Keith C.

    2017-02-01

    Biomedical research and clinical diagnosis would benefit greatly from full volume determinations of anatomical phenotype. Comprehensive tools for morphological phenotyping are central for the emerging field of phenomics, which requires high-throughput, systematic, accurate, and reproducible data collection from organisms affected by genetic, disease, or environmental variables. Theoretically, complete anatomical phenotyping requires the assessment of every cell type in the whole organism, but this ideal is presently untenable due to the lack of an unbiased 3D imaging method that allows histopathological assessment of any cell type despite optical opacity. Histopathology, the current clinical standard for diagnostic phenotyping, involves the microscopic study of tissue sections to assess qualitative aspects of tissue architecture, disease mechanisms, and physiological state. However, quantitative features of tissue architecture such as cellular composition and cell counting in tissue volumes can only be approximated due to characteristics of tissue sectioning, including incomplete sampling and the constraints of 2D imaging of 5 micron thick tissue slabs. We have used a small, vertebrate organism, the zebrafish, to test the potential of microCT for systematic macroscopic and microscopic morphological phenotyping. While cell resolution is routinely achieved using methods such as light sheet fluorescence microscopy and optical tomography, these methods do not provide the pancellular perspective characteristic of histology, and are constrained by the limited penetration of visible light through pigmented and opaque specimens, as characterizes zebrafish juveniles. Here, we provide an example of neuroanatomy that can be studied by microCT of stained soft tissue at 1.43 micron isotropic voxel resolution. 
We conclude that synchrotron microCT is a form of 3D imaging that may potentially be adopted towards more reproducible, large-scale, morphological phenotyping of optically

  4. Images of gravitational and magnetic phenomena derived from two-dimensional back-projection Doppler tomography of interacting binary stars

    International Nuclear Information System (INIS)

    Richards, Mercedes T.; Cocking, Alexander S.; Fisher, John G.; Conover, Marshall J.

    2014-01-01

    We have used two-dimensional back-projection Doppler tomography as a tool to examine the influence of gravitational and magnetic phenomena in interacting binaries that undergo mass transfer from a magnetically active star onto a non-magnetic main-sequence star. This multitiered study of over 1300 time-resolved spectra of 13 Algol binaries involved calculations of the predicted dynamical behavior of the gravitational flow and the dynamics at the impact site, analysis of the velocity images constructed from tomography, and the influence on the tomograms of orbital inclination, systemic velocity, orbital coverage, and shadowing. The Hα tomograms revealed eight sources: chromospheric emission, a gas stream along the gravitational trajectory, a star-stream impact region, a bulge of absorption or emission around the mass-gaining star, a Keplerian accretion disk, an absorption zone associated with hotter gas, a disk-stream impact region, and a hot spot where the stream strikes the edge of a disk. We described several methods used to extract the physical properties of the emission sources directly from the velocity images, including S-wave analysis, the creation of simulated velocity tomograms from hydrodynamic simulations, and the use of synthetic spectra with tomography to sequentially extract the separate sources of emission from the velocity image. In summary, the tomography images have revealed results that cannot be explained solely by gravitational effects: chromospheric emission moving with the mass-losing star, a gas stream deflected from the gravitational trajectory, and alternating behavior between stream state and disk state. Our results demonstrate that magnetic effects cannot be ignored in these interacting binaries.

  5. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?

    International Nuclear Information System (INIS)

    Pan, Xiaochuan; Sidky, Emil Y; Vannier, Michael

    2009-01-01

    Despite major advances in x-ray sources, detector arrays, gantry mechanical design and especially computer performance, one component of computed tomography (CT) scanners has remained virtually constant for the past 25 years—the reconstruction algorithm. Fundamental advances have been made in the solution of inverse problems, especially tomographic reconstruction, but these works have not been translated into clinical and related practice. The reasons are not obvious and seldom discussed. This review seeks to examine the reasons for this discrepancy and provides recommendations on how it can be resolved. We take the example of the field of compressive sensing (CS), summarizing this new area of research from the eyes of practical medical physicists and explaining the disconnection between theoretical and application-oriented research. Using a few issues specific to CT, which engineers have addressed in very specific ways, we try to distill the mathematical problem underlying each of these issues with the hope of demonstrating that there are interesting mathematical problems of general importance that can result from in-depth analysis of specific issues. We then sketch some unconventional CT-imaging designs that have the potential to impact on CT applications, if the link between applied mathematicians and engineers/physicists were stronger. Finally, we close with some observations on how the link could be strengthened. There is, we believe, an important opportunity to rapidly improve the performance of CT and related tomographic imaging techniques by addressing these issues. (topical review)
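
    The filtered back-projection algorithm this review centers on fits in a few lines. Below is a toy parallel-beam sketch with an analytic disk phantom; the grid sizes, angle count, and the plain Ram-Lak filter are illustrative choices, not drawn from the review:

```python
import numpy as np

def fbp_reconstruct(sinogram, thetas, grid):
    """Toy parallel-beam filtered back-projection (Ram-Lak filter, no window).
    sinogram: (n_angles, n_det); thetas in radians; grid: detector coordinates."""
    n_det = sinogram.shape[1]
    freqs = np.fft.fftfreq(n_det, d=grid[1] - grid[0])
    # Ramp-filter each projection in the Fourier domain.
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
    xx, yy = np.meshgrid(grid, grid)
    recon = np.zeros_like(xx)
    for theta, fproj in zip(thetas, filtered):
        s = xx * np.cos(theta) + yy * np.sin(theta)  # detector coordinate of each pixel
        recon += np.interp(s, grid, fproj)           # linear-interpolated back-projection
    return recon * np.pi / len(thetas)

# Analytic sinogram of a centered disk (radius 0.5, unit density): every
# parallel projection is 2*sqrt(r^2 - s^2), independent of angle.
grid = np.linspace(-1, 1, 129)
thetas = np.linspace(0, np.pi, 90, endpoint=False)
disk_proj = 2 * np.sqrt(np.clip(0.25 - grid**2, 0, None))
sino = np.tile(disk_proj, (len(thetas), 1))
recon = fbp_reconstruct(sino, thetas, grid)  # center value approximates the true density 1
```

    The iterative and CS-based methods the review contrasts with FBP replace this one-shot filter-and-smear with an optimization loop, at correspondingly higher compute cost.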

  6. Astro-imaging projects for amateur astronomers a maker’s guide

    CERN Document Server

    Chung, Jim

    2015-01-01

    This is the must-have guide for all amateur astronomers who double as makers, doers, tinkerers, problem-solvers, and inventors. In a world where an amateur astronomy habit can easily run into the many thousands of dollars, it is still possible for practitioners to get high-quality results and equipment on a budget by utilizing DIY techniques. Surprisingly, it's not that hard to modify existing equipment to get new and improved usability from older or outdated technology, creating an end result that can outshine the pricey higher-end tools. All it takes is some elbow grease, a creative and open mind and the help of Chung's hard-won knowledge on building and modifying telescopes and cameras. With this book, it is possible for readers to improve their craft, making their equipment more user friendly. The tools are at hand, and the advice on how to do it is here. Readers will discover a comprehensive presentation of astronomical projects that any amateur on any budget can replicate – projects that utilize lead...

  7. The Exploration, Discovery, Recovery, and Preservation of Endangered Electronic Scientific Records, the Lunar Orbiter Image Recovery Project

    Science.gov (United States)

    Wingo, D. R.; Harper, M.

    2017-12-01

    In 1966 and 1967 NASA sent five photo reconnaissance satellites to the Moon to scout out sites for the first Apollo landings. This was the first mission in human history to extensively map the Moon to one meter resolution. The Lunar Orbiter spacecraft obtained photographs via 70 millimeter film in high resolution (one meter) and medium resolution (7-8 meter). Each mission took approximately 200 medium and high resolution photographs. These were processed in an on board film laboratory and then scanned via a 6.5 micron light beam. These images were then transmitted to the Earth as analog waveforms double modulated as a vestigial sideband (VSB) and Frequency Modulation With Feedback (FMFB). The spacecraft transmissions were received at NASA's Deep Space Network at Goldstone (DSS-12), Madrid (DSS-61) and Woomera (DSS-41). The signals received were shifted to a 10 MHz intermediate frequency spectrum which was then written to 2" analog instrumentation tape drives (Ampex FR-900's). In parallel, the signals were demodulated and displayed on a kinescope, which was photographed using a 35mm camera; the 35mm film was then rephotographed, processed, and printed for initial analysis by the landing site selection team. The magnetic tape based analog signals preserved the higher dynamic range of the spacecraft 70mm film, and this was then digitized and fed to a Univac 1170 computer for analysis of rock height, slope angles, and geologic context. After the Apollo missions these tapes were largely forgotten. In 2007, retired NASA archivist Nancy Evans, who had saved the last surviving Ampex FR-900's, donated the drives to the Lunar Orbiter Image Recovery Project. The project obtained the 1474 hours of original tapes from NASA JPL and refurbished the drives at NASA Ames. Additionally, the demodulator system was recreated from archived documentation using modern techniques. The project digitized the 1474 tapes, processed the 20 terabytes of raw data. 
The

  8. Adaptive statistical iterative reconstruction versus filtered back projection in the same patient: 64 channel liver CT image quality and patient radiation dose

    International Nuclear Information System (INIS)

    Mitsumori, Lee M.; Shuman, William P.; Busey, Janet M.; Kolokythas, Orpheus; Koprowicz, Kent M.

    2012-01-01

    To compare routine dose liver CT reconstructed with filtered back projection (FBP) versus low dose images reconstructed with FBP and adaptive statistical iterative reconstruction (ASIR). In this retrospective study, patients had a routine dose protocol reconstructed with FBP, and again within 17 months (median 6.1 months), had a low dose protocol reconstructed twice, with FBP and ASIR. These reconstructions were compared for noise, image quality, and radiation dose. Nineteen patients were included (12 male, mean age 58). Noise was significantly lower in low dose images reconstructed with ASIR compared to routine dose images reconstructed with FBP (liver: p < 0.05, aorta: p < 0.001). Low dose FBP images were scored significantly lower for subjective image quality than low dose ASIR (2.1 ± 0.5 vs. 3.2 ± 0.8, p < 0.001). There was no difference in subjective image quality scores between routine dose FBP images and low dose ASIR images (3.6 ± 0.5 vs. 3.2 ± 0.8, NS). Radiation dose was 41% less for the low dose protocol (4.4 ± 2.4 mSv versus 7.5 ± 5.5 mSv, p < 0.05). Our initial results suggest low dose CT images reconstructed with ASIR may have lower measured noise, similar image quality, yet significantly less radiation dose compared with higher dose images reconstructed with FBP. (orig.)

  9. Adaptive statistical iterative reconstruction versus filtered back projection in the same patient: 64 channel liver CT image quality and patient radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    Mitsumori, Lee M.; Shuman, William P.; Busey, Janet M.; Kolokythas, Orpheus; Koprowicz, Kent M. [University of Washington School of Medicine, Department of Radiology, Seattle, WA (United States)

    2012-01-15

    To compare routine dose liver CT reconstructed with filtered back projection (FBP) versus low dose images reconstructed with FBP and adaptive statistical iterative reconstruction (ASIR). In this retrospective study, patients had a routine dose protocol reconstructed with FBP, and again within 17 months (median 6.1 months), had a low dose protocol reconstructed twice, with FBP and ASIR. These reconstructions were compared for noise, image quality, and radiation dose. Nineteen patients were included (12 male, mean age 58). Noise was significantly lower in low dose images reconstructed with ASIR compared to routine dose images reconstructed with FBP (liver: p < 0.05, aorta: p < 0.001). Low dose FBP images were scored significantly lower for subjective image quality than low dose ASIR (2.1 ± 0.5 vs. 3.2 ± 0.8, p < 0.001). There was no difference in subjective image quality scores between routine dose FBP images and low dose ASIR images (3.6 ± 0.5 vs. 3.2 ± 0.8, NS). Radiation dose was 41% less for the low dose protocol (4.4 ± 2.4 mSv versus 7.5 ± 5.5 mSv, p < 0.05). Our initial results suggest low dose CT images reconstructed with ASIR may have lower measured noise, similar image quality, yet significantly less radiation dose compared with higher dose images reconstructed with FBP. (orig.)

  10. National data analysis of general radiography projection method in medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Su; Seo, Deok Nam; Choi, In Seok [Dept. of Bio-Convergence Engineering, Korea University Graduate School, Seoul (Korea, Republic of); and others

    2014-09-15

    According to the database of medical institutions of the Health Insurance Review and Assessment Service in 2013, 1,118 hospitals and clinics in Korea have a department of radiology, equipped with CT, fluoroscopic and general radiographic equipment. Of these, general radiographic equipment is the most commonly used in the radiology department. Most general radiographic equipment is now transitioning from film-screen radiography systems to digital radiography systems; nevertheless, many radiography departments still use film-screen systems. Therefore, in this study, we surveyed the current status of general radiography techniques used in hospitals and researched general radiographic techniques in domestic medical institutions. We analyzed 26 radiography projection methods, including chest, skull, spine and pelvis, which are commonly used in the radiography department.

  11. Three-dimensional sparse electromagnetic imaging accelerated by projected steepest descent

    KAUST Repository

    Desmal, Abdulla

    2016-11-02

    An efficient and accurate scheme for solving the nonlinear electromagnetic inverse scattering problem on three-dimensional sparse investigation domains is proposed. The minimization problem is constructed in such a way that the data misfit between measurements and scattered fields (which are expressed as a nonlinear function of the contrast) is constrained by the contrast's first norm. The resulting minimization problem is solved using nonlinear Landweber iterations accelerated using a steepest descent algorithm. A projection operator is applied at every iteration to enforce the sparsity constraint by thresholding the result of that iteration. The steepest descent algorithm ensures an accelerated and convergent solution by utilizing larger iteration steps selected based on a necessary B-condition.
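
    The iteration the abstract describes — a Landweber step followed by a sparsity-enforcing projection — can be illustrated on a linear toy problem. The paper's forward operator is the nonlinear scattering map; the matrix, problem sizes, and simple step-size rule below are illustrative assumptions:

```python
import numpy as np

def sparse_landweber(A, y, sparsity, iters=300):
    """Landweber iterations with a hard-thresholding projection after each step.
    Linear toy analogue of the sparsity-projected scheme in the abstract."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + mu * A.T @ (y - A @ x)        # Landweber (gradient) step
        small = np.argsort(np.abs(x))[:-sparsity]
        x[small] = 0.0                        # projection onto s-sparse vectors
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120)) / np.sqrt(60)   # underdetermined measurement matrix
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]             # 3-sparse "contrast"
y = A @ x_true
x_hat = sparse_landweber(A, y, sparsity=3)
```

    With enough measurements relative to the sparsity level, the thresholded iteration recovers both the support and the values; the accelerated steepest-descent step selection in the paper serves to cut the iteration count.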

  12. Analysis of Branding Strategy through Instagram with Storytelling in Creating Brand Image on Proud Project

    Directory of Open Access Journals (Sweden)

    Handy Martinus

    2017-10-01

    Full Text Available The objectives of the article were to study the branding strategy of a new-age media company through social media with storytelling, how it could be utilized in building a brand image, and what the special characteristics of storytelling are in a social media environment, especially Instagram. The study provides an overview of the factors in online content updates on social media that elevate interaction and maintain the relationship between the company and its audience, and specifies how information is made desirable to the customer. A qualitative descriptive content analysis was conducted to investigate how a new-age media company, whose products are intangible, utilizes Instagram as its platform and mixes storytelling into its delivery. Data were obtained by conducting in-depth interviews with a company representative, a public relations practitioner, and a follower of the company's Instagram account, which were then analyzed through data reduction. The research suggests that storytelling combined with social media features potentially strengthens all dimensions of brand equity with brand image as the focus, primarily due to its engaging content, its ability to enhance the formation of an emotional connection, and its capabilities in improving recall and recognition. Branding activities conducted by Proud through Instagram can be said to be effective, taking into account the six main factors in providing updates, namely vividness, interactivity, informational content, entertaining content, and the position and valence of comments. Storytelling plays a role in communicating the company's brand and values. In addition, storytelling is also a branding tool that uniquely distinguishes a company from its competitors.

  13. Helium-3 MR q-space imaging with radial acquisition and iterative highly constrained back-projection.

    Science.gov (United States)

    O'Halloran, Rafael L; Holmes, James H; Wu, Yu-Chien; Alexander, Andrew; Fain, Sean B

    2010-01-01

    An undersampled diffusion-weighted stack-of-stars acquisition is combined with iterative highly constrained back-projection to perform hyperpolarized helium-3 MR q-space imaging with combined regional correction of radiofrequency- and T1-related signal loss in a single breath-held scan. The technique is tested in computer simulations and phantom experiments and demonstrated in a healthy human volunteer with whole-lung coverage in a 13-sec breath-hold. Measures of lung microstructure at three different lung volumes are evaluated using inhaled gas volumes of 500 mL, 1000 mL, and 1500 mL to demonstrate feasibility. Phantom results demonstrate that the proposed technique is in agreement with theoretical values, as well as with a fully sampled two-dimensional Cartesian acquisition. Results from the volunteer study demonstrate that the root mean squared diffusion distance increased significantly from the 500-mL volume to the 1000-mL volume. This technique represents the first demonstration of a spatially resolved hyperpolarized helium-3 q-space imaging technique and shows promise for microstructural evaluation of lung disease in three dimensions. Copyright (c) 2009 Wiley-Liss, Inc.
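
    The q-space analysis behind the reported root mean squared diffusion distances can be illustrated in a dimensionless toy: the q-space signal is the Fourier transform of the displacement propagator, and the rms displacement is the square root of its second moment. All values below are arbitrary illustrations, not from the study:

```python
import numpy as np

# For free (Gaussian) diffusion with rms displacement sigma, the q-space
# signal is E(q) = exp(-2 * pi^2 * q^2 * sigma^2).
sigma = 1.5                          # "true" rms displacement (arbitrary units)
n, dq = 256, 2.0 / 255
q = (np.arange(n) - n // 2) * dq     # symmetric q grid
signal = np.exp(-2 * np.pi**2 * q**2 * sigma**2)

# Displacement propagator via FFT (magnitude is invariant to index ordering):
propagator = np.abs(np.fft.fft(signal))
x = np.fft.fftfreq(n, d=dq)          # displacement axis conjugate to q

# rms displacement recovered from the second moment of the propagator:
rms = np.sqrt(np.sum(propagator * x**2) / np.sum(propagator))
```

    The recovered `rms` matches `sigma` closely; in restricted lung airspaces the propagator is non-Gaussian, which is exactly what makes the measured rms distance sensitive to microstructure and to inflation level.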

  14. Three-dimensional imaging analysis of Yersinia ruckeri infected rainbow trout (Oncorhynchus mykiss) gills by optical projection tomography

    DEFF Research Database (Denmark)

    Otani, Maki; Raida, Martin Kristian

    Optical projection tomography (OPT) is a new tool for three-dimensional (3D) imaging of small tissues or embryos, based on multi-angle recording of internal fluorescent signals using intact whole mount tissue or fish. To understand the route of infection, gills of Y. ruckeri infected rainbow trout...... were labeled with fluorescent antibody and visualized in 3D by the OPT scanner. Rainbow trout were infected with Y. ruckeri O1 biotype 1 (1 x 109 cells/ml) for 1 hour at 18 °C, and then transferred to clean water. Three fish were sampled at 12 different time points and fixed in 4% PFA. The gills were...... incubated whole with rabbit anti-Y. ruckeri polyclonal antibody and Alexa Fluor®594 conjugated goat anti-rabbit IgG. After embedding in 1% low melting point agarose, specimens were dehydrated in 100% methanol and cleared in BABB (benzyl alcohol: benzyl benzoate) for OPT scanning. 3D imaging results showed...

  15. Electric field conjugation for ground-based high-contrast imaging: robustness study and tests with the Project 1640 coronagraph

    Science.gov (United States)

    Matthews, Christopher T.; Crepp, Justin R.; Vasisht, Gautam; Cady, Eric

    2017-10-01

    The electric field conjugation (EFC) algorithm has shown promise for removing scattered starlight from high-contrast imaging measurements, both in numerical simulations and laboratory experiments. To prepare for the deployment of EFC using ground-based telescopes, we investigate the response of EFC to unaccounted-for deviations from an ideal optical model. We explore the linear nature of the algorithm by assessing its response to a range of inaccuracies in the optical model generally present in real systems. We find that the algorithm is particularly sensitive to unresponsive deformable mirror (DM) actuators, misalignment of the Lyot stop, and misalignment of the focal plane mask. Vibrations and DM registration appear to be less of a concern compared to values expected at the telescope. We quantify how accurately one must model these core coronagraph components to ensure successful EFC corrections. We conclude that while the condition of the DM can limit contrast, EFC may still be used to improve the sensitivity of high-contrast imaging observations. Our results have informed the development of a full EFC implementation using the Project 1640 coronagraph at Palomar Observatory. While focused on a specific instrument, our results are applicable to the many coronagraphs that may be interested in employing EFC.

  16. A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging

    Science.gov (United States)

    Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.

    2018-02-01

    Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability positron production and high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count-rates with high random fractions corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms, NEG-ML, and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.
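
    The core idea — move the nonnegativity constraint from voxels x to projections z = Ax and split the problem with ADMM — can be sketched on a toy system. Here a least-squares data term stands in for the paper's Poisson log-likelihood, and the tiny system matrix and measurements (including one negative "randoms-corrected" value) are fabricated for illustration:

```python
import numpy as np

def admm_nonneg_projections(A, y, rho=1.0, iters=500):
    """ADMM for: minimize 0.5*||A x - y||^2  subject to  A x >= 0.
    Voxels x may go negative; the projections z = A x may not.
    (Least-squares stand-in for the Poisson model in the abstract.)"""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
    A_pinv = np.linalg.pinv(A)                  # toy problem: direct pseudoinverse
    for _ in range(iters):
        x = A_pinv @ (z - u)                    # x-step: unconstrained least squares
        Ax = A @ x
        # z-step: prox of 0.5*||z - y||^2, then projection onto z >= 0
        z = np.maximum(0.0, (y + rho * (Ax + u)) / (1.0 + rho))
        u = u + Ax - z                          # scaled dual update
    return x, z

A = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 1.]])
y = np.array([1.0, -0.5, 2.0, 2.0])  # second "measurement" is negative
x, z = admm_nonneg_projections(A, y)
```

    At convergence the fitted projections z satisfy z >= 0 and agree with A x, so the negative measurement is explained without clipping the image-domain estimate, which is the bias-reduction mechanism the paper exploits.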

  17. IMAGE Project: Results of Laboratory Tests on Tracers for Supercritical Conditions.

    Science.gov (United States)

    Brandvoll, Øyvind; Opsahl Viig, Sissel; Nardini, Isabella; Muller, Jiri

    2016-04-01

    The use of tracers is a well-established technique for monitoring the dynamic behaviour of water and gas through a reservoir. In geothermal reservoirs special challenges are encountered due to high temperatures and pressures. In this work, tracer candidates for monitoring water at supercritical conditions (temperature > 374°C, pressure ca 218 bar) are tested in laboratory experiments. Testing of tracers at supercritical water conditions requires experimental set-ups which tolerate harsh conditions with respect to high temperature and pressure. In addition, stringent HES (health, environment and safety) factors have to be taken into consideration when designing and performing the experiments. The setup constructed in this project consists of a pressure vessel, a high pressure pump, instrumentation for pressure and temperature control and instrumentation required for accurate sampling of tracers. In order to achieve accurate results, special attention has been paid to the development of the tracer sampling technique. Perfluorinated cyclic hydrocarbons (PFCs) have been selected as tracer candidates. This group of compounds is today commonly used as gas tracers in oil reservoirs. According to the literature they are stable at temperatures up to 400°C. To start with, five PFCs have been tested for thermal stability in static experiments at 375°C and 108 bar in the experimental setup described above. The tracer candidates will be further tested for several months at the relevant conditions. Preliminary results indicate that some of the PFC compounds show stability after three months. However, in order to arrive at conclusive results, the experiments have to be repeated over a longer period, paying special attention to more accurate sampling procedures.

  18. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language.

    Science.gov (United States)

    de Jong, Wibe A; Walker, Andrew M; Hanwell, Marcus D

    2013-05-24

    Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple "Google-style" searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature.
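
    To make the CML round-trip concrete, here is a minimal sketch of writing and re-reading a CML-style molecule with Python's standard library. The geometry, ids, and attribute layout are illustrative only; they are not NWChem's or FoX's actual output:

```python
import xml.etree.ElementTree as ET

CML_NS = "http://www.xml-cml.org/schema"
ET.register_namespace("cml", CML_NS)

def water_as_cml():
    """Serialize a toy water molecule as namespaced CML-style XML."""
    mol = ET.Element(f"{{{CML_NS}}}molecule", id="water")
    atoms = ET.SubElement(mol, f"{{{CML_NS}}}atomArray")
    # Hypothetical Cartesian coordinates in angstroms.
    for aid, elem, x, y, z in [("a1", "O", 0.000, 0.000, 0.119),
                               ("a2", "H", 0.000, 0.763, -0.477),
                               ("a3", "H", 0.000, -0.763, -0.477)]:
        ET.SubElement(atoms, f"{{{CML_NS}}}atom", id=aid, elementType=elem,
                      x3=str(x), y3=str(y), z3=str(z))
    return ET.tostring(mol, encoding="unicode")

xml_text = water_as_cml()
# Round-trip: a consumer (Avogadro, in the paper's pipeline) parses the same tree back.
root = ET.fromstring(xml_text)
elements = [a.get("elementType") for a in root.iter(f"{{{CML_NS}}}atom")]
```

    The value of the shared namespace and dictionary entries the paper develops is exactly this: producer and consumer agree on tag and attribute semantics, so the parse step needs no format-specific glue code.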

  19. TME2/342: The Role of the EXtensible Markup Language (XML) for Future Healthcare Application Development

    Science.gov (United States)

    Noelle, G; Dudeck, J

    1999-01-01

    Two years after the World Wide Web Consortium (W3C) published the first specification of the eXtensible Markup Language (XML), concrete tools and applications for working with XML-based data now exist. In particular, new-generation Web browsers offer great opportunities to develop new kinds of medical, web-based applications. Several data-exchange formats have become established in medicine in recent years: HL-7, DICOM, EDIFACT and, in the case of Germany, xDT. Whereas communication and information exchange become increasingly important, the development of the appropriate and necessary interfaces causes problems and raises costs and effort. It has also been recognised that it is difficult to define a standardised interchange format for one of the major future developments in medical telematics: the electronic patient record (EPR) and its availability on the Internet. Whereas XML, especially in an industrial environment, is celebrated as a generic standard and a solution for all problems concerning e-commerce, only a few applications have been developed in a medical context. Nevertheless, the medical environment is an appropriate area for building XML applications: as information and communication management becomes increasingly important in medical businesses, the role of the Internet is changing quickly from an information to a communication medium. The first XML-based applications in healthcare show the advantages of a future engagement of the healthcare industry in XML: such applications are open, easy to extend and cost-effective. Additionally, XML is much more than a simple new data interchange format: many proposals for data query (XQL), data presentation (XSL) and other extensions have been submitted to the W3C and partly realised in medical applications.

  20. Cone beam CT imaging with limited angle of projections and prior knowledge for volumetric verification of non-coplanar beam radiation therapy: a proof of concept study

    Science.gov (United States)

    Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang

    2013-11-01

    Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. Planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. Compared with the PICCS algorithm alone, the integration of rigid
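
    The total-variation (TV) minimization step at the heart of PICCS-type reconstruction can be illustrated in isolation with a toy gradient-descent TV denoiser. This is a minimal sketch of the general technique, not the authors' implementation; the step size, fidelity weight, and smoothing constant below are arbitrary choices.

```python
import numpy as np

def total_variation(x, eps=1e-8):
    # smoothed isotropic total variation of a 2-D image
    gx, gy = np.gradient(x)
    return float(np.sum(np.sqrt(gx**2 + gy**2 + eps)))

def tv_denoise(y, lam=0.5, step=0.1, n_iter=100, eps=1e-6):
    # gradient descent on  lam/2 * ||x - y||^2 + TV_eps(x)
    x = y.copy()
    for _ in range(n_iter):
        gx, gy = np.gradient(x)
        norm = np.sqrt(gx**2 + gy**2 + eps)
        # divergence of the normalized gradient field (negative of the TV gradient)
        div = np.gradient(gx / norm, axis=0) + np.gradient(gy / norm, axis=1)
        x += step * (div - lam * (x - y))
    return x
```

In a PICCS-style scheme this smoothing term would be combined with data-consistency updates from the measured projections and, as in this paper, alternated with rigid registration against the prior image.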

  1. Classification of projection images of proteins with structural polymorphism by manifold: A simulation study for x-ray free-electron laser diffraction imaging

    Science.gov (United States)

    Yoshidome, Takashi; Oroguchi, Tomotaka; Nakasako, Masayoshi; Ikeguchi, Mitsunori

    2015-09-01

    Coherent x-ray diffraction imaging (CXDI) enables us to visualize noncrystalline sample particles with micrometer to submicrometer dimensions. Using x-ray free-electron laser (XFEL) sources, two-dimensional diffraction patterns are collected from fresh samples supplied to the irradiation area in the "diffraction-before-destruction" scheme. A recent significant increase in the intensity of the XFEL pulse is promising and will allow us to visualize the three-dimensional structures of proteins using XFEL-CXDI in the future. For the protocol proposed for molecular structure determination using future XFEL-CXDI [T. Oroguchi and M. Nakasako, Phys. Rev. E 87, 022712 (2013), 10.1103/PhysRevE.87.022712], we require an algorithm that can classify the data in accordance with the structural polymorphism of proteins arising from their conformational dynamics. However, most previously proposed algorithms require the number of conformational classes as input, and the results are then biased by that choice. To improve on this point, here we examine whether a method based on the manifold concept can classify simulated XFEL-CXDI data with respect to the structural polymorphism of a protein that predominantly adopts two states. After random sampling of the conformations of the two states and in-between states from the trajectories of molecular dynamics simulations, a diffraction pattern is calculated from each conformation. Classification was performed using our custom-made program suite named enma, in which the diffusion map (DM) method, developed based on the manifold concept, was implemented. We successfully classify most of the projection electron density maps phase-retrieved from diffraction patterns into each of the two states and in-between conformations without prior knowledge of the number of conformational classes. We also examined the classification of the projection electron density maps of each of the three states with respect to the Euler angle. The present results suggest
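
    The diffusion map embedding underlying such manifold-based classification can be sketched in a few lines. This toy version, with an arbitrary Gaussian kernel width and no attempt at the scale of real diffraction data, illustrates the general DM method only; it is not the enma implementation.

```python
import numpy as np

def diffusion_map(X, sigma=1.0, n_components=2):
    # pairwise Gaussian affinities between data points (rows of X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # row-normalize into a Markov transition matrix and diagonalize
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    keep = order[1:n_components + 1]      # skip the trivial constant mode
    return vecs.real[:, keep] * vals.real[keep]
```

For two well-separated clusters the first diffusion coordinate is approximately piecewise constant over the clusters, so thresholding it recovers the two groups without specifying the number of classes in advance.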

  2. Effects of defect pixel correction algorithms for x-ray detectors on image quality in planar projection and volumetric CT data sets

    International Nuclear Information System (INIS)

    Kuttig, Jan; Steiding, Christian; Hupfer, Martin; Karolczak, Marek; Kolditz, Daniel

    2015-01-01

    In this study we compared various defect pixel correction methods for reducing artifact appearance within projection images used for computed tomography (CT) reconstructions. Defect pixel correction algorithms were examined with respect to their artifact behaviour within planar projection images as well as in volumetric CT reconstructions. We investigated four algorithms: nearest neighbour, linear and adaptive linear interpolation, and a frequency-selective spectral-domain approach. To characterise the quality of each algorithm in planar image data, we inserted line defects of varying widths and orientations into images. The structure preservation of each algorithm was analysed by corrupting and correcting the image of a slit phantom pattern and by evaluating its line spread function (LSF). The noise preservation was assessed by interpolating corrupted flat images and estimating the noise power spectrum (NPS) of the interpolated region. For the volumetric investigations, we examined the structure and noise preservation within a structured aluminium foam, a mid-contrast cone-beam phantom and a homogeneous polyurethane (PUR) cylinder. The frequency-selective algorithm showed the best structure and noise preservation for planar data of the correction methods tested. For volumetric data it still showed the best noise preservation, whereas the structure preservation was outperformed by the linear interpolation. The frequency-selective spectral-domain approach in the correction of line defects is recommended for planar image data, but its abilities within high-contrast volumes are restricted. In that case, the application of a simple linear interpolation might be the better choice to correct line defects within projection images used for CT. (paper)
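
    The simplest of the tested schemes, linear interpolation across a line defect, amounts to averaging the neighbouring columns (or rows). The sketch below is a minimal single-column version for illustration, not the paper's implementation.

```python
import numpy as np

def correct_column_defect(img, col):
    # replace a dead detector column by the mean of its two neighbours
    out = np.asarray(img, dtype=float).copy()
    out[:, col] = 0.5 * (out[:, col - 1] + out[:, col + 1])
    return out
```

For image content that varies linearly across the defect (e.g. a ramp), this interpolation restores the column exactly; structure finer than two pixels is lost, which is why spectral-domain methods can do better on planar data.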

  3. A service protocol for post-processing of medical images on the mobile device

    Science.gov (United States)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. It is difficult and time-consuming to transfer medical images with large data sizes from a picture archiving and communication system to a mobile client, since the wireless network is unstable and limited by bandwidth. Besides, limited by computing capability, memory and power endurance, it is hard to provide a satisfactory quality of experience for radiologists handling complex post-processing of medical images on the mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. In order to enable mobile devices with different platforms to access post-processing of medical images, the Extensible Markup Language is used to describe this protocol, which contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel value retrieval) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol. This instance allows the mobile device to access post-processing of medical image services on the render server via a client application or a web page.
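
    A request message in such an XML-described protocol might look like the following sketch. The element and attribute names here are hypothetical illustrations, not the ones defined in the paper.

```python
import xml.etree.ElementTree as ET

# build a hypothetical 3D post-processing request for the render server
req = ET.Element("RenderRequest", session="abc123")
ET.SubElement(req, "Series", studyUID="1.2.840.1", seriesUID="1.2.840.1.1")
post = ET.SubElement(req, "PostProcessing", type="3D")
ET.SubElement(post, "Operation", name="MIP")   # maximum intensity projection
ET.SubElement(post, "Viewport", width="512", height="512")
message = ET.tostring(req, encoding="unicode")
print(message)
```

Because the message is plain XML, any mobile platform with an XML parser can build and consume it, which is the portability argument the protocol relies on.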

  4. Phase accuracy evaluation for phase-shifting fringe projection profilometry based on uniform-phase coded image

    Science.gov (United States)

    Zhang, Chunwei; Zhao, Hong; Zhu, Qian; Zhou, Changquan; Qiao, Jiacheng; Zhang, Lu

    2018-06-01

    Phase-shifting fringe projection profilometry (PSFPP) is a three-dimensional (3D) measurement technique widely adopted in industry measurement. It recovers the 3D profile of measured objects with the aid of the fringe phase. The phase accuracy is among the dominant factors that determine the 3D measurement accuracy. Evaluation of the phase accuracy helps refine adjustable measurement parameters, contributes to evaluating the 3D measurement accuracy, and facilitates improvement of the measurement accuracy. Although PSFPP has been deeply researched, an effective, easy-to-use phase accuracy evaluation method remains to be explored. In this paper, methods based on the uniform-phase coded image (UCI) are presented to accomplish phase accuracy evaluation for PSFPP. These methods work on the principle that the phase value of a UCI can be manually set to be any value, and once the phase value of a UCI pixel is the same as that of a pixel of a corresponding sinusoidal fringe pattern, their phase accuracy values are approximate. The proposed methods provide feasible approaches to evaluating the phase accuracy for PSFPP. Furthermore, they can be used to experimentally research the property of the random and gamma phase errors in PSFPP without the aid of a mathematical model to express random phase error or a large-step phase-shifting algorithm. In this paper, some novel and interesting phenomena are experimentally uncovered with the aid of the proposed methods.
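
    For reference, the N-step phase-shifting calculation that underlies PSFPP recovers the wrapped phase from intensity images I_n = A + B·cos(φ + δ_n) with equally spaced shifts δ_n. The sketch below is the textbook least-squares form, not the authors' evaluation code.

```python
import numpy as np

def phase_from_steps(images, deltas):
    # N-step phase-shifting: I_n = A + B*cos(phi + delta_n), with the
    # deltas equally spaced over 2*pi; returns the wrapped phase phi.
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(-num, den)
```

A UCI-based accuracy check compares the phase recovered this way against the known, manually set phase of the coded image.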

  5. Integration of fringe projection and two-dimensional digital image correlation for three-dimensional displacements measurements

    Science.gov (United States)

    Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.

    2016-12-01

    A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied to the analysis of large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques have been employed simultaneously using an RGB camera and a color encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speed rates. The potential of the proposed methodology has been demonstrated through the analysis of large displacements during contact experiments on a soft material block. Displacement results have been successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the application of the methodology to dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large-deformation problems.

  6. A MODIFIED PROJECTIVE TRANSFORMATION SCHEME FOR MOSAICKING MULTI-CAMERA IMAGING SYSTEM EQUIPPED ON A LARGE PAYLOAD FIXED-WING UAS

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2015-03-01

    Full Text Available In recent years, the Unmanned Aerial System (UAS) has been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. It is a higher-mobility and lower-risk platform for human operation, but the low payload and short operation time reduce the image collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is mounted on a large payload UAS, which is designed to collect large ground coverage images in an effective way. The field of view (FOV) is increased to 127 degrees, which is thus suitable for collecting disaster images in mountainous areas. The five acquired images are registered and mosaicked as a larger-format virtual image to reduce the number of images and the post-processing time, and to ease stereo plotting. Instead of traditional image matching and applying the bundle adjustment method to estimate transformation parameters, the IOPs and ROPs of the multiple cameras are calibrated to derive the coefficients of the modified projective transformation (MPT) model for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs due to the different environmental conditions as well as the vibration of the UAS, which causes a misregistration effect in the initial MPT results. Remaining residuals are analysed through tie point matching in the overlapping area of the initial MPT results, in which displacement and scale differences are introduced and corrected to modify the ROPs and IOPs for finer registration results. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. A comparison between separate cameras and mosaic images through rigorous aerial triangulation is conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases. It proves that the designed imaging system and the

  7. Comparison of iterative model, hybrid iterative, and filtered back projection reconstruction techniques in low-dose brain CT: impact of thin-slice imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nakaura, Takeshi; Iyama, Yuji; Kidoh, Masafumi; Yokoyama, Koichi [Amakusa Medical Center, Diagnostic Radiology, Amakusa, Kumamoto (Japan); Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Oda, Seitaro; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Tokuyasu, Shinichi [Philips Electronics, Kumamoto (Japan); Harada, Kazunori [Amakusa Medical Center, Department of Surgery, Kumamoto (Japan)

    2016-03-15

    The purpose of this study was to evaluate the utility of iterative model reconstruction (IMR) in brain CT especially with thin-slice images. This prospective study received institutional review board approval, and prior informed consent to participate was obtained from all patients. We enrolled 34 patients who underwent brain CT and reconstructed axial images with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and IMR with 1 and 5 mm slice thicknesses. The CT number, image noise, contrast, and contrast noise ratio (CNR) between the thalamus and internal capsule, and the rate of increase of image noise in 1 and 5 mm thickness images between the reconstruction methods, were assessed. Two independent radiologists assessed image contrast, image noise, image sharpness, and overall image quality on a 4-point scale. The CNRs in 1 and 5 mm slice thickness were significantly higher with IMR (1.2 ± 0.6 and 2.2 ± 0.8, respectively) than with FBP (0.4 ± 0.3 and 1.0 ± 0.4, respectively) and HIR (0.5 ± 0.3 and 1.2 ± 0.4, respectively) (p < 0.01). The mean rate of increasing noise from 5 to 1 mm thickness images was significantly lower with IMR (1.7 ± 0.3) than with FBP (2.3 ± 0.3) and HIR (2.3 ± 0.4) (p < 0.01). There were no significant differences in qualitative analysis of unfamiliar image texture between the reconstruction techniques. IMR offers significant noise reduction and higher contrast and CNR in brain CT, especially for thin-slice images, when compared to FBP and HIR. (orig.)
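
    The contrast-to-noise ratio used in such comparisons is typically computed from two regions of interest. The sketch below uses one common definition (difference of means over pooled standard deviation), which may differ in detail from the definition used in the study.

```python
import numpy as np

def cnr(roi_a, roi_b):
    # contrast-to-noise ratio between two regions of interest
    roi_a, roi_b = np.asarray(roi_a, float), np.asarray(roi_b, float)
    noise = np.sqrt(0.5 * (roi_a.var(ddof=1) + roi_b.var(ddof=1)))
    return abs(roi_a.mean() - roi_b.mean()) / noise
```

Here roi_a and roi_b would be pixel values sampled from, e.g., the thalamus and the internal capsule; lower reconstruction noise raises the CNR for the same contrast.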

  8. a Modified Projective Transformation Scheme for Mosaicking Multi-Camera Imaging System Equipped on a Large Payload Fixed-Wing Uas

    Science.gov (United States)

    Jhan, J. P.; Li, Y. T.; Rau, J. Y.

    2015-03-01

    In recent years, the Unmanned Aerial System (UAS) has been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. It is a higher-mobility and lower-risk platform for human operation, but the low payload and short operation time reduce the image collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is mounted on a large payload UAS, which is designed to collect large ground coverage images in an effective way. The field of view (FOV) is increased to 127 degrees, which is thus suitable for collecting disaster images in mountainous areas. The five acquired images are registered and mosaicked as a larger-format virtual image to reduce the number of images and the post-processing time, and to ease stereo plotting. Instead of traditional image matching and applying the bundle adjustment method to estimate transformation parameters, the IOPs and ROPs of the multiple cameras are calibrated to derive the coefficients of the modified projective transformation (MPT) model for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs due to the different environmental conditions as well as the vibration of the UAS, which causes a misregistration effect in the initial MPT results. Remaining residuals are analysed through tie point matching in the overlapping area of the initial MPT results, in which displacement and scale differences are introduced and corrected to modify the ROPs and IOPs for finer registration results. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. A comparison between separate cameras and mosaic images through rigorous aerial triangulation is conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases. It proves that the designed imaging system and the proposed scheme
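
    At its core, a projective transformation maps pixel coordinates through a 3×3 homography. The generic operation can be sketched as below; the MPT model itself adds the displacement and scale corrections described in the paper, which are not reproduced here.

```python
import numpy as np

def apply_projective(H, pts):
    # apply a 3x3 projective (homography) matrix to (N, 2) pixel coordinates
    ph = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]                     # perspective division
```

Mosaicking then amounts to warping each oblique image into the nadir image's frame with its calibrated homography and blending the overlaps.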

  9. Lessons in scientific data interoperability: XML and the eMinerals project.

    Science.gov (United States)

    White, T O H; Bruin, R P; Chiang, G-T; Dove, M T; Tyer, R P; Walker, A M

    2009-03-13

    A collaborative environmental eScience project produces a broad range of data, notable as much for its diversity, in source and format, as its quantity. We find that extensible markup language (XML) and associated technologies are invaluable in managing this deluge of data. We describe FoX, a toolkit for allowing Fortran codes to read and write XML, thus allowing existing scientific tools to be easily re-used in an XML-centric workflow.

  10. Hyper Text Mark-up Language and Dublin Core metadata element set usage in websites of Iranian State Universities’ libraries

    Science.gov (United States)

    Zare-Farashbandi, Firoozeh; Ramezan-Shirazi, Mahtab; Ashrafi-Rizi, Hasan; Nouri, Rasool

    2014-01-01

    Introduction: Recent progress in providing innovative solutions for the organization of electronic resources, and research in this area, shows a global trend toward the use of new strategies such as metadata to facilitate the description, placement, organization and retrieval of resources in the web environment. In this context, library metadata standards have a special place; therefore, the purpose of the present study has been a comparative study of the Central Libraries’ Websites of Iran State Universities regarding Hyper Text Mark-up Language (HTML) and Dublin Core metadata element usage in 2011. Materials and Methods: The method of this study is applied-descriptive, and the data collection tool is a checklist created by the researchers. The statistical community includes 98 websites of the Iranian State Universities of the Ministry of Health and Medical Education and the Ministry of Science, Research and Technology, and the method of sampling is the census. Information was collected through observation and direct visits to the websites, and data analysis was performed with Microsoft Excel software, 2011. Results: The results of this study indicate that none of the websites use Dublin Core (DC) metadata and that only a few of them have used overlapping elements between HTML meta tags and Dublin Core (DC) elements. The percentages of overlapping DC elements in the Ministry of Health were 56% for both description and keywords and, in the Ministry of Science, 45% for keywords and 39% for description. HTML meta tags, however, have a moderate presence in both Ministries: the most-used elements were keywords and description (56%) and the least-used elements were date and formatter (0%). Conclusion: It is suggested that the Ministry of Health and the Ministry of Science follow the same path for using the Dublin Core standard on their websites in the future. Because Central Library Websites are an example of scientific web pages, special attention in designing them can help the researchers

  11. Hyper Text Mark-up Language and Dublin Core metadata element set usage in websites of Iranian State Universities' libraries.

    Science.gov (United States)

    Zare-Farashbandi, Firoozeh; Ramezan-Shirazi, Mahtab; Ashrafi-Rizi, Hasan; Nouri, Rasool

    2014-01-01

    Recent progress in providing innovative solutions for the organization of electronic resources, and research in this area, shows a global trend toward the use of new strategies such as metadata to facilitate the description, placement, organization and retrieval of resources in the web environment. In this context, library metadata standards have a special place; therefore, the purpose of the present study has been a comparative study of the Central Libraries' Websites of Iran State Universities regarding Hyper Text Mark-up Language (HTML) and Dublin Core metadata element usage in 2011. The method of this study is applied-descriptive, and the data collection tool is a checklist created by the researchers. The statistical community includes 98 websites of the Iranian State Universities of the Ministry of Health and Medical Education and the Ministry of Science, Research and Technology, and the method of sampling is the census. Information was collected through observation and direct visits to the websites, and data analysis was performed with Microsoft Excel software, 2011. The results of this study indicate that none of the websites use Dublin Core (DC) metadata and that only a few of them have used overlapping elements between HTML meta tags and Dublin Core (DC) elements. The percentages of overlapping DC elements in the Ministry of Health were 56% for both description and keywords and, in the Ministry of Science, 45% for keywords and 39% for description. HTML meta tags, however, have a moderate presence in both Ministries: the most-used elements were keywords and description (56%) and the least-used elements were date and formatter (0%). It is suggested that the Ministry of Health and the Ministry of Science follow the same path for using the Dublin Core standard on their websites in the future. Because Central Library Websites are an example of scientific web pages, special attention in designing them can help the researchers to achieve faster and more accurate information resources
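
    Checking a page for HTML meta tags and Dublin Core ("DC.*") elements, as done in this kind of survey, can be automated with the standard-library HTML parser. The snippet below is an illustrative checker, not the authors' checklist instrument.

```python
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    # collects <meta name="..."> tags, including Dublin Core style "DC.*" names
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d:
                self.meta[d["name"].lower()] = d.get("content", "")

html = '''<head>
<meta name="keywords" content="library, metadata">
<meta name="DC.title" content="Central Library">
</head>'''
p = MetaCollector()
p.feed(html)
print(p.meta)
```

Tallying which names start with "dc." versus plain HTML names (keywords, description, date) across a site list reproduces the kind of percentages reported in the study.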

  12. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-09-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. A third Technique C, based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image, was also tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (Mean_RHD, STD_RHD and CV_RHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and
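
    The Hausdorff distance used here to quantify nodule deformation measures the largest mismatch between two shapes' point sets. A direct brute-force computation can be sketched as follows, assuming point sets small enough for a full pairwise distance matrix.

```python
import numpy as np

def hausdorff(A, B):
    # symmetric Hausdorff distance between point sets A (N, d) and B (M, d):
    # the larger of the two directed worst-case nearest-neighbour distances
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Summarizing this distance regionally (mean, standard deviation, coefficient of variation) gives statistics of the kind fed into the study's regression model.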

  13. Fast backprojection-based reconstruction of spectral-spatial EPR images from projections with the constant sweep of a magnetic field.

    Science.gov (United States)

    Komarov, Denis A; Hirata, Hiroshi

    2017-08-01

    In this paper, we introduce a procedure for the reconstruction of spectral-spatial EPR images using projections acquired with the constant sweep of a magnetic field. The application of a constant field-sweep and a predetermined data sampling rate simplifies the requirements for EPR imaging instrumentation and facilitates the backprojection-based reconstruction of spectral-spatial images. The proposed approach was applied to the reconstruction of a four-dimensional numerical phantom and to actual spectral-spatial EPR measurements. Image reconstruction using projections with a constant field-sweep was three times faster than the conventional approach with the application of a pseudo-angle and a scan range that depends on the applied field gradient. Spectral-spatial EPR imaging with a constant field-sweep for data acquisition only slightly reduces the signal-to-noise ratio or functional resolution of the resultant images and can be applied together with any common backprojection-based reconstruction algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.
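
    The backprojection step shared by all such reconstruction algorithms smears each projection back across the image grid along its acquisition angle. The toy, unfiltered version below (nearest-bin interpolation, arbitrary grid) illustrates the operation only; it is not the spectral-spatial EPR implementation.

```python
import numpy as np

def backproject(sinogram, thetas, size):
    # naive unfiltered backprojection onto a size x size grid: each projection
    # value is smeared back along its ray and the contributions are averaged
    recon = np.zeros((size, size))
    c = (size - 1) / 2
    ys, xs = np.mgrid[0:size, 0:size]
    for proj, th in zip(sinogram, thetas):
        # detector-bin index that each pixel projects onto at angle th
        t = (xs - c) * np.cos(th) + (ys - c) * np.sin(th) + c
        t = np.clip(np.round(t).astype(int), 0, size - 1)
        recon += proj[t]
    return recon / len(thetas)
```

A practical reconstruction applies a ramp (or similar) filter to each projection before this step; the constant field-sweep scheme in the paper changes how the projections are acquired, not this backprojection geometry.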

  14. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    Science.gov (United States)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.

  15. Classification of cryo electron microscopy images, noisy tomographic images recorded with unknown projection directions, by simultaneously estimating reconstructions and application to an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22

    Science.gov (United States)

    Lee, Junghoon; Zheng, Yili; Yin, Zhye; Doerschuk, Peter C.; Johnson, John E.

    2010-08-01

    Cryo electron microscopy is frequently used on biological specimens that show a mixture of different types of object. Because the electron beam rapidly destroys the specimen, the beam current is minimized which leads to noisy images (SNR substantially less than 1) and only one projection image per object (with an unknown projection direction) is collected. For situations where the objects can reasonably be described as coming from a finite set of classes, an approach based on joint maximum likelihood estimation of the reconstruction of each class and then use of the reconstructions to label the class of each image is described and demonstrated on two challenging problems: an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22.

  16. Mark-up bancário, conflito distributivo e utilização da capacidade produtiva: uma macrodinâmica pós-keynesiana

    Directory of Open Access Journals (Sweden)

    Lima Gilberto Tadeu

    2003-01-01

    Full Text Available A post-Keynesian macrodynamic model of capacity utilization, distribution and conflict inflation is developed, in which the supply of credit money is endogenous. The nominal interest rate is determined by applying a mark-up over the base rate set by the monetary authority. Over time, the banking mark-up varies with the rate of profit on physical capital, while the base rate varies with excess demand that cannot be accommodated by capacity utilization. The cases in which demand is and is not sufficient to generate full capacity utilization are analysed.

  17. The German Dunkelfeld project: a pilot study to prevent child sexual abuse and the use of child abusive images.

    Science.gov (United States)

    Beier, Klaus M; Grundmann, Dorit; Kuhle, Laura F; Scherner, Gerold; Konrad, Anna; Amelung, Till

    2015-02-01

    Sexual interest toward prepubescents and pubescents (pedophilia and hebephilia) constitutes a major risk factor for child sexual abuse (CSA) and viewing of child abusive images, i.e., child pornography offenses (CPO). Most child sexual exploitation involving CSA and CPO is undetected and unprosecuted in the "Dunkelfeld" (German: "dark field"). This study assesses a treatment program to enhance behavioral control and reduce associated dynamic risk factors (DRF) in self-motivated pedophiles/hebephiles in the Dunkelfeld. Between 2005 and 2011, 319 undetected help-seeking pedophiles and hebephiles expressed interest in taking part in an anonymous and confidential 1-year treatment program using broad cognitive behavioral methodology in the Prevention Project Dunkelfeld. Therapy was assessed using a nonrandomized waiting list control design (n=53 treated group [TG]; n=22 untreated control group [CG]). Self-reported pre-/posttreatment DRF changes were assessed and compared with the CG. Offending behavior characteristics were also assessed via self-reporting. No pre-/postassessment changes occurred in the control group. Emotional deficits and offense-supportive cognitions decreased in the TG; posttherapy sexual self-regulation increased. Treatment-related changes were distributed unequally across offender groups. None of the offending behavior reported for the TG was identified as such by the legal authorities. However, five of 25 CSA offenders and 29 of 32 CPO offenders reported ongoing behaviors under therapy. Therapy for pedophiles/hebephiles in the Dunkelfeld can alter child sexual offending DRF and reduce related behaviors. Unidentified, unlawful child sexual exploitative behaviors are more prevalent in this population than in officially reported recidivism. Further research into factors predictive of problematic sexual behaviors in the Dunkelfeld is warranted. © 2014 International Society for Sexual Medicine.

  18. Consideration of Shoulder Joint's Image with the Changed Tube Angle of the Shoulder Oblique Projection in Supine Position

    International Nuclear Information System (INIS)

    Seo, Jae Hyun; Choi, Nam Gil

    2008-01-01

    The standard shoulder oblique (Grashey) method is used to view the shoulder joint. It projects an AP view of the glenohumeral joint so that subluxation of the humeral head or joint degeneration can be easily visualized. However, for this view the patient, whether supine, sitting, or erect, must hold the body obliquely. Patients who are ill or postoperative usually find this position very uncomfortable, so assistance from others and considerable effort are needed to obtain a good-quality shoulder joint view. We therefore examined a method that demonstrates the joint by angling the tube in the medio-lateral direction, without raising one side of the patient from the supine position. A total of 15 subjects (9 males and 6 females) with no history of neurological or psychiatric illness were recruited for the examinations. Statistical group analysis was performed with the ANOVA test. Expert evaluation scores were 1.01±0.54 at 25 degrees, 2.50±0.50 at 30 degrees, 2.85±0.36 at 35 degrees, and 2.33±0.47 at 40 degrees, respectively, and the differences were significant (p<0.05, Table 1). The joint space between the humeral head and scapula was well distinguished at 30, 35, and 40 degrees, with almost the same scores; however, image distortion was more severe at 40 degrees than at 30 degrees. Ultimately, views at 30-35 degrees yielded good-quality shoulder oblique images. In conclusion, this method may be very useful for patients who are uncomfortable and for emergency patients. To obtain comparable views, the same tube angle should be used before and after an operation. We therefore expect this new angled method to be efficient.

  19. A general approach to flaw simulation in castings by superimposing projections of 3D models onto real X-ray images

    International Nuclear Information System (INIS)

    Hahn, D.; Mery, D.

    2003-01-01

    In order to evaluate the sensitivity of defect inspection systems, it is convenient to examine simulated data. This gives the possibility to tune the parameters of the inspection method and to test the performance of the system in critical cases. In this paper, a practical method for the simulation of defects in radioscopic images of aluminium castings is presented. The approach simulates only the flaws and not the whole radioscopic image of the object under test. A 3D mesh is used to model a flaw with complex geometry, which is projected and superimposed onto real radioscopic images of a homogeneous object according to the exponential attenuation law for X-rays. The new grey value of a pixel, where the 3D flaw is projected, depends only on four parameters: (a) the grey value of the original X-ray image without flaw; (b) the linear absorption coefficient of the examined material; (c) the maximal thickness observable in the radioscopic image; and (d) the length of the intersection of the 3D flaw with the modelled X-ray beam that is projected into the pixel. A simulation of a complex flaw modelled as a 3D mesh can be performed in any position of the castings by using the algorithm described in this paper. This allows the evaluation of the performance of defect inspection systems in cases where the detection is known to be difficult. In this paper, we show experimental results on real X-ray images of aluminium wheels, in which 3D flaws like blowholes, cracks and inclusions are simulated.
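
    The per-pixel superimposition follows directly from the exponential attenuation law; the sketch below is a minimal illustration assuming the grey value is proportional to transmitted intensity, and it uses a saturation grey level `g_max` as a stand-in for the paper's maximal-observable-thickness calibration:

```python
import math

def flaw_grey_value(g_orig, mu, g_max, d):
    """Superimpose a void-like flaw on one pixel of an X-ray image.
    If grey value is proportional to transmitted intensity
    I = I0 * exp(-mu * x), then removing a thickness d of material along
    the beam multiplies the intensity, and hence the grey value, by
    exp(mu * d). The result is clipped at g_max, the grey value of an
    unattenuated (zero-thickness) ray."""
    return min(g_orig * math.exp(mu * d), g_max)
```

    Here d is the intersection length of the 3D flaw mesh with the X-ray beam reaching that pixel: a void means less material is traversed, so the pixel brightens.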

  20. Comparison of the image qualities of filtered back-projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction for CT venography at 80 kVp

    International Nuclear Information System (INIS)

    Kim, Jin Hyeok; Choo, Ki Seok; Moon, Tae Yong; Lee, Jun Woo; Jeon, Ung Bae; Kim, Tae Un; Hwang, Jae Yeon; Yun, Myeong-Ja; Jeong, Dong Wook; Lim, Soo Jin

    2016-01-01

    To evaluate the subjective and objective quality of computed tomography (CT) venography images at 80 kVp using model-based iterative reconstruction (MBIR) and to compare these with filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR) using the same CT data sets. Forty-four patients (mean age: 56.1 ± 18.1 years) who underwent 80 kVp CT venography (CTV) for the evaluation of deep vein thrombosis (DVT) over a 4-month period were enrolled in this retrospective study. The same raw data were reconstructed using FBP, ASIR, and MBIR. Objective and subjective image analyses were performed at the inferior vena cava (IVC), femoral vein, and popliteal vein. The mean CNR of MBIR was significantly greater than those of FBP and ASIR, and images reconstructed using MBIR had significantly lower objective image noise (p <.001). Subjective image quality and confidence in detecting DVT in the MBIR group were significantly greater than those of FBP and ASIR (p <.005), and MBIR had the lowest score for subjective image noise (p <.001). CTV at 80 kVp with MBIR was superior to FBP and ASIR in both subjective and objective image quality. (orig.)
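
    The objective metrics compared across the three reconstructions can be computed from region-of-interest statistics; a minimal sketch using common definitions (the study's exact ROI placement and metric definitions may differ):

```python
import numpy as np

def snr_cnr(vessel_roi, background_roi):
    """Common objective image-quality metrics for CT venography:
    noise = SD of a homogeneous background ROI,
    SNR   = mean(vessel) / noise,
    CNR   = (mean(vessel) - mean(background)) / noise."""
    noise = float(np.std(background_roi))
    snr = float(np.mean(vessel_roi)) / noise
    cnr = (float(np.mean(vessel_roi)) - float(np.mean(background_roi))) / noise
    return noise, snr, cnr

# Toy ROIs (HU values): a uniform opacified vein and a noisy background.
vessel = np.full(100, 250.0)
background = np.array([40.0, 60.0] * 50)   # mean 50 HU, SD 10 HU
noise, snr, cnr = snr_cnr(vessel, background)
```

    A reconstruction that lowers the background SD (as MBIR did here) raises both SNR and CNR for the same mean enhancement.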

  1. MaRIE 1.0: The Matter-Radiation Interactions in Extremes Project, and the Challenge of Dynamic Mesoscale Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, Cris William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Barber, John L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kober, Edward Martin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lookman, Turab [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sandberg, Richard L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shlachter, Jack S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sheffield, Richard L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    The Matter-Radiation Interactions in Extremes project will build the experimental facility for the time-dependent control of dynamic material performance. An x-ray free electron laser at up to 42-keV fundamental energy and with photon pulses down to sub-nanosecond spacing, MaRIE 1.0 is designed to meet the challenges of time-dependent mesoscale materials science. Those challenges will be outlined, the techniques of coherent diffractive imaging and dynamic polycrystalline diffraction described, and the resulting requirements for a coherent x-ray source defined. The talk concludes with the future role of the MaRIE project and its science.

  2. Evaluation of radiological workstations and web-browser-based image distribution clients for a PACS project in hands-on workshops

    International Nuclear Information System (INIS)

    Boehm, Thomas; Handgraetinger, Oliver; Voellmy, Daniel R.; Marincek, Borut; Wildermuth, Simon; Link, Juergen; Ploner, Ricardo

    2004-01-01

    The methodology and outcome of a hands-on workshop for the evaluation of PACS (picture archiving and communication system) software for a multihospital PACS project are described. The following radiological workstations and web-browser-based image distribution software clients were evaluated as part of a multistep evaluation of PACS vendors in March 2001: Impax DS 3000 V 4.1/Impax Web1000 (Agfa-Gevaert, Mortsel, Belgium); PathSpeed V 8.0/PathSpeed Web (GE Medical Systems, Milwaukee, Wis., USA); ID Report/ID Web (Image Devices, Idstein, Germany); EasyVision DX/EasyWeb (Philips Medical Systems, Eindhoven, Netherlands); and MagicView 1000 VB33a/MagicWeb (Siemens Medical Systems, Erlangen, Germany). A set of anonymized DICOM test data was provided to enable direct image comparison. Radiologists (n=44) evaluated the radiological workstations and nonradiologists (n=53) evaluated the image distribution software clients using different questionnaires. One vendor was not able to import the provided DICOM data set. Another vendor had problems in displaying imported cross-sectional studies in the correct stack order. Three vendors (Agfa-Gevaert, GE, Philips) presented server-client solutions with web access. Two (Siemens, Image Devices) presented stand-alone solutions. The highest scores in the class of radiological workstations were achieved by ID Report from Image Devices (p<0.005). In the class of image distribution clients, the differences were not statistically significant. Questionnaire-based evaluation was shown to be useful for guaranteeing systematic assessment. The workshop was a great success in raising interest in the PACS project in a large group of future clinical users. The methodology used in the present study may be useful for other hospitals evaluating PACS. (orig.)

  3. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    International Nuclear Information System (INIS)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn; Yoon, Jeong Hee; Choi, Jin Woo

    2014-01-01

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and the image quality compared to the filtered back projection (FBP) algorithm and to compare the effectiveness of AIDR 3D on noise reduction according to the body habitus using phantoms with different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using the FBP and three different strengths of the AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, greater increase of SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing image noise and improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.

  4. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn [College of Medicine, Seoul National University, Seoul (Korea, Republic of); Yoon, Jeong Hee; Choi, Jin Woo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and the image quality compared to the filtered back projection (FBP) algorithm and to compare the effectiveness of AIDR 3D on noise reduction according to the body habitus using phantoms with different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using the FBP and three different strengths of the AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, greater increase of SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing image noise and improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.

  5. Image quality and radiation dose of low dose coronary CT angiography in obese patients: Sinogram affirmed iterative reconstruction versus filtered back projection

    International Nuclear Information System (INIS)

    Wang, Rui; Schoepf, U. Joseph; Wu, Runze; Reddy, Ryan P.; Zhang, Chuanchen; Yu, Wei; Liu, Yi; Zhang, Zhaoqi

    2012-01-01

    Purpose: To investigate the image quality and radiation dose of low radiation dose CT coronary angiography (CTCA) using sinogram affirmed iterative reconstruction (SAFIRE) compared with standard dose CTCA using filtered back-projection (FBP) in obese patients. Materials and methods: Seventy-eight consecutive obese patients were randomized into two groups and scanned using a prospectively ECG-triggered step-and-shoot (SAS) CTCA protocol on a dual-source CT scanner. Thirty-nine patients (protocol A) were examined using a routine radiation dose protocol at 120 kV and images were reconstructed with FBP. Thirty-nine patients (protocol B) were examined using a low dose protocol at 100 kV and images were reconstructed with SAFIRE. Two blinded observers independently assessed the image quality of each coronary segment using a 4-point scale (1 = non-diagnostic, 4 = excellent) and measured the objective parameters image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Radiation dose was calculated. Results: The coronary artery image quality scores, image noise, SNR and CNR were not significantly different between protocols A and B (all p > 0.05), with image quality scores of 3.51 ± 0.70 versus 3.55 ± 0.47, respectively. The effective radiation dose was significantly lower in protocol B (4.41 ± 0.83 mSv) than that in protocol A (8.83 ± 1.74 mSv, p < 0.01). Conclusion: Compared with standard dose CTCA using FBP, low dose CTCA using SAFIRE can maintain diagnostic image quality with 50% reduction of radiation dose.

  6. Imaging

    International Nuclear Information System (INIS)

    Kellum, C.D.; Fisher, L.M.; Tegtmeyer, C.J.

    1987-01-01

    This paper examines the advantages of the use of excretory urography for diagnosis. According to the authors, excretory urography remains the basic radiologic examination of the urinary tract and is the foundation for the evaluation of suspected urologic disease. Despite the development of newer diagnostic modalities such as isotope scanning, ultrasonography, CT, and magnetic resonance imaging (MRI), excretory urography has maintained a prominent role in uroradiology. Some indications have been altered and will continue to change with the newer imaging modalities, but the initial evaluation of suspected urinary tract structural abnormalities, hematuria, pyuria, and calculus disease is best performed with excretory urography. The examination is relatively inexpensive and simple to perform, with few contraindications. Excretory urography, when properly performed, can provide valuable information about the renal parenchyma, pelvicalyceal system, ureters, and urinary bladder.

  7. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    Science.gov (United States)

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield units (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
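
    The reported per-energy figures aggregate the per-insert systematic errors (measured minus true CT number) into a root-mean-square value, which can be sketched as:

```python
import numpy as np

def rms_error(measured_hu, true_hu):
    """Root-mean-square of the systematic error (measured - true CT
    number, in HU) over all tissue-equivalent inserts at one energy."""
    err = np.asarray(measured_hu, float) - np.asarray(true_hu, float)
    return float(np.sqrt(np.mean(err ** 2)))
```

    One such value per method and energy level yields the 53/21/29 HU versus 46/7/6 HU comparison quoted above (the insert values themselves are not given in the abstract).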

  8. The Advanced Rapid Imaging and Analysis (ARIA) Project: Status of SAR products for Earthquakes, Floods, Volcanoes and Groundwater-related Subsidence

    Science.gov (United States)

    Owen, S. E.; Yun, S. H.; Hua, H.; Agram, P. S.; Liu, Z.; Sacco, G. F.; Manipon, G.; Linick, J. P.; Fielding, E. J.; Lundgren, P.; Farr, T. G.; Webb, F.; Rosen, P. A.; Simons, M.

    2017-12-01

    The Advanced Rapid Imaging and Analysis (ARIA) project for Natural Hazards is focused on rapidly generating high-level geodetic imaging products and placing them in the hands of the solid earth science and local, national, and international natural hazard communities by providing science product generation, exploration, and delivery capabilities at an operational level. Space-based geodetic measurement techniques including Interferometric Synthetic Aperture Radar (InSAR), differential Global Positioning System, and SAR-based change detection have become critical additions to our toolset for understanding and mapping the damage and deformation caused by earthquakes, volcanic eruptions, floods, landslides, and groundwater extraction. Up until recently, processing of these data sets has been handcrafted for each study or event and has not generated products rapidly and reliably enough for response to natural disasters or for timely analysis of large data sets. The ARIA project, a joint venture co-sponsored by the California Institute of Technology and by NASA through the Jet Propulsion Laboratory, has been capturing the knowledge applied to these responses and building it into an automated infrastructure to generate imaging products in near real-time that can improve situational awareness for disaster response. In addition to supporting the growing science and hazard response communities, the ARIA project has developed the capabilities to provide automated imaging and analysis capabilities necessary to keep up with the influx of raw SAR data from geodetic imaging missions such as ESA's Sentinel-1A/B, now operating with repeat intervals as short as 6 days, and the upcoming NASA NISAR mission. We will present the progress and results we have made on automating the analysis of Sentinel-1A/B SAR data for hazard monitoring and response, with emphasis on recent developments and end user engagement in flood extent mapping and deformation time series for both volcano

  9. The body project 4 all: A pilot randomized controlled trial of a mixed-gender dissonance-based body image program.

    Science.gov (United States)

    Kilpela, Lisa Smith; Blomquist, Kerstin; Verzijl, Christina; Wilfred, Salomé; Beyl, Robbie; Becker, Carolyn Black

    2016-06-01

    The Body Project is a cognitive dissonance-based body image improvement program with ample research support among female samples. More recently, researchers have highlighted the extent of male body dissatisfaction and disordered eating behaviors; however, boys/men have not been included in the majority of body image improvement programs. This study aims to explore the efficacy of a mixed-gender Body Project compared with the historically female-only body image intervention program. Participants included male and female college students (N = 185) across two sites. We randomly assigned women to a mixed-gender modification of the two-session, peer-led Body Project (MG), the two-session, peer-led, female-only (FO) Body Project, or a waitlist control (WL), and men to either MG or WL. Participants completed self-report measures assessing negative affect, appearance-ideal internalization, body satisfaction, and eating disorder pathology at baseline, post-test, and at 2- and 6-month follow-up. Linear mixed effects models were used to estimate the change from baseline over time for each dependent variable across conditions. For women, results were mixed regarding post-intervention improvement compared with WL, and were largely non-significant compared with WL at 6-month follow-up. Alternatively, results indicated that men in MG consistently improved compared with WL through 6-month follow-up on all measures except negative affect and appearance-ideal internalization. Results differed markedly between female and male samples, and were more promising for men than for women. Various explanations are provided, and further research is warranted prior to drawing firm conclusions regarding mixed-gender programming of the Body Project. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:591-602).

  10. Comparison of transaxial source images and 3-plane, thin-slab maximal intensity projection images for the diagnosis of coronary artery stenosis with using ECG-gated cardiac CT

    International Nuclear Information System (INIS)

    Choi, Jin Woo; Seo, Joon Beom; Do, Kyung Hyun

    2006-01-01

    We wanted to compare the transaxial source images with optimized three-plane, thin-slab maximum intensity projection (MIP) images from electrocardiographic (ECG)-gated cardiac CT for their ability to detect hemodynamically significant stenosis (HSS), by performing a receiver operating characteristic (ROC) analysis. Twenty-eight patients with a heart rate less than 66 beats per minute who were undergoing both retrospective ECG-gated cardiac CT and conventional coronary angiography were included in this study. The contrast-enhanced CT scans were obtained with a collimation of 16 x 0.75 mm and a rotation time of 420 msec. The transaxial images were reconstructed at the mid-diastolic phase with a 1-mm slice thickness and a 0.5-mm increment. Using the transaxial images, slab MIP images were created with a 4-mm thickness and a 2-mm increment, covering the entire heart in the horizontal long axis (4-chamber view), the vertical long axis (2-chamber view), and the short axis. The transaxial images and MIP images were independently evaluated for their ability to detect HSS. Conventional coronary angiograms of the same study group served as the standard of reference. Four radiologists were requested to rank each image using a five-point scale (1 = definitely negative, 2 = probably negative, 3 = indeterminate, 4 = probably positive, and 5 = definitely positive) for the presence of HSS; the data were then interpreted using ROC analysis. There was no statistical difference in the area under the ROC curve between the transaxial images and the MIP images for the detection of HSS (0.8375 and 0.8708, respectively; p > 0.05). The mean reading time for the transaxial source images and the MIP images was 116 and 126.5 minutes, respectively. The diagnostic performance of the MIP images for detecting HSS of the coronary arteries is acceptable, and this technique's ability to detect HSS is comparable to that of the transaxial source images.
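
    The thin-slab MIP series described (4-mm slabs at 2-mm increments over 1-mm reconstructions) can be sketched as follows; treating slab thickness and step as slice counts, and the axis convention, are illustrative assumptions:

```python
import numpy as np

def thin_slab_mip(volume, axis, start, thickness):
    """Thin-slab maximum intensity projection: take `thickness` adjacent
    slices along `axis` and keep the brightest voxel along that axis,
    condensing a contrast-filled coronary segment into one image."""
    slab = np.take(volume, np.arange(start, start + thickness), axis=axis)
    return slab.max(axis=axis)

def slab_series(volume, axis, thickness, step):
    """Overlapping slabs covering the volume, e.g. 4 slices thick every
    2 slices when reconstructed slices are 1 mm (4-mm/2-mm in the text)."""
    n = volume.shape[axis]
    return [thin_slab_mip(volume, axis, s, thickness)
            for s in range(0, n - thickness + 1, step)]
```

    Repeating `slab_series` along each of the three anatomical axes gives the three-plane set evaluated against the transaxial sources.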

  11. 3-D Velocity Model of the Coachella Valley, Southern California Based on Explosive Shots from the Salton Seismic Imaging Project

    Science.gov (United States)

    Persaud, P.; Stock, J. M.; Fuis, G. S.; Hole, J. A.; Goldman, M.; Scheirer, D. S.

    2014-12-01

    We have analyzed explosive shot data from the 2011 Salton Seismic Imaging Project (SSIP) across a 2-D seismic array and 5 profiles in the Coachella Valley to produce a 3-D P-wave velocity model that will be used in calculations of strong ground shaking. Accurate maps of seismicity and active faults rely both on detailed geological field mapping and a suitable velocity model to accurately locate earthquakes. Adjoint tomography of an older version of the SCEC 3-D velocity model shows that crustal heterogeneities strongly influence seismic wave propagation from moderate earthquakes (Tape et al., 2010). These authors improve the crustal model and subsequently simulate the details of ground motion at periods of 2 s and longer for hundreds of ray paths. Even with improvements such as the above, the current SCEC velocity model for the Salton Trough does not provide a match of the timing or waveforms of the horizontal S-wave motions, which Wei et al. (2013) interpret as caused by inaccuracies in the shallow velocity structure. They effectively demonstrate that the inclusion of shallow basin structure improves the fit in both travel times and waveforms. Our velocity model benefits from the inclusion of the known locations and times of a subset of 126 shots detonated over a 3-week period during the SSIP. This results in an improved velocity model, particularly in the shallow crust. In addition, one of the main challenges in developing 3-D velocity models is an uneven station-source distribution. To better overcome this challenge, we also include the first arrival times of the SSIP shots at the more widely spaced Southern California Seismic Network (SCSN) in our inversion, since the layout of the SSIP is complementary to the SCSN. References: Tape, C., et al., 2010, Seismic tomography of the Southern California crust based on spectral-element and adjoint methods: Geophysical Journal International, v. 180, no. 1, p. 433-462. 
Wei, S., et al., 2013, Complementary slip distributions

  12. 3D computed tomography using a microfocus X-ray source: Analysis of artifact formation in the reconstructed images using simulated as well as experimental projection data

    International Nuclear Information System (INIS)

    Krimmel, S.; Stephan, J.; Baumann, J.

    2005-01-01

    The scope of this contribution is to identify and to quantify the influence of different parameters on the formation of image artifacts in X-ray computed tomography (CT), resulting, for example, from beam hardening or from a partial lack of information in 3D cone beam CT. In general, the reconstructed image quality depends on a number of acquisition parameters concerning the X-ray source (e.g. X-ray spectrum), the geometrical setup (e.g. cone beam angle), the sample properties (e.g. absorption characteristics) and the detector properties. While it is difficult to distinguish the influence of different effects clearly in experimental projection data, they can be selected individually with the help of simulated projection data by varying the parameter set. The reconstruction of the 3D data set is performed with the filtered back projection algorithm according to Feldkamp, Davis and Kress for experimental as well as for simulated projection data. The experimental data are recorded with an industrial microfocus CT system which features a focal spot size of a few micrometers and uses a digital flat panel detector for data acquisition.
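
    One artifact source named above, beam hardening, follows directly from polychromatic attenuation: lower-energy spectral components are absorbed preferentially, so the effective attenuation coefficient falls as the traversed thickness grows. A minimal sketch with an illustrative two-component spectrum (weights and coefficients are made up for the example):

```python
import math

def transmitted_fraction(weights, mus, thickness):
    """Polychromatic transmission: each spectral component i with weight
    w_i attenuates as exp(-mu_i * t); the detected fraction is the
    normalized weighted sum over components."""
    total = sum(weights)
    return sum(w * math.exp(-mu * thickness)
               for w, mu in zip(weights, mus)) / total

def effective_mu(weights, mus, thickness):
    """Effective attenuation coefficient -ln(I/I0)/t. For a polychromatic
    beam it decreases with thickness (the beam 'hardens'), which is what
    produces cupping-type artifacts in FBP-reconstructed images."""
    return -math.log(transmitted_fraction(weights, mus, thickness)) / thickness

w = [0.5, 0.5]      # illustrative spectral weights (soft / hard component)
mu = [1.0, 0.2]     # illustrative attenuation coefficients per component
```

    In a simulation study, feeding such thickness-dependent line integrals into a monochromatic reconstruction model is one way to isolate the beam-hardening contribution from the other parameters.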

  13. Patient-specific 3D models created by 3D imaging system or bi-planar imaging coupled with Moiré-Fringe projections: a comparative study of accuracy and reliability on spinal curvatures and vertebral rotation data.

    Science.gov (United States)

    Hocquelet, Arnaud; Cornelis, François; Jirot, Anna; Castaings, Laurent; de Sèze, Mathieu; Hauger, Olivier

    2016-10-01

    The aim of this study is to compare the accuracy and reliability of spinal curvature and vertebral rotation data based on patient-specific 3D models created by a 3D imaging system or by bi-planar imaging coupled with Moiré-Fringe projections. Sixty-two consecutive patients from a single institution were prospectively included. For each patient, frontal and sagittal calibrated low-dose bi-planar X-rays were performed and coupled simultaneously with an optical Moiré back surface-based technology. The 3D reconstructions of spine and pelvis were performed independently by one radiologist and one technician in radiology using two different semi-automatic methods: a 3D radio-imaging system (method 1) or bi-planar imaging coupled with Moiré projections (method 2). The two methods were compared using Bland-Altman analysis, and reliability was assessed using the intraclass correlation coefficient (ICC). ICC showed good to very good agreement. Between the two techniques, the maximum 95% prediction limit was -4.9° for the measurements of spinal coronal curves and less than 5° for the other parameters. Inter-rater reliability was excellent for all parameters across both methods, except for axial rotation with method 2, for which the ICC was fair. Method 1 had a shorter reconstruction time than method 2 for both readers (13.4 vs. 20.7 min and 10.6 vs. 13.9 min; p = 0.0001). While a lower accuracy was observed for the evaluation of axial rotation, bi-planar imaging coupled with Moiré-Fringe projections may be an accurate and reliable tool to perform 3D reconstructions of the spine and pelvis.
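
    The Bland-Altman comparison reported above (bias and 95% limits of agreement between the two methods' paired measurements) can be sketched as:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman agreement between two measurement methods: the bias is
    the mean of the paired differences, and the 95% limits of agreement
    are bias +/- 1.96 * sample SD of the differences."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = float(d.mean())
    sd = float(d.std(ddof=1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    Applied per parameter (e.g., coronal Cobb angle by method 1 vs. method 2 in each patient), narrow limits indicate the methods can be used interchangeably.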

  14. A simple method for 3D lesion reconstruction from two projected angiographic images: implementation to a stereotactic radiotherapy treatment planning system

    International Nuclear Information System (INIS)

    Theodorou, K.; Kappas, C.; Gaboriaud, G.; Mazal, A.D.; Petrascu, O.; Rosenwald, J.C.

    1997-01-01

    Introduction: The most commonly used imaging modality for diagnosis and localisation of arteriovenous malformations (AVMs) treated with stereotactic radiotherapy is angiography. The fact that angiographic images are projected images imposes the need for a 3D reconstruction of the lesion. This, together with the 3D head anatomy from CT images, could provide all the necessary information for stereotactic treatment planning. We have developed a method to combine the complementary information provided by angiography and 2D computerized tomography, matching the reconstructed AVM structure with the reconstructed head of the patient. Materials and methods: The ISIS treatment planning system, developed at the Institut Curie, has been used for image acquisition, stereotactic localisation and 3D visualisation. A series of CT slices is introduced into the system, as well as two orthogonal angiographic projected images of the lesion. A simple computer program has been developed for the 3D reconstruction of the lesion and for the superposition of the target contour on the CT slices of the head. Results and conclusions: In our approach, we consider that the reconstruction can be made if the AVM is approximated by a number of adjacent ellipses. We assessed the method by comparing the values of the reconstructed and the actual volumes of the target using linear regression analysis. For treatment planning purposes, we overlapped the reconstructed AVM on the CT slices of the head. To our knowledge, the majority of commercial stereotactic radiotherapy treatment planning systems do not provide this feature. The implementation of the method into the ISIS TPS shows that we can reliably approximate and visualize the target volume.
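
    The adjacent-ellipse idea can be sketched in a few lines: at each axial level, treat the silhouette widths in the two orthogonal views as the two axes of an ellipse and stack the elliptical areas. This is a toy version of the stated approximation; the binary-mask input format and function name are assumptions:

```python
import numpy as np

def volume_from_two_projections(mask_ap, mask_lat, dz=1.0):
    """Approximate a lesion volume from two orthogonal binary silhouettes.

    mask_ap, mask_lat: (n_levels, width) binary masks sharing the same
    cranio-caudal axis (rows). Each level is modelled as an ellipse whose
    semi-axes are half the silhouette widths in the two views.
    """
    a = mask_ap.sum(axis=1) / 2.0    # semi-axis from the AP view
    b = mask_lat.sum(axis=1) / 2.0   # semi-axis from the lateral view
    return float(np.sum(np.pi * a * b) * dz)
```

    For a sphere, whose two projections are identical discs, the stacked-ellipse estimate recovers the analytic volume closely, which mirrors the linear-regression validation described above.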

  15. The land-use projections and resulting emissions in the IPCC SRES scenarios as simulated by the IMAGE 2.2 model

    International Nuclear Information System (INIS)

    Strengers, B.; Eickhout, B.; De Vries, B.; Bouwman, L.; Leemans, R.

    2005-01-01

    The Intergovernmental Panel on Climate Change (IPCC) developed a new series of emission scenarios (SRES). Six global models were used to develop SRES, but most focused primarily on energy- and industry-related emissions. Land-use emissions were covered by only three models, of which IMAGE included the most detailed, spatially explicit description of global land-use and land-cover dynamics. To complement their calculations, the other models used land-use emissions from AIM and IMAGE, leading to inconsistent estimates. The representation of land-use emissions in SRES is therefore poor. This paper presents details on the IMAGE 2.2 land-use results to complement the SRES report. The IMAGE SRES scenarios are based on the original IPCC SRES assumptions and narratives using the latest version of IMAGE (IMAGE 2.2). IMAGE provides comprehensive emission estimates because not only the emissions themselves but also the resulting atmospheric concentrations, climate change and impacts are addressed. Additionally, in SRES the scenario assumptions were only presented and quantified for 4 'macro-regions'. The IMAGE 2.2 SRES implementation has been extended to 17 regions. We focus on land-use aspects and show that land-related emissions depend not only on population projections but also on the temporal and spatial dynamics of different land-related sources and sinks of greenhouse gases. We also illustrate the importance of systemic feedbacks and interactions in the climate system that influence land-use emissions, such as deforestation and forest regrowth, soil respiration and CO2 fertilisation.

  16. The development of MML (Medical Markup Language) version 3.0 as a medical document exchange format for HL7 messages.

    Science.gov (United States)

    Guo, Jinqiu; Takada, Akira; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki

    2004-12-01

    Medical Markup Language (MML), as a set of standards, has been developed over the last 8 years to allow the exchange of medical data between different medical information providers. MML Version 2.21 used XML as a metalanguage and was announced in 1999. In 2001, MML was updated to Version 2.3, which contained 12 modules. The latest version--Version 3.0--is based on the HL7 Clinical Document Architecture (CDA). During the development of this new version, the structure of MML Version 2.3 was analyzed, subdivided into several categories, and redefined so the information defined in MML could be described in HL7 CDA Level One. As a result of this development, it has become possible to exchange MML Version 3.0 medical documents via HL7 messages.

  17. CT imaging of bronchus related to solitary pulmonary lesion: comparison of minimum intensity projection and multi-planar reconstruction

    International Nuclear Information System (INIS)

    Zhou Jun; Shan Fei; Zhang Zhiyong; Yang Shan; Zhang Xingwei; Wu Dong; Zhan Songhua

    2011-01-01

    Objective: To investigate the clinical value of 64-slice computed tomography with minimum intensity projection (MinIP) and multi-planar reconstruction (MPR) for imaging the bronchus related to a solitary pulmonary lesion (SPL). Methods: Seventy-five subjects with solitary pulmonary lesions underwent chest 64-slice CT and their bronchi were analyzed retrospectively. All thin-section (0.625 mm) images were reconstructed with MPR and MinIP into images of 1, 2, 3, and 5 mm thickness with a 1 mm gap, in two orthogonal planes along the long axis of the bronchus related to the SPL. The image quality of the four series of MinIP and MPR images was evaluated in terms of bronchus visibility and masking by pulmonary vessels. One-way ANOVA with Bonferroni correction and the interclass correlation coefficient were used in the statistical analysis. Results: (1) The mean scores for display of the bronchi on MinIP images of the four series (4.85, 4.77 and 4.84, 4.63 and 4.67, 4.25 and 4.28, for 1, 2, 3, and 5 mm thickness, respectively) and on MPR images of 1 or 2 mm thickness (4.77 and 4.76, 4.04 and 4.27, for 1 and 2 mm thickness, respectively) were good or excellent. MPR images of 1 mm thickness and MinIP images of 1-3 mm thickness showed no significant differences (t=0.318, P>0.05 for all), but they were superior to MinIP images of 5 mm thickness (t=6.318 and 6.610, P<0.05). (2) The suppression of pulmonary vascular markings on MinIP images improved with increasing slice thickness (F=45.312 and 40.415, P<0.01). The mean scores of MinIP images of 3 mm and 5 mm thickness (4.67 and 4.64, 5.00 and 4.97, for 3 and 5 mm thickness, respectively) were good or excellent, whereas MinIP images of 2 mm thickness were just acceptable. Conclusion: MinIP images of 3 mm thickness may display the bronchus related to an SPL most clearly. (authors)
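
    The slab MinIP operation itself is simple: over each slab of contiguous thin sections, keep the per-pixel minimum, so low-attenuation airways survive while brighter vessels are progressively suppressed as the slab thickens. A minimal sketch; the function name and sliding-slab convention are assumptions, not a vendor implementation:

```python
import numpy as np

def slab_minip(volume, axis, slab_voxels):
    """Minimum intensity projection over sliding slabs along one axis.

    volume: 3D array of CT values (e.g. HU). Air-filled bronchi are dark,
    so the minimum over a slab keeps the airway lumen visible.
    """
    v = np.moveaxis(volume, axis, 0)
    n = v.shape[0]
    slabs = [v[i:i + slab_voxels].min(axis=0)
             for i in range(n - slab_voxels + 1)]
    return np.stack(slabs)
```

    With 0.625 mm sections, a 3 mm slab corresponds to roughly 5 voxels along the projection axis.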

  18. Estimation of error in maximal intensity projection-based internal target volume of lung tumors: a simulation and comparison study using dynamic magnetic resonance imaging.

    Science.gov (United States)

    Cai, Jing; Read, Paul W; Baisden, Joseph M; Larner, James M; Benedict, Stanley H; Sheng, Ke

    2007-11-01

    To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (ε), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (ν). Maximal intensity projection-based ITAs from RedCAM were comparatively smaller than those from dMRI in both phantom studies (ε = -21.64% ± 8.23%) and lung tumor patient studies (ε = -20.31% ± 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (ε = -5.13ν - 6.71, r² = 0.76) with the subjects' respiratory variability. Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.
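
    The underestimation mechanism can be reproduced in a few lines: the MIP envelope of a moving target is the union of its positions over the sampled frames, so coarse temporal sampling (as in retrospectively re-sorted 4D-CT) misses intermediate positions and shrinks the internal target area. A toy sketch, with illustrative names and threshold:

```python
import numpy as np

def mip(frames):
    """Maximum intensity projection across a stack of 2D frames."""
    return np.max(np.stack(frames), axis=0)

def internal_target_area(frames, threshold=50.0):
    """Pixel area of the target envelope on the MIP image."""
    return int(np.count_nonzero(mip(frames) > threshold))
```

    Dropping frames from a densely sampled motion sequence can only remove pixels from the envelope, which is why the RedCAM (4D-CT-like) ITAs come out systematically smaller than the dMRI ones.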

  19. Synthetic imaging diagnosis of valvular heart diseases (especially the mitral valve), mainly using angled projections in cineangiocardiographic study

    International Nuclear Information System (INIS)

    Katabuchi, Tetsuro; Wakamatsu, Takashi; Nakayama, Kazuhiko

    1981-01-01

    Recently, owing to the development of high-output X-ray tubes, high-resolution image intensifiers and mobile U- or C-arms, the application of cinegraphy to angiocardiographic studies has increased remarkably. Surgical treatment of heart diseases has advanced greatly in the last few years, so that precise anatomical and functional diagnosis is demanded before operation. In this paper, cineangiocardiography and echocardiography are discussed as the most useful investigations of valvular heart diseases, together with some problems of imaging techniques; finally, the newest diagnostic imaging examination methods are introduced. (author)

  20. Ground-based multi-station spectroscopic imaging with ALIS. - Scientific highlights, project status and future prospects

    Science.gov (United States)

    Brändström; Gustavsson, Björn; Pellinen-Wannberg, Asta; Sandahl, Ingrid; Sergienko, Tima; Steen, Ake

    2005-08-01

    The Auroral Large Imaging System (ALIS) was first proposed at the ESA-PAC meeting in Lahnstein in 1989. The first spectroscopic imaging station became operational in 1994, and since then up to six stations have been in simultaneous operation. Each station has a scientific-grade CCD detector and a six-position filter-wheel for narrow-band interference filters. The field-of-view is around 70°. Each imager is mounted in a positioning system, enabling imaging of a common volume from several sites, which allows triangulation and tomography. Raw data from ALIS are freely available at http://alis.irf.se, and ALIS is open for scientific collaboration. ALIS made the first unambiguous observations of radio-induced optical emissions at high latitudes, and detected water in a Leonid meteor trail. Both rocket and satellite coordination are considered for future observations with ALIS.

  1. HELICoiD project: a new use of hyperspectral imaging for brain cancer detection in real-time during neurosurgical operations

    Science.gov (United States)

    Fabelo, Himar; Ortega, Samuel; Kabwama, Silvester; Callico, Gustavo M.; Bulters, Diederik; Szolna, Adam; Pineiro, Juan F.; Sarmiento, Roberto

    2016-05-01

    Hyperspectral images provide large amounts of information about the surface of the scene captured by the sensor. Using this information and a set of complex classification algorithms, it is possible to determine which material or substance is located in each pixel. The HELICoiD (HypErspectraL Imaging Cancer Detection) project is a European FET project whose goal is to develop a demonstrator capable of discriminating, with high precision and in real-time, between normal and tumour tissues during neurosurgical operations. This demonstrator could help neurosurgeons in the process of brain tumour resection, avoiding both the excessive extraction of normal tissue and the unintentional leaving of small tumour remnants. Such precise delimitation of the tumour boundaries will improve the results of the surgery. The HELICoiD demonstrator comprises two hyperspectral cameras from Headwall: the first covers the spectral range from 400 to 1000 nm (visible and near infrared) and the second the range from 900 to 1700 nm (near infrared). The demonstrator also includes an illumination system that covers the spectral range from 400 nm to 2200 nm. A data processing unit manages all the parts of the demonstrator, and a high-performance platform accelerates the hyperspectral image classification process. Each of these elements is installed in a customized structure specially designed for surgical environments. Preliminary results of the classification algorithms offer high accuracy (over 95%) in the discrimination between normal and tumour tissues.
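
    The per-pixel classification idea can be illustrated with a nearest-centroid toy: label each pixel by the training class whose mean spectrum is closest to the pixel's spectrum. The actual HELICoiD pipeline uses far more sophisticated supervised and spatial algorithms; all names here are illustrative:

```python
import numpy as np

def train_centroids(spectra, labels):
    """Mean spectrum per class from labelled training pixels."""
    classes = np.unique(labels)
    centroids = np.stack([spectra[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def classify_cube(cube, classes, centroids):
    """Label each pixel of an (H, W, bands) cube by its nearest class centroid."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    # Euclidean distance from every pixel spectrum to every class centroid
    d = np.linalg.norm(flat[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)].reshape(h, w)
```

    Even this toy is embarrassingly parallel across pixels, which is why a high-performance platform can push such per-pixel classification towards the real-time operation the demonstrator targets.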

  2. “Una imagen real de la Argentina”: Image-based Counter-narratives through the Walking Archive and the Project Hegemony.

    Directory of Open Access Journals (Sweden)

    Elena Rosauro

    2013-07-01

    After the 1978 World Cup in Argentina, José Alfredo Martínez de Hoz, then Minister of Economy under the military dictatorship, promoted an initiative to publish an article in Time Magazine in which "a real image of Argentina would be given", in the words of businessman Carlos Pedro Blaquier, who was closely tied to the military regime. But who defines the extent of reality of an image? Who makes these "real images" and how do they articulate within the construction of national history and identity? In this article, departing from the construction of the past in Argentina through the "real images" produced within economic and artistic institutions, we examine the image-based counter-narratives propounded by Eduardo Molinari through his Walking Archive and the collaborative project Hegemony. These two contemporary artistic projects focus mainly on the last decades of the 20th century in order to give visibility to the existing relations among economic groups, the military, politicians and the cultural system in Argentina. These relations have provided legitimacy to certain processes of construction of "real" narratives and also to certain artistic practices, while rejecting others.

  3. "Anatomy and imaging": 10 years of experience with an interdisciplinary teaching project in preclinical medical education - from an elective to a curricular course.

    Science.gov (United States)

    Schober, A; Pieper, C C; Schmidt, R; Wittkowski, W

    2014-05-01

    Presentation of an interdisciplinary, interactive, tutor-based preclinical teaching project called "Anatomy and Imaging". Experience report, analysis of evaluation results and selective literature review. From 2001 to 2012, 618 students took the basic course (4 periods per week throughout the semester) and 316 took the advanced course (2 periods per week). We reviewed 557 (return rate 90.1 %) and 292 (92.4 %) completed evaluation forms of the basic and the advanced course. Results showed overall high satisfaction with the courses (1.33 and 1.56, respectively, on a 5-point Likert scale). The recognizability of the relevance of the course content for medical training, the promotion of the interest in medicine and the quality of the student tutors were evaluated especially positively. The "Anatomy and Imaging" teaching project is a successful concept for integrating medical imaging into the preclinical stage of medical education. The course was offered as part of the curriculum in 2013 for the first time. "Anatomia in mortuis" and "Anatomia in vivo" are not regarded as rivaling entities in the delivery of knowledge, but as complementary methods. © Georg Thieme Verlag KG Stuttgart · New York.

  4. The Mircen project, neuro-degenerative disease: mechanisms, therapeutics and imaging research Unit URA Cea Cnrs 2210

    International Nuclear Information System (INIS)

    Hantraye, Ph.

    2006-01-01

    During the post-genomic era, significant advances in our understanding of the molecular basis of disease have been made. The power of functional and molecular imaging in translating this knowledge into effective therapy is now being more and more recognized. Thus, molecular imaging plays a vital role in the early identific