WorldWideScience

Sample records for image markup project

  1. The caBIG annotation and image Markup project.

    Science.gov (United States)

    Channin, David S; Mongkolwat, Pattanasak; Kleper, Vladimir; Sepukar, Kastubh; Rubin, Daniel L

    2010-04-01

    Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of metadata about whom, where, and how the image was acquired, it says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup consists of the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with both of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.
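The annotation/markup distinction described above lends itself to a simple serialized structure: an annotation carries the meaning, a markup carries the graphics. The sketch below builds a minimal AIM-style XML document in Python; the element names (ImageAnnotation, Finding, Markup, Point) are illustrative inventions, not the actual AIM schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified AIM-style document. Element names are
# illustrative only -- they are NOT the real AIM schema.
def build_annotation(finding: str, shape: str, coords: list) -> ET.Element:
    root = ET.Element("ImageAnnotation")
    # The annotation: descriptive information about the pixel data.
    ET.SubElement(root, "Finding").text = finding
    # The markup: graphical symbols placed over the image.
    markup = ET.SubElement(root, "Markup", type=shape)
    for x, y in coords:
        ET.SubElement(markup, "Point", x=str(x), y=str(y))
    return root

doc = build_annotation("suspicious nodule", "polyline", [(10, 20), (30, 40)])
xml_text = ET.tostring(doc, encoding="unicode")
```

Serializing both parts in one machine-readable document is the point: the annotation is no longer a verbal report, and the markup is no longer a proprietary overlay.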

  2. Managing and Querying Image Annotation and Markup in XML

    Science.gov (United States)

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standards-based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of the AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through a native extension of the XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid. PMID:21218167
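The abstract describes querying AIM data with an XQuery extension; XQuery engines are outside the Python standard library, but the flavor of such annotation queries can be sketched with ElementTree's XPath subset. The element names here are hypothetical, not the real AIM model.

```python
import xml.etree.ElementTree as ET

# Illustrative AIM-like fragment; element names are hypothetical.
xml_doc = """
<annotations>
  <annotation id="a1"><finding>nodule</finding><size>12</size></annotation>
  <annotation id="a2"><finding>cyst</finding><size>5</size></annotation>
</annotations>
"""

root = ET.fromstring(xml_doc)
# Query in the spirit of "find all annotations of a given finding":
nodules = [a.get("id") for a in root.findall("annotation")
           if a.findtext("finding") == "nodule"]
```

A native XML database would push such predicates into its query engine rather than scan in application code, which is precisely the performance concern the paper addresses.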

  4. Application of whole slide image markup and annotation for pathologist knowledge capture.

    Science.gov (United States)

    Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H

    2013-01-01

    The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μm to less than 4 μm in the x-axis and from 17 μm to 6 μm in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
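Trilateration as used above recovers a markup point's position from its distances to fixed reference points, so the same annotation can be re-placed on a different scan of the same slide. A minimal 2D sketch, assuming three non-collinear reference points (all coordinates are invented):

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Recover (x, y) from distances r1..r3 to known points p1..p3.
    Subtracting the circle equations pairwise yields two linear equations."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # nonzero when the reference points are non-collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A markup point at (3, 4), with reference points at slide landmarks:
p1, p2, p3 = (0, 0), (10, 0), (0, 10)
target = (3.0, 4.0)
dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
x, y = trilaterate(p1, p2, p3, dist(p1, target), dist(p2, target), dist(p3, target))
```

Storing only the three distances per markup point makes the placement independent of any one scan's pixel grid, which is what lets the metadata file survive without the original WSI.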

  5. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel

    2014-12-01

    Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institutes of Health's (NIH) National Cancer Institute's (NCI) cancer Biomedical Informatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.

  6. Informatics in radiology: An open-source and open-access cancer biomedical informatics grid annotation and image markup template builder.

    Science.gov (United States)

    Mongkolwat, Pattanasak; Channin, David S; Kleper, Vladimir; Rubin, Daniel L

    2012-01-01

    In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and image markup (AIM), a project supported by the National Cancer Institute's cancer biomedical informatics grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.

  7. Markups and Exporting Behavior

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic Michel Patrick

    2012-01-01

    In this paper, we develop a method to estimate markups using plant-level production data. Our approach relies on cost-minimizing producers and the existence of at least one variable input of production. The suggested empirical framework relies on the estimation of a production function and provides estimates of plant-level markups without specifying how firms compete in the product market. We rely on our method to explore the relationship between markups and export behavior. We find that markups are estimated significantly higher when controlling for unobserved productivity; that exporters charge, on average, higher markups; and that markups increase upon export entry.
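The production-function approach described above yields, under cost minimization, a markup equal to a variable input's output elasticity divided by that input's share of revenue. A toy sketch of the formula, with invented numbers:

```python
# Markup from cost minimization (the De Loecker & Warzynski insight):
#   markup = output elasticity of a variable input / its revenue share.
# All figures below are invented for illustration.
def markup(output_elasticity: float, expenditure: float, revenue: float) -> float:
    revenue_share = expenditure / revenue
    return output_elasticity / revenue_share

# A plant spending 40 on materials against revenue of 100, with an
# estimated materials elasticity of 0.6, implies price = 1.5 x marginal cost.
mu = markup(0.6, 40.0, 100.0)
```

The hard econometric work in the paper is estimating the elasticity while controlling for unobserved productivity; the final markup computation itself is this one ratio.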

  8. A Leaner, Meaner Markup Language.

    Science.gov (United States)

    Online & CD-ROM Review, 1997

    1997-01-01

    In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of Standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…

  9. Treatment of Markup in Statistical Machine Translation

    OpenAIRE

    Müller, Mathias

    2017-01-01

    We present work on handling XML markup in Statistical Machine Translation (SMT). The methods we propose can be used to effectively preserve markup (for instance inline formatting or structure) and to place markup correctly in a machine-translated segment. We evaluate our approaches with parallel data that naturally contains markup or where markup was inserted to create synthetic examples. In our experiments, hybrid reinsertion has proven the most accurate method to handle markup, while alignm...

  10. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

    We derive an estimating equation to estimate markups using the insight of Hall (1986) and the control function approach of Olley and Pakes (1996). We rely on our method to explore the relationship between markups and export behavior using plant-level data. We find significantly higher markups when we control for unobserved productivity shocks. Furthermore, we find significantly higher markups for exporting firms and present new evidence on markup-export status dynamics. More specifically, we find that firms' markups significantly increase (decrease) after entering (exiting) export markets. We see these results as a first step in opening up the productivity-export black box, and provide a potential explanation for the big measured productivity premia for firms entering export markets.

  11. LOG2MARKUP: State module to transform a Stata text log into a markup document

    DEFF Research Database (Denmark)

    2016-01-01

    log2markup extracts parts of the text version of the Stata log command output and transforms the logfile into a markup-based document with the same name, but with extension markup (or otherwise specified in option extension) instead of log. The author usually uses markdown for writing documents. However…

  12. Changes in latent fingerprint examiners' markup between analysis and comparison.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2015-02-01

    After the initial analysis of a latent print, an examiner will sometimes revise the assessment during comparison with an exemplar. Changes between analysis and comparison may indicate that the initial analysis of the latent was inadequate, or that confirmation bias may have affected the comparison. 170 volunteer latent print examiners, each randomly assigned 22 pairs of prints from a pool of 320 total pairs, provided detailed markup documenting their interpretations of the prints and the bases for their comparison conclusions. We describe changes in value assessments and markup of features and clarity. When examiners individualized, they almost always added or deleted minutiae (90.3% of individualizations); every examiner revised at least some markups. For inconclusive and exclusion determinations, changes were less common, and features were added more frequently when the image pair was mated (same source). Even when individualizations were based on eight or fewer corresponding minutiae, in most cases some of those minutiae had been added during comparison. One erroneous individualization was observed: the markup changes were notably extreme, and almost all of the corresponding minutiae had been added during comparison. Latents assessed to be of value for exclusion only (VEO) during analysis were often individualized when compared to a mated exemplar (26%); in our previous work, where examiners were not required to provide markup of features, VEO individualizations were much less common (1.8%). Published by Elsevier Ireland Ltd.

  13. The Behavior Markup Language: Recent Developments and Challenges

    NARCIS (Netherlands)

    Vilhjalmsson, Hannes; Cantelmo, Nathan; Cassell, Justine; Chafai, Nicholas E.; Kipp, Michael; Kopp, Stefan; Mancini, Maurizio; Marsella, Stacy; Marshall, Andrew N.; Pelachaud, Catherine; Ruttkay, Z.M.; Thorisson, Kristinn R.; van Welbergen, H.; van der Werf, Rick J.; Pelachaud, Catherine; Martin, Jean-Claude; Andre, Elisabeth; Collet, Gerard; Karpouzis, Kostas; Pele, Danielle

    2007-01-01

    Since the beginning of the SAIBA effort to unify key interfaces in the multi-modal behavior generation process, the Behavior Markup Language (BML) has both gained ground as an important component in many projects worldwide, and continues to undergo further refinement. This paper reports on the

  14. The geometry description markup language

    International Nuclear Information System (INIS)

    Chytracek, R.

    2001-01-01

    Currently, a lot of effort is being put on designing complex detectors. A number of simulation and reconstruction frameworks and applications have been developed with the aim to make this job easier. A very important role in this activity is played by the geometry description of the detector apparatus layout and its working environment. However, no real common approach to represent geometry data is available and such data can be found in various forms starting from custom semi-structured text files, source code (C/C++/FORTRAN), to XML and database solutions. The XML (Extensible Markup Language) has proven to provide an interesting approach for describing detector geometries, with several different but incompatible XML-based solutions existing. Therefore, interoperability and geometry data exchange among different frameworks is not possible at present. The author introduces a markup language for geometry descriptions. Its aim is to define a common approach for sharing and exchanging of geometry description data. Its requirements and design have been driven by experience and user feedback from existing projects which have their geometry description in XML

  15. Hospital markup and operation outcomes in the United States.

    Science.gov (United States)

    Gani, Faiz; Ejaz, Aslam; Makary, Martin A; Pawlik, Timothy M

    2016-07-01

    Although the price hospitals charge for operations has broad financial implications, hospital pricing is not subject to regulation. We sought to characterize national variation in hospital price markup for major cardiothoracic and gastrointestinal operations and to evaluate perioperative outcomes of hospitals relative to hospital price markup. All hospitals in which a patient underwent a cardiothoracic or gastrointestinal procedure were identified using the Nationwide Inpatient Sample for 2012. Markup ratios (ratio of charges to costs) for the total cost of hospitalization were compared across hospitals. Risk-adjusted morbidity, failure-to-rescue, and mortality were calculated using multivariable, hierarchical logistic regression. Among the 3,498 hospitals identified, markup ratios ranged from 0.5-12.2, with a median markup ratio of 2.8 (interquartile range 2.7-3.9). For the 888 hospitals with extreme markup (greatest markup ratio quartile: markup ratio >3.9), the median markup ratio was 4.9 (interquartile range 4.3-6.0), with 10% of these hospitals billing more than 7 times the Medicare-allowable costs (markup ratio ≥7.25). Extreme markup hospitals were more often large (46.3% vs 33.8%), and for-profit hospitals more often had an extreme markup ratio compared with 19.3% (n = 452) of nonprofit and 6.8% (n = 35) of government hospitals. Perioperative morbidity was higher at extreme markup hospitals (32.7% vs 26.4%). There is wide variation in hospital markup for cardiothoracic and gastrointestinal procedures, with approximately a quarter of hospital charges being 4 times greater than the actual cost of hospitalization. Hospitals with an extreme markup had greater perioperative morbidity. Copyright © 2016 Elsevier Inc. All rights reserved.
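The markup ratio defined in this abstract (total charges divided by total costs) is straightforward to compute. A small sketch with invented hospital figures, using the paper's >3.9 cutoff for the extreme quartile:

```python
import statistics

# Markup ratio as defined in the abstract: total charges / total costs.
# Hospital figures below are invented for illustration.
hospitals = {"A": (28_000, 10_000), "B": (50_000, 10_000),
             "C": (27_000, 10_000), "D": (80_000, 10_000)}

ratios = {h: charges / costs for h, (charges, costs) in hospitals.items()}
median_ratio = statistics.median(ratios.values())
# Flag hospitals above the "extreme" threshold the paper reports (>3.9).
extreme = sorted(h for h, r in ratios.items() if r > 3.9)
```

In the real study this ratio was computed per hospitalization from NIS charge and cost data before being aggregated per hospital.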

  16. Definition of an XML markup language for clinical laboratory procedures and comparison with generic XML markup.

    Science.gov (United States)

    Saadawi, Gilan M; Harrison, James H

    2006-10-01

    Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.

  17. Endogenous Markups, Firm Productivity and International Trade:

    DEFF Research Database (Denmark)

    Bellone, Flora; Musso, Patrick; Nesta, Lionel

    In this paper, we test key micro-level theoretical predictions of Melitz and Ottaviano (MO) (2008), a model of international trade with heterogeneous firms and endogenous markups. At the firm level, the MO model predicts that: 1) firm markups are negatively related to domestic market size; 2) markups are positively related to firm productivity; 3) markups are negatively related to import penetration; 4) markups are positively related to firm export intensity, and markups are higher on the export market than on the domestic ones in the presence of trade barriers and/or if competitors on the export market are less efficient than competitors on the domestic market. We estimate micro-level price cost margins (PCMs) using firm-level data, extending the techniques developed by Hall (1986, 1988) and extended by Domowitz et al. (1988) and Roeger (1995), for the French manufacturing industry from…

  18. Wine Price Markup in California Restaurants

    OpenAIRE

    Amspacher, William

    2011-01-01

    The study quantifies the relationship between retail wine price and restaurant mark-up. Ordinary Least Squares regressions were run to estimate how restaurant mark-up responded to retail price. Separate regressions were run for white wine, red wine, and both red and white combined. Both slope and intercept coefficients for each of these regressions were highly significant and indicated the expected inverse relationship between retail price and mark-up.

  19. Markups and Firm-Level Export Status

    DEFF Research Database (Denmark)

    De Loecker, Jan; Warzynski, Frederic

    We explore the relationship between markups and export behavior using plant-level data. We find that i) markups are estimated significantly higher when controlling for unobserved productivity, ii) exporters charge on average higher markups, and iii) firms' markups increase (decrease) upon export entry (exit). We see these findings as a first step in opening up the productivity-export black box, and provide a potential explanation for the big measured productivity premia for firms entering export markets.

  20. TEI Standoff Markup - A work in progress

    NARCIS (Netherlands)

    Spadini, E.; Turska, Magdalena; Broughton, Misha

    2015-01-01

    Markup is said to be standoff, or external, when the markup data is placed outside of the text it is meant to tag. One of the most widely recognized limitations of inline XML markup is its inability to cope with element overlap; standoff has been considered as a possible solution to…
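The overlap problem motivating standoff markup can be made concrete: standoff annotations reference character offsets in the text, so overlapping spans, which inline XML cannot express as properly nested elements, pose no difficulty. A minimal sketch (the tags and offsets are invented):

```python
# Standoff markup keeps annotations outside the text, addressing it by
# character offset. Overlapping spans are unproblematic, unlike inline XML,
# where <a>...<b>...</a>...</b> is not well-formed.
text = "markup data is placed outside of the text"

# Each annotation: (start, end, tag); offsets are Python slice indices.
standoff = [
    (0, 11, "subject"),    # "markup data"
    (7, 21, "predicate"),  # "data is placed" -- overlaps the first span
]

def extract(text: str, start: int, end: int) -> str:
    return text[start:end]

spans = [extract(text, s, e) for s, e, _ in standoff]
```

The cost of standoff, which the TEI work discusses, is fragility: any edit to the base text invalidates the stored offsets.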

  1. XML/TEI Stand-off Markup. One step beyond.

    NARCIS (Netherlands)

    Spadini, E.; Turska, Magdalena

    2018-01-01

    Stand-off markup is widely considered as a possible solution for overcoming the limitation of inline XML markup, primarily dealing with multiple overlapping hierarchies. Considering previous contributions on the subject and implementations of stand-off markup, we propose a new TEI-based model for

  2. Chemical Markup, XML and the World-Wide Web. 8. Polymer Markup Language.

    Science.gov (United States)

    Adams, Nico; Winter, Jerry; Murray-Rust, Peter; Rzepa, Henry S

    2008-11-01

    Polymers are among the most important classes of materials but are only inadequately supported by modern informatics. The paper discusses the reasons why polymer informatics is considerably more challenging than small molecule informatics and develops a vision for the computer-aided design of polymers, based on modern semantic web technologies. The paper then discusses the development of Polymer Markup Language (PML). PML is an extensible language, designed to support the (structural) representation of polymers and polymer-related information. PML closely interoperates with Chemical Markup Language (CML) and overcomes a number of the previously identified challenges.

  3. Percentage Retail Mark-Ups

    OpenAIRE

    Thomas von Ungern-Sternberg

    1999-01-01

    A common assumption in the literature on the double marginalization problem is that the retailer can set his mark-up only in the second stage of the game after the producer has moved. To the extent that the sequence of moves is designed to reflect the relative bargaining power of the two parties it is just as plausible to let the retailer move first. Furthermore, retailers frequently calculate their selling prices by adding a percentage mark-up to their wholesale prices. This allows a retaile...

  4. Descriptive markup languages and the development of digital humanities

    Directory of Open Access Journals (Sweden)

    Boris Bosančić

    2012-11-01

    The paper discusses the role of descriptive markup languages in the development of digital humanities, a new research discipline within the social sciences and humanities that focuses on the use of computers in research. A chronological review of the development of digital humanities, and then of descriptive markup languages, is presented through several developmental stages. It is shown that the development of digital humanities since the mid-1980s and the appearance of SGML, the markup language that was the foundation of TEI, a key standard for the encoding and exchange of humanities texts in the digital environment, is inseparable from the development of markup languages. Special attention is dedicated to the development of the Text Encoding Initiative (TEI), the key organization behind that standard, from both organizational and markup perspectives. To date, the TEI standard has been published in five versions, and during the 2000s SGML was replaced by the XML markup language. Key words: markup languages, digital humanities, text encoding, TEI, SGML, XML

  5. TumorML: Concept and requirements of an in silico cancer modelling markup language.

    Science.gov (United States)

    Johnson, David; Cooper, Jonathan; McKeever, Steve

    2011-01-01

    This paper describes the initial groundwork carried out as part of the European Commission funded Transatlantic Tumor Model Repositories project, to develop a new markup language for computational cancer modelling, TumorML. In this paper we describe the motivations for such a language, arguing that current state-of-the-art biomodelling languages are not suited to the cancer modelling domain. We go on to describe the work that needs to be done to develop TumorML, the conceptual design, and a description of what existing markup languages will be used to compose the language specification.

  6. PENDEKATAN MODEL MATEMATIS UNTUK MENENTUKAN PERSENTASE MARKUP HARGA JUAL PRODUK

    Directory of Open Access Journals (Sweden)

    Oviliani Yenty Yuliana

    2002-01-01

    The purpose of this research is to design a mathematical model that can determine the sales volume, as an alternative means of setting the markup percentage on a product's selling price. The mathematical model was designed with multiple regression statistics. Sales volume is a function of markup, market condition, and substitute condition variables. The designed mathematical model has passed tests of: assumptions on the error term, model accuracy, model validation, and multicollinearity. The mathematical model has been applied in an application program, with the expectation that the program can give: (1) an alternative for the user in deciding the markup percentage, (2) an estimate of the gross profit that will be achieved for a selected markup percentage, (3) an estimate of the percentage of units sold that will be achieved for a selected markup percentage, and (4) an estimate of the total net income before tax for a specific period. Key words: mathematical model, application program, sales volume, markup, gross profit.

  7. The Accelerator Markup Language and the Universal Accelerator Parser

    International Nuclear Information System (INIS)

    Sagan, D.; Forster, M.; Cornell U., LNS; Bates, D.A.; LBL, Berkeley; Wolski, A.; Liverpool U.; Cockcroft Inst. Accel. Sci. Tech.; Schmidt, F.; CERN; Walker, N.J.; DESY; Larrieu, T.; Roblin, Y.; Jefferson Lab; Pelaia, T.; Oak Ridge; Tenenbaum, P.; Woodley, M.; SLAC; Reiche, S.; UCLA

    2006-01-01

    A major obstacle to collaboration on accelerator projects has been the sharing of lattice description files between modeling codes. To address this problem, a lattice description format called Accelerator Markup Language (AML) has been created. AML is based upon the standard eXtensible Markup Language (XML) format; this provides the flexibility for AML to be easily extended to satisfy changing requirements. In conjunction with AML, a software library, called the Universal Accelerator Parser (UAP), is being developed to speed the integration of AML into any program. The UAP is structured to make it relatively straightforward (by giving appropriate specifications) to read and write lattice files in any format. This will allow programs that use the UAP code to read a variety of different file formats. Additionally, this will greatly simplify conversion of files from one format to another. Currently, besides AML, the UAP supports the MAD lattice format

  8. Modularization and Structured Markup for Learning Content in an Academic Environment

    Science.gov (United States)

    Schluep, Samuel; Bettoni, Marco; Schar, Sissel Guttormsen

    2006-01-01

    This article aims to present a flexible component model for modular, web-based learning content, and a simple structured markup schema for the separation of content and presentation. The article will also contain an overview of the dynamic Learning Content Management System (dLCMS) project, which implements these concepts. Content authors are a…

  9. Answer Markup Algorithms for Southeast Asian Languages.

    Science.gov (United States)

    Henry, George M.

    1991-01-01

    Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…
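A simple edit distance algorithm of the kind mentioned above is the classic dynamic-programming Levenshtein distance; this is a generic sketch of that algorithm, not Hart's actual software:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, and substitutions turning a into b.
    Uses a rolling row of the DP table to keep memory at O(len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion from a
                            curr[j - 1] + 1,      # insertion into a
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[len(b)]

d = edit_distance("kitten", "sitting")
```

Aligning a learner's answer against the expected answer this way identifies the minimal set of changes to mark up, independent of how the script lays out its glyphs.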

  10. Variation in markup of general surgical procedures by hospital market concentration.

    Science.gov (United States)

    Cerullo, Marcelo; Chen, Sophia Y; Dillhoff, Mary; Schmidt, Carl R; Canner, Joseph K; Pawlik, Timothy M

    2018-04-01

    Increasing hospital market concentration (with concomitantly decreasing hospital market competition) may be associated with rising hospital prices. Hospital markup, the relative increase in price over costs, has been associated with greater hospital market concentration. Patients undergoing a cardiothoracic or gastrointestinal procedure in the 2008-2011 Nationwide Inpatient Sample (NIS) were identified and linked to Hospital Market Structure Files. The association between market concentration, hospital markup, and hospital for-profit status was assessed using mixed-effects log-linear models. A weighted total of 1,181,936 patients were identified. In highly concentrated markets, private for-profit status was associated with an 80.8% higher markup compared to public/private not-for-profit status (95% CI: +69.5% to +96.9%); in unconcentrated markets, private for-profit status was likewise associated with a higher markup compared to public/private not-for-profit status (95% CI: +45.4% to +81.1%). Government and private not-for-profit hospitals employed lower markups in more concentrated markets, whereas private for-profit hospitals employed higher markups in more concentrated markets. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Interexaminer variation of minutia markup on latent fingerprints.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2016-07-01

    Latent print examiners often differ in the number of minutiae they mark during analysis of a latent, and also during comparison of a latent with an exemplar. Differences in minutia counts understate interexaminer variability: examiners' markups may have similar minutia counts but differ greatly in which specific minutiae were marked. We assessed variability in minutia markup among 170 volunteer latent print examiners. Each provided detailed markup documenting their examinations of 22 latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. An average of 12 examiners marked each latent. The primary factors associated with minutia reproducibility were clarity, which regions of the prints examiners chose to mark, and agreement on value or comparison determinations. In clear areas (where the examiner was "certain of the location, presence, and absence of all minutiae"), median reproducibility was 82%; in unclear areas, median reproducibility was 46%. Differing interpretations regarding which regions should be marked (e.g., when there is ambiguity in the continuity of a print) contributed to variability in minutia markup: especially in unclear areas, marked minutiae were often far from the nearest minutia marked by a majority of examiners. Low reproducibility was also associated with differences in value or comparison determinations. Lack of standardization in minutia markup and unfamiliarity with test procedures presumably contribute to the variability we observed. We have identified factors accounting for interexaminer variability; implementing standards for detailed markup as part of documentation and focusing future training efforts on these factors may help to facilitate transparency and reduce subjectivity in the examination process. Published by Elsevier Ireland Ltd.
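The reproducibility statistic discussed above — how consistently a given minutia is marked across examiners — can be sketched as the fraction of examiners who marked each minutia, with a median taken across minutiae. The counts below are hypothetical and the data layout is an assumption, not the study's actual format:

```python
from statistics import median

def reproducibility(marks_per_minutia: dict, n_examiners: int) -> dict:
    """For each minutia, the fraction of examiners who marked it."""
    return {m: k / n_examiners for m, k in marks_per_minutia.items()}

# Hypothetical counts: 12 examiners analyzed one latent
counts = {"m1": 12, "m2": 10, "m3": 6, "m4": 2}
repro = reproducibility(counts, 12)
print(round(median(repro.values()), 2))  # 0.67
```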

  12. Instrument Remote Control via the Astronomical Instrument Markup Language

    Science.gov (United States)

    Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard

    1998-01-01

    The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture that combines the platform-independent processing capabilities of Java with the power of Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent, as well as human-readable, manner has been developed. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (API) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control apply to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.

  13. SGML-Based Markup for Literary Texts: Two Problems and Some Solutions.

    Science.gov (United States)

    Barnard, David; And Others

    1988-01-01

    Identifies the Standard Generalized Markup Language (SGML) as the best basis for a markup standard for encoding literary texts. Outlines solutions to problems using SGML and discusses the problem of maintaining multiple views of a document. Examines several ways of reducing the burden of markups. (GEA)

  14. An Introduction to the Extensible Markup Language (XML).

    Science.gov (United States)

    Bryan, Martin

    1998-01-01

    Describes Extensible Markup Language (XML), a subset of the Standard Generalized Markup Language (SGML) that is designed to make it easy to interchange structured documents over the Internet. Topics include Document Type Definition (DTD), components of XML, the use of XML, text and non-text elements, and uses for XML-coded files. (LRW)

  15. The Long-Run Relationship Between Inflation and the Markup in the U.S.

    OpenAIRE

    Sandeep Mazumder

    2011-01-01

    This paper examines the long-run relationship between inflation and a new measure of the price-marginal cost markup. This new markup index is derived while accounting for labor adjustment costs, which a large number of the papers that estimate the markup have ignored. We then examine the long-run relationship between this markup measure, which is estimated using U.S. manufacturing data, and inflation. We find that decreases in the markup that are associated with a percentage point increase in...

  16. Markup heterogeneity, export status and the establishment of the euro

    OpenAIRE

    Guillou, Sarah; Nesta, Lionel

    2015-01-01

    We investigate the effects of the establishment of the euro on the markups of French manufacturing firms. Merging firm-level census data with customs data, we estimate time-varying firm-specific markups and distinguish between eurozone exporters from other firms between 1995 and 2007. We find that the establishment of the euro has had a pronounced pro-competitive impact by reducing firm markups by 14 percentage points. By reducing export costs, the euro represented an opp...

  17. Markup cyclicality, employment adjustment, and financial constraints

    OpenAIRE

    Askildsen, Jan Erik; Nilsen, Øivind Anti

    2001-01-01

    We investigate the existence of markups and their cyclical behaviour. Markup is not directly observed. Instead, it is given as a price-cost relation that is estimated from a dynamic model of the firm. The model incorporates potential costly employment adjustments and takes into consideration that firms may be financially constrained. When considering size of the future labour stock, financially constrained firms may behave as if they have a higher discount factor, which may affect the realise...

  18. Astronomical Instrumentation System Markup Language

    Science.gov (United States)

    Goldbaum, Jesse M.

    2016-05-01

    The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed, followed by the reasons why XML was chosen as the format. Next it is shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments as well as one for a sample AIS are provided. The files demonstrate how AISML can be utilized for various tasks from web page generation and programming interface to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.

  19. Question Answering System Based on the Artificial Intelligence Markup Language as an Information Medium

    Directory of Open Access Journals (Sweden)

    Fajrin Azwary

    2016-04-01

    Full Text Available Artificial intelligence technology can nowadays be deployed in a variety of forms, such as chatbots, and with a variety of methods, one of which is the Artificial Intelligence Markup Language (AIML). AIML uses template matching, comparing user input against specific patterns in its database. The AIML template design process begins with determining the necessary information, which is then formed into questions; these questions are adapted to the AIML pattern format. The results of the study show that a Question-Answering System in the form of a chatbot using the Artificial Intelligence Markup Language is able to communicate and deliver information. Keywords: Artificial Intelligence, Template Matching, Artificial Intelligence Markup Language, AIML
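The template-matching idea behind AIML can be illustrated with a minimal sketch: user input is normalized, compared against stored patterns (where `*` is a wildcard), and the matching category's template is returned. This is a toy matcher, not a real AIML interpreter, and the categories are invented for illustration:

```python
import re

def compile_pattern(pattern: str):
    """Turn an AIML-style pattern into a regex: '*' captures one or more words."""
    parts = [r"(.+)" if w == "*" else re.escape(w) for w in pattern.upper().split()]
    return re.compile(r"^" + r"\s+".join(parts) + r"$")

def respond(user_input: str, categories: dict) -> str:
    """Return the template of the first pattern matching the input."""
    text = user_input.upper().strip()
    for pattern, template in categories.items():
        m = compile_pattern(pattern).match(text)
        if m:
            # Substitute the wildcard capture, mimicking AIML's <star/>
            return template.replace("<star/>", m.group(1).capitalize()) if m.groups() else template
    return "I do not understand."

categories = {
    "HELLO": "Hi there!",
    "MY NAME IS *": "Nice to meet you, <star/>!",
}
print(respond("my name is fajrin", categories))  # Nice to meet you, Fajrin!
```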

  20. Semantic Web Services with Web Ontology Language (OWL-S) - Specification of Agent-Services for DARPA Agent Markup Language (DAML)

    National Research Council Canada - National Science Library

    Sycara, Katia P

    2006-01-01

    CMU did research and development on semantic web services using OWL-S, the semantic web service language under the Defense Advanced Research Projects Agency- DARPA Agent Markup Language (DARPA-DAML) program...

  1. Are the determinants of markup size industry-specific? The case of Slovenian manufacturing firms

    Directory of Open Access Journals (Sweden)

    Ponikvar Nina

    2011-01-01

    Full Text Available The aim of this paper is to identify factors that affect the pricing policy in Slovenian manufacturing firms in terms of the markup size and, most of all, to explicitly account for the possibility of differences in pricing procedures among manufacturing industries. Accordingly, the analysis of the dynamic panel is carried out on an industry-by-industry basis, allowing the coefficients on the markup determinants to vary across industries. We find that the oligopoly theory of markup determination for the most part holds for the manufacturing sector as a whole, although large variability in markup determinants exists across industries within the Slovenian manufacturing. Our main conclusion is that each industry should be investigated separately in detail in order to assess the precise role of markup factors in the markup-determination process.

  2. The Commercial Office Market and the Markup for Full Service Leases

    OpenAIRE

    Jonathan A. Wiley; Yu Liu; Dongshin Kim; Tom Springer

    2014-01-01

    Because landlords assume all of the operating expense risk, rents for gross leases exceed those for net leases. The markup, or spread, for gross leases varies between properties and across markets. Specifically, the markup is expected to increase with the cost of real estate services at the property, and to be influenced by market conditions. A matching procedure is applied to measure the services markup as the percentage difference between the actual rent on a gross lease relative to the act...

  3. Trade reforms, mark-ups and bargaining power of workers: the case ...

    African Journals Online (AJOL)

    Ethiopian Journal of Economics ... workers between 1996 and 2007, a model of mark-up with labor bargaining power was estimated using random effects and LDPDM. ... Keywords: Trade reform, mark-up, bargaining power, rent, trade unions ...

  4. Balancing medicine prices and business sustainability: analyses of pharmacy costs, revenues and profit shed light on retail medicine mark-ups in rural Kyrgyzstan.

    Science.gov (United States)

    Waning, Brenda; Maddix, Jason; Soucy, Lyne

    2010-07-13

    Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals. 
Health systems researchers must document the positive and negative

  5. Improving Interoperability by Incorporating UnitsML Into Markup Languages.

    Science.gov (United States)

    Celebi, Ismet; Dragoset, Robert A; Olsen, Karen J; Schaefer, Reinhold; Kramer, Gary W

    2010-01-01

    Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this "scientific meta-data" and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language or AnIML-a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-useable tags. Recording the units associated with the analytical data and metadata is an essential issue for any data representation scheme that must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units that we call UnitsML. This presentation will describe how UnitsML is used and how it is being incorporated into AnIML.
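The core idea — attaching explicit unit markup to a measured value so it survives interchange — can be sketched with Python's standard XML tooling. The element and attribute names below are simplified illustrations, not the actual UnitsML or AnIML schemas:

```python
import xml.etree.ElementTree as ET

# Illustrative only: tag and attribute names are invented,
# not drawn from the published UnitsML/AnIML schemas.
measurement = ET.Element("Measurement")
value = ET.SubElement(measurement, "Value")
value.text = "3.27"
unit = ET.SubElement(measurement, "Unit", symbol="nm")
unit.text = "nanometer"

xml_text = ET.tostring(measurement, encoding="unicode")
print(xml_text)
```

Because the unit travels inside the same document as the value, a consumer in another data domain can recover both without consulting a separate notebook or file.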

  6. Planned growth as a determinant of the markup: the case of Slovenian manufacturing

    Directory of Open Access Journals (Sweden)

    Maks Tajnikar

    2009-11-01

    Full Text Available The paper follows the idea of heterodox economists that a cost-plus price is above all a reproductive price and growth price. The authors apply a firm-level model of markup determination which, in line with theory and empirical evidence, contains proposed firm-specific determinants of the markup, including the firm’s planned growth. The positive firm-level relationship between growth and markup that is found in data for Slovenian manufacturing firms implies that retained profits gathered via the markup are an important source of growth financing and that the investment decisions of Slovenian manufacturing firms affect their pricing policy and decisions on the markup size as proposed by Post-Keynesian theory. The authors thus conclude that at least a partial trade-off between a firm’s growth and competitive outcome exists in Slovenian manufacturing.

  7. The Price-Marginal Cost Markup and its Determinants in U.S. Manufacturing

    OpenAIRE

    Mazumder, Sandeep

    2009-01-01

    This paper estimates the price-marginal cost markup for US manufacturing using a new methodology. Most existing techniques of estimating the markup are a variant on Hall's (1988) framework involving the manipulation of the Solow Residual. However this paper argues that this notion is based on the unreasonable assumption that labor can be costlessly adjusted at a fixed wage rate. By relaxing this assumption, we are able to derive a generalized markup index, which when estimated using manufactu...

  8. An object-oriented approach for harmonization of multimedia markup languages

    Science.gov (United States)

    Chen, Yih-Feng; Kuo, May-Chen; Sun, Xiaoming; Kuo, C.-C. Jay

    2003-12-01

    An object-oriented methodology is proposed to harmonize several different markup languages in this research. First, we adopt the Unified Modelling Language (UML) as the data model to formalize the concept and the process of the harmonization process between the eXtensible Markup Language (XML) applications. Then, we design the Harmonization eXtensible Markup Language (HXML) based on the data model and formalize the transformation between the Document Type Definitions (DTDs) of the original XML applications and HXML. The transformation between instances is also discussed. We use the harmonization of SMIL and X3D as an example to demonstrate the proposed methodology. This methodology can be generalized to various application domains.

  9. Data on the interexaminer variation of minutia markup on latent fingerprints.

    Science.gov (United States)

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2016-09-01

    The data in this article supports the research paper entitled "Interexaminer variation of minutia markup on latent fingerprints" [1]. The data in this article describes the variability in minutia markup during both analysis of the latents and comparison between latents and exemplars. The data was collected in the "White Box Latent Print Examiner Study," in which each of 170 volunteer latent print examiners provided detailed markup documenting their examinations of latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. Each examiner examined 22 latent-exemplar pairs; an average of 12 examiners marked each latent.

  10. A quality assessment tool for markup-based clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a tool for quality assessment of procedural and declarative knowledge. We developed this tool for evaluating the specification of mark-up-based clinical GLs. Using this graphical tool, the expert physician and knowledge engineer collaborate to score, on a pre-defined scoring scale, each of the knowledge roles of the mark-ups against a gold standard. The tool enables different users at different sites to score the mark-ups simultaneously.

  11. Balancing medicine prices and business sustainability: analyses of pharmacy costs, revenues and profit shed light on retail medicine mark-ups in rural Kyrgyzstan

    Directory of Open Access Journals (Sweden)

    Maddix Jason

    2010-07-01

    Full Text Available Abstract Background Numerous not-for-profit pharmacies have been created to improve access to medicines for the poor, but many have failed due to insufficient financial planning and management. These pharmacies are not well described in health services literature despite strong demand from policy makers, implementers, and researchers. Surveys reporting unaffordable medicine prices and high mark-ups have spurred efforts to reduce medicine prices, but price reduction goals are arbitrary in the absence of information on pharmacy costs, revenues, and profit structures. Health services research is needed to develop sustainable and "reasonable" medicine price goals and strategic initiatives to reach them. Methods We utilized cost accounting methods on inventory and financial information obtained from a not-for-profit rural pharmacy network in mountainous Kyrgyzstan to quantify costs, revenues, profits and medicine mark-ups during establishment and maintenance periods (October 2004-December 2007). Results Twelve pharmacies and one warehouse were established in remote Kyrgyzstan with 100%, respectively. Annual mark-ups increased dramatically each year to cover increasing recurrent costs, and by 2007, only 19% and 46% of products revealed mark-ups of 100%. 2007 medicine mark-ups varied substantially across these products, ranging from 32% to 244%. Mark-ups needed to sustain private pharmacies would be even higher in the absence of government subsidies. Conclusion Pharmacy networks can be established in hard-to-reach regions with little funding using public-private partnership, resource-sharing models. Medicine prices and mark-ups must be interpreted with consideration for regional costs of business. Mark-ups vary dramatically across medicines. Some mark-ups appear "excessive" but are likely necessary for pharmacy viability. Pharmacy financial data is available in remote settings and can be used towards determination of "reasonable" medicine price goals

  12. XML schemas and mark-up practices of taxonomic literature.

    Science.gov (United States)

    Penev, Lyubomir; Lyal, Christopher Hc; Weitzman, Anna; Morse, David R; King, David; Sautter, Guido; Georgiev, Teodor; Morris, Robert A; Catapano, Terry; Agosti, Donat

    2011-01-01

    We review the three most widely used XML schemas used to mark-up taxonomic texts, TaxonX, TaxPub and taXMLit. These are described from the viewpoint of their development history, current status, implementation, and use cases. The concept of "taxon treatment" from the viewpoint of taxonomy mark-up into XML is discussed. TaxonX and taXMLit are primarily designed for legacy literature, the former being more lightweight and with a focus on recovery of taxon treatments, the latter providing a much more detailed set of tags to facilitate data extraction and analysis. TaxPub is an extension of the National Library of Medicine Document Type Definition (NLM DTD) for taxonomy focussed on layout and recovery and, as such, is best suited for mark-up of new publications and their archiving in PubMedCentral. All three schemas have their advantages and shortcomings and can be used for different purposes.

  13. Semantic Markup for Literary Scholars: How Descriptive Markup Affects the Study and Teaching of Literature.

    Science.gov (United States)

    Campbell, D. Grant

    2002-01-01

    Describes a qualitative study which investigated the attitudes of literary scholars towards the features of semantic markup for primary texts in XML format. Suggests that layout is a vital part of the reading process which implies that the standardization of DTDs (Document Type Definitions) should extend to styling as well. (Author/LRW)

  14. Development of clinical contents model markup language for electronic health records.

    Science.gov (United States)

    Yun, Ji-Hyun; Ahn, Sun-Ju; Kim, Yoon

    2012-09-01

    To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Based on an analysis of the structure and characteristics of CCM in the clinical domain, we manually designed an extensible markup language (XML) based CCM markup language (CCML) schema. CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems.
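The "does not require a dedicated parser" property claimed above follows from CCML being plain XML: any general-purpose XML library can read it. A minimal sketch, using a hypothetical CCML-like fragment (the tag and attribute names are invented, not the published CCML schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical CCML-like fragment for illustration only
doc = """
<ccm name="BloodPressure">
  <element name="systolic" type="quantity" unit="mmHg"/>
  <element name="diastolic" type="quantity" unit="mmHg"/>
</ccm>
"""

root = ET.fromstring(doc)
names = [e.get("name") for e in root.findall("element")]
print(root.get("name"), names)  # BloodPressure ['systolic', 'diastolic']
```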

  15. A Mathematical Model Approach to Determining the Markup Percentage of a Product's Selling Price

    OpenAIRE

    Oviliani Yenty Yuliana; Yohan Wahyudi; Siana Halim

    2002-01-01

    The purpose of this research is to design mathematical models that can determine the selling volume as an alternative way to improve the markup percentage. The mathematical models were designed with multiple regression statistics. Selling volume is a function of markup, market condition, and substitute condition variables. The designed mathematical model passed tests for: error assumptions, model accuracy, model validity, and multicollinearity. The mathematical model has applied i...

  16. Field Data and the Gas Hydrate Markup Language

    Directory of Open Access Journals (Sweden)

    Ralf Löwner

    2007-06-01

    Full Text Available Data and information exchange are crucial for any kind of scientific research activity and are becoming more and more important. The comparison between different data sets and different disciplines creates new data, adds value, and finally accumulates knowledge. The distribution and accessibility of research results is also an important factor for international work. The gas hydrate research community is dispersed across the globe and therefore a common technical communication language or format is strongly demanded. The CODATA Gas Hydrate Data Task Group is creating the Gas Hydrate Markup Language (GHML), a standard based on the Extensible Markup Language (XML), to enable the transport, modeling, and storage of all manner of objects related to gas hydrate research. GHML initially offers easily deducible content because of the text-based encoding of information, which does not use binary data. The result of these investigations is a custom-designed application schema, which describes the features, elements, and their properties, defining all aspects of gas hydrates. One of the components of GHML is the "Field Data" module, which is used for all data and information coming from the field. It considers international standards, particularly the standards defined by the W3C (World Wide Web Consortium) and the OGC (Open Geospatial Consortium). Various related standards were analyzed and compared with our requirements, in particular the Geography Markup Language (GML, ISO 19136) and the whole ISO 19000 series. However, the requirements demanded a quick solution and an XML application schema readable by any scientist without a background in information technology. Therefore, ideas, concepts and definitions have been used to build up the modules of GHML without importing any of these markup languages. This enables a comprehensive schema and simple use.

  17. STMML. A markup language for scientific, technical and medical publishing

    Directory of Open Access Journals (Sweden)

    Peter Murray-Rust

    2006-01-01

    Full Text Available STMML is an XML-based markup language covering many generic aspects of scientific information. It has been developed as a re-usable core for more specific markup languages. It supports data structures, data types, metadata, scientific units and some basic components of scientific narrative. The central means of adding semantic information is through dictionaries. The specification is through an XML Schema which can be used to validate STMML documents or fragments. Many examples of the language are given.

  18. Monopoly, Pareto and Ramsey mark-ups

    NARCIS (Netherlands)

    Ten Raa, T.

    2009-01-01

    Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear

  19. Biological Dynamics Markup Language (BDML): an open format for representing quantitative biological dynamics data.

    Science.gov (United States)

    Kyoda, Koji; Tohsato, Yukako; Ho, Kenneth H L; Onami, Shuichi

    2015-04-01

    Recent progress in live-cell imaging and modeling techniques has resulted in generation of a large amount of quantitative data (from experimental measurements and computer simulations) on spatiotemporal dynamics of biological objects such as molecules, cells and organisms. Although many research groups have independently dedicated their efforts to developing software tools for visualizing and analyzing these data, these tools are often not compatible with each other because of different data formats. We developed an open unified format, Biological Dynamics Markup Language (BDML; current version: 0.2), which provides a basic framework for representing quantitative biological dynamics data for objects ranging from molecules to cells to organisms. BDML is based on Extensible Markup Language (XML). Its advantages are machine and human readability and extensibility. BDML will improve the efficiency of development and evaluation of software tools for data visualization and analysis. A specification and a schema file for BDML are freely available online at http://ssbd.qbic.riken.jp/bdml/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  20. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.

  1. SuML: A Survey Markup Language for Generalized Survey Encoding

    Science.gov (United States)

    Barclay, MW; Lober, WB; Karras, BT

    2002-01-01

    There is a need in clinical and research settings for a sophisticated, generalized, web based survey tool that supports complex logic, separation of content and presentation, and computable guidelines. There are many commercial and open source survey packages available that provide simple logic; few provide sophistication beyond “goto” statements; none support the use of guidelines. These tools are driven by databases, static web pages, and structured documents using markup languages such as eXtensible Markup Language (XML). We propose a generalized, guideline aware language and an implementation architecture using open source standards.

  2. Extreme Markup: The Fifty US Hospitals With The Highest Charge-To-Cost Ratios.

    Science.gov (United States)

    Bai, Ge; Anderson, Gerard F

    2015-06-01

    Using Medicare cost reports, we examined the fifty US hospitals with the highest charge-to-cost ratios in 2012. These hospitals charge approximately ten times their Medicare-allowable costs, compared with a national average charge-to-cost ratio of 3.4 and a mode of 2.4. Analysis of the fifty hospitals showed that forty-nine are for profit (98 percent), forty-six are owned by for-profit hospital systems (92 percent), and twenty (40 percent) operate in Florida. One for-profit hospital system owns half of these fifty hospitals. While most public and private health insurers do not use hospital charges to set their payment rates, uninsured patients are commonly asked to pay the full charges, and out-of-network patients and casualty and workers' compensation insurers are often expected to pay a large portion of the full charges. Because it is difficult for patients to compare prices, market forces fail to constrain hospital charges. Federal and state governments may want to consider limitations on the charge-to-cost ratio, some form of all-payer rate setting, or mandated price disclosure to regulate hospital markups. Project HOPE—The People-to-People Health Foundation, Inc.

  3. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.

  4. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions

    Directory of Open Access Journals (Sweden)

    Hucka Michael

    2015-06-01

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.

  5. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 2 Core.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2018-03-09

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 2 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. The design of Version 2 differs from Version 1 principally in allowing new MathML constructs, making more child elements optional, and adding identifiers to all SBML elements instead of only selected elements. Other materials and software are available from the SBML project website at http://sbml.org/.

  6. Non-Stationary Inflation and the Markup: an Overview of the Research and some Implications for Policy

    OpenAIRE

    Bill Russell

    2006-01-01

    This paper reports on research into the negative relationship between inflation and the markup. It is argued that this relationship can be thought of as ‘long-run’ in nature, which suggests that inflation has a persistent effect on the markup and, therefore, the real wage. A ‘rule of thumb’ from the estimates indicates that a 10 percentage point increase in inflation (as occurred worldwide in the 1970s) is associated with around a 7 per cent fall in the markup accompanied by a similar increase ...

  7. Wanda ML - a markup language for digital annotation

    NARCIS (Netherlands)

    Franke, K.Y.; Guyon, I.; Schomaker, L.R.B.; Vuurpijl, L.G.

    2004-01-01

    WANDAML is an XML-based markup language for the annotation and filter journaling of digital documents. It addresses in particular the needs of forensic handwriting data examination, by allowing experts to enter information about writer, material (pen, paper), script and content, and to record chains

  8. The WANDAML Markup Language for Digital Document Annotation

    NARCIS (Netherlands)

    Franke, K.; Guyon, I.; Schomaker, L.; Vuurpijl, L.

    2004-01-01

    WANDAML is an XML-based markup language for the annotation and filter journaling of digital documents. It addresses in particular the needs of forensic handwriting data examination, by allowing experts to enter information about writer, material (pen, paper), script and content, and to record chains

  9. Genomic Sequence Variation Markup Language (GSVML).

    Science.gov (United States)

    Nakaya, Jun; Kimura, Michio; Hiroi, Kaei; Ido, Keisuke; Yang, Woosung; Tanaka, Hiroshi

    2010-02-01

    With the aim of making good use of internationally accumulated genomic sequence variation data, which is increasing rapidly due to the explosive amount of genomic research at present, the development of an interoperable data exchange format and its international standardization are necessary. Genomic Sequence Variation Markup Language (GSVML) will focus on genomic sequence variation data and human health applications, such as gene based medicine or pharmacogenomics. We developed GSVML through eight steps, based on case analysis and domain investigations. By focusing on the design scope to human health applications and genomic sequence variation, we attempted to eliminate ambiguity and to ensure practicability. We intended to satisfy the requirements derived from the use case analysis of human-based clinical genomic applications. Based on database investigations, we attempted to minimize the redundancy of the data format, while maximizing the data covering range. We also attempted to ensure communication and interface ability with other Markup Languages, for exchange of omics data among various omics researchers or facilities. The interface ability with developing clinical standards, such as the Health Level Seven Genotype Information model, was analyzed. We developed the human health-oriented GSVML comprising variation data, direct annotation, and indirect annotation categories; the variation data category is required, while the direct and indirect annotation categories are optional. The annotation categories contain omics and clinical information, and have internal relationships. For designing, we examined 6 cases for three criteria as human health application and 15 data elements for three criteria as data formats for genomic sequence variation data exchange. The data format of five international SNP databases and six Markup Languages and the interface ability to the Health Level Seven Genotype Model in terms of 317 items were investigated. 
GSVML was developed as

  10. Automation and integration of components for generalized semantic markup of electronic medical texts.

    Science.gov (United States)

    Dugan, J M; Berrios, D C; Liu, X; Kim, D K; Kaizer, H; Fagan, L M

    1999-01-01

    Our group has built an information retrieval system based on a complex semantic markup of medical textbooks. We describe the construction of a set of web-based knowledge-acquisition tools that expedites the collection and maintenance of the concepts required for text markup and the search interface required for information retrieval from the marked text. In the text markup system, domain experts (DEs) identify sections of text that contain one or more elements from a finite set of concepts. End users can then query the text using a predefined set of questions, each of which identifies a subset of complementary concepts. The search process matches that subset of concepts to relevant points in the text. The current process requires that the DE invest significant time to generate the required concepts and questions. We propose a new system--called ACQUIRE (Acquisition of Concepts and Queries in an Integrated Retrieval Environment)--that assists a DE in two essential tasks in the text-markup process. First, it helps her to develop, edit, and maintain the concept model: the set of concepts with which she marks the text. Second, ACQUIRE helps her to develop a query model: the set of specific questions that end users can later use to search the marked text. The DE incorporates concepts from the concept model when she creates the questions in the query model. The major benefit of the ACQUIRE system is a reduction in the time and effort required for the text-markup process. We compared the process of concept- and query-model creation using ACQUIRE to the process used in previous work by rebuilding two existing models that we previously constructed manually. We observed a significant decrease in the time required to build and maintain the concept and query models.
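The retrieval logic described above (sections of text marked with concepts, predefined questions matched to complementary concept subsets) can be sketched in a few lines of Python. All names and data here are invented for illustration, not taken from MYCIN II or ACQUIRE.

```python
# Sketch of concept-markup retrieval: each text section carries a set of
# concept tags, and a question is a set of concepts to match. Section
# ids, text, and concepts are invented for illustration.
sections = {
    "ch3-s1": {"text": "Empiric therapy for meningitis ...",
               "concepts": {"meningitis", "therapy"}},
    "ch3-s2": {"text": "Diagnosis of meningitis ...",
               "concepts": {"meningitis", "diagnosis"}},
}

def answer(question_concepts, sections):
    """Return ids of sections whose markup covers every queried concept."""
    return sorted(sid for sid, s in sections.items()
                  if question_concepts <= s["concepts"])

print(answer({"meningitis", "therapy"}, sections))  # ['ch3-s1']
```

The domain expert's two artifacts map directly onto this sketch: the concept model supplies the vocabulary of tags, and the query model supplies the concept subsets behind each predefined question.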

  11. Monopoly, Pareto and Ramsey mark-ups

    OpenAIRE

    Ten Raa, T.

    2009-01-01

    Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear demand or, even within the constant elasticity framework, dependence is introduced. The analysis provides a single Generalized Inverse Elasticity Rule for the problems of monopoly, Pareto and Ramsey.

  12. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 2 Core

    Directory of Open Access Journals (Sweden)

    Hucka Michael

    2018-03-01

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 2 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. The design of Version 2 differs from Version 1 principally in allowing new MathML constructs, making more child elements optional, and adding identifiers to all SBML elements instead of only selected elements. Other materials and software are available from the SBML project website at http://sbml.org/.

  13. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core

    Directory of Open Access Journals (Sweden)

    Hucka Michael

    2018-04-01

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Release 2 of Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. No design changes have been made to the description of models between Release 1 and Release 2; changes are restricted to the format of annotations, the correction of errata and the addition of clarifications. Other materials and software are available from the SBML project website at http://sbml.org/.

  14. Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.

    Science.gov (United States)

    Sautter, Guido; Böhm, Klemens; Agosti, Donat

    2007-01-01

    Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where this process has just started. Digitized biosystematics literature requires a very precise and fine grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on sentence level and below is cumbersome and time consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built in order to support the user in this process as far as possible: Its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these numbers are even higher.

  15. Development of the atomic and molecular data markup language for internet data exchange

    International Nuclear Information System (INIS)

    Ralchenko, Yuri; Clark, Robert E.H.; Humbert, Denis; Schultz, David R.; Kato, Takako; Rhee, Yong Joo

    2006-01-01

    Accelerated development of the Internet technologies, including those relevant to the atomic and molecular physics, poses new requirements for the proper communication between computers, users and applications. To this end, a new standard for atomic and molecular data exchange that would reflect the recent achievements in this field becomes a necessity. We report here on development of the Atomic and Molecular Data Markup Language (AMDML) that is based on eXtensible Markup Language (XML). The present version of the AMDML Schema covers atomic spectroscopic data as well as the electron-impact collisions. (author)

  16. Semantic markup of nouns and adjectives for the Electronic corpus of texts in Tuvan language

    Directory of Open Access Journals (Sweden)

    Bajlak Ch. Oorzhak

    2016-12-01

    The article examines the progress of semantic markup of the Electronic corpus of texts in Tuvan language (ECTTL), which is another stage of adding Tuvan texts to the database and marking up the corpus. ECTTL is a collaborative project by researchers from Tuvan State University (Research and Education Center of Turkic Studies and Department of Information Technologies). The semantic markup of Tuvan lexis will serve as a search and reference system that helps users find text snippets containing words with desired meanings in ECTTL. The first stage of this process is setting up databases of basic lexemes of Tuvan language. All meaningful lexemes were classified into the following semantic groups: humans, animals, objects, natural objects and phenomena, and abstract concepts. All Tuvan object nouns, as well as both descriptive and relative adjectives, were assigned to one of these lexico-semantic classes. Each class, sub-class and descriptor is tagged in Tuvan, Russian and English; these tags, in turn, will help automate searching. The databases of meaningful lexemes of Tuvan language will also outline their lexical combinations. The automated system will contain information on semantic combinations of adjectives with nouns, adverbs with verbs, and nouns with verbs, as well as on combinations which are semantically incompatible.

  17. Intended and unintended consequences of China's zero markup drug policy.

    Science.gov (United States)

    Yi, Hongmei; Miller, Grant; Zhang, Linxiu; Li, Shaoping; Rozelle, Scott

    2015-08-01

    Since economic liberalization in the late 1970s, China's health care providers have grown heavily reliant on revenue from drugs, which they both prescribe and sell. To curb abuse and to promote the availability, safety, and appropriate use of essential drugs, China introduced its national essential drug list in 2009 and implemented a zero markup policy designed to decouple provider compensation from drug prescription and sales. We collected and analyzed representative data from China's township health centers and their catchment-area populations both before and after the reform. We found large reductions in drug revenue, as intended by policy makers. However, we also found a doubling of inpatient care that appeared to be driven by supply, instead of demand. Thus, the reform had an important unintended consequence: China's health care providers have sought new, potentially inappropriate, forms of revenue. Project HOPE—The People-to-People Health Foundation, Inc.

  18. Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease

    Science.gov (United States)

    Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.

    1998-01-01

    The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this mark-up process is time consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.

  19. The Petri Net Markup Language : concepts, technology, and tools

    NARCIS (Netherlands)

    Billington, J.; Christensen, S.; Hee, van K.M.; Kindler, E.; Kummer, O.; Petrucci, L.; Post, R.D.J.; Stehno, C.; Weber, M.; Aalst, van der W.M.P.; Best, E.

    2003-01-01

    The Petri Net Markup Language (PNML) is an XML-based interchange format for Petri nets. In order to support different versions of Petri nets and, in particular, future versions of Petri nets, PNML allows the definition of Petri net types. Due to this flexibility, PNML is a starting point for a

  20. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    Science.gov (United States)

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
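A minimal sketch of the serialization step described above, turning a workstation measurement into a small XML annotation. The element names here are simplified stand-ins, not the official AIM schema.

```python
import xml.etree.ElementTree as ET

# Hedged sketch: serialize one workstation measurement as a small XML
# annotation. Element names are invented stand-ins, not the AIM schema.
def annotation_xml(lesion_id, value, unit, label):
    ann = ET.Element("ImageAnnotation", id=lesion_id)
    calc = ET.SubElement(ann, "Calculation", description=label)
    ET.SubElement(calc, "Value", value=str(value), unit=unit)
    return ET.tostring(ann, encoding="unicode")

xml_text = annotation_xml("lesion-1", 12.3, "mm", "long axis diameter")
print(xml_text)
```

Files of this shape are what a downstream database can mine in aggregate, since every measurement arrives with a machine-readable label, value, and unit rather than free text.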

  1. CytometryML: a markup language for analytical cytology

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.; Leif, Suzanne B.

    2003-06-01

    Cytometry Markup Language, CytometryML, is a proposed new analytical cytology data standard. CytometryML is a set of XML schemas for encoding both flow cytometry and digital microscopy text-based data types. CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. These schemas provide representations for the keywords in FCS 3.0 and will soon include DICOM microscopic image data. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. A preliminary version of a list-mode binary data type, which does not presently exist in DICOM, has been designed. This binary type is required to enhance the storage and transmission of flow cytometry and digital microscopy data. Index files based on Waveform indices will be used to rapidly locate the cells present in individual subsets. DICOM has the advantage of employing standard file types, TIF and JPEG, for Digital Microscopy. Using an XML schema-based representation means that standard commercial software packages such as Excel and MathCad can be used to analyze, display, and store analytical cytometry data. Furthermore, by providing one standard for both DICOM data and analytical cytology data, it eliminates the need to create and maintain special purpose interfaces for analytical cytology data, thereby integrating the data into the larger DICOM and other clinical communities. A draft version of CytometryML is available at www.newportinstruments.com.
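The core move here, re-expressing text-based FCS keyword/value pairs as XML, can be illustrated as follows. The tag names generated in this sketch are invented, not the actual CytometryML schemas.

```python
import xml.etree.ElementTree as ET

# Illustration only: turn FCS keyword/value pairs into XML elements.
# The wrapper tag and naming convention are invented for this sketch,
# not taken from the CytometryML schemas.
fcs_keywords = {"$TOT": "10000", "$PAR": "4", "$CYT": "ExampleCytometer"}

root = ET.Element("ListModeDescription")
for key, value in fcs_keywords.items():
    # Strip the FCS '$' prefix so the keyword becomes a legal XML tag.
    ET.SubElement(root, key.lstrip("$").capitalize()).text = value

print(ET.tostring(root, encoding="unicode"))
```

Once the keywords live in XML rather than a proprietary text header, generic schema-aware tools can validate and query them, which is the interoperability benefit the abstract claims.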

  2. ArdenML: The Arden Syntax Markup Language (or Arden Syntax: It's Not Just Text Any More!)

    Science.gov (United States)

    Sailors, R. Matthew

    2001-01-01

    It is no longer necessary to think of Arden Syntax as simply a text-based knowledge base format. ArdenML (the Arden Syntax Markup Language) is an XML-based markup language that allows structured access to most of the maintenance and library categories without the need to write or buy a compiler, and it may lead to the development of simple commercial and freeware tools for processing Arden Syntax Medical Logic Modules (MLMs).

  3. Impact of the zero-markup drug policy on hospitalisation expenditure in western rural China: an interrupted time series analysis.

    Science.gov (United States)

    Yang, Caijun; Shen, Qian; Cai, Wenfang; Zhu, Wenwen; Li, Zongjie; Wu, Lina; Fang, Yu

    2017-02-01

    To assess the long-term effects of the introduction of China's zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditures after reimbursement. An interrupted time series was used to evaluate the impact of the zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditure after reimbursement at primary health institutions in Fufeng County of Shaanxi Province, western China. Two regression models were developed. Monthly average hospitalisation expenditure and monthly average hospitalisation expenditure after reimbursement in primary health institutions were analysed covering the period 2009 through to 2013. For the monthly average hospitalisation expenditure, the increasing trend was slowed down after the introduction of the zero-markup drug policy (coefficient = -16.49, P = 0.009). For the monthly average hospitalisation expenditure after reimbursement, the increasing trend was slowed down after the introduction of the zero-markup drug policy (coefficient = -10.84, P = 0.064), and a significant decrease in the intercept was noted after the second intervention of changes in reimbursement schemes of the new rural cooperative medical insurance (coefficient = -220.64, P markup drug policy in western China. However, hospitalisation expenditure and hospitalisation expenditure after reimbursement were still increasing. More effective policies are needed to prevent these costs from continuing to rise. © 2016 John Wiley & Sons Ltd.
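The interrupted-time-series design used in this study can be sketched as a segmented regression with level-change and trend-change terms. The sketch below fits one to synthetic monthly data (not the study's data) with NumPy least squares.

```python
import numpy as np

# Segmented-regression sketch of an interrupted time series (synthetic
# data, not the study's): b2 captures a level change and b3 a trend
# change at the intervention month t0.
rng = np.random.default_rng(0)
t = np.arange(60, dtype=float)   # months
t0 = 30                          # policy introduction
post = (t >= t0).astype(float)
y = 100 + 2.0 * t - 15.0 * post - 1.5 * (t - t0) * post + rng.normal(0, 1, 60)

# Design matrix: intercept, baseline trend, level change, trend change.
X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(1))  # roughly [100, 2, -15, -1.5]: the trend slows after t0
```

A negative trend-change coefficient of this kind is what the abstract means by the increasing trend being "slowed down" after the zero-markup policy.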

  4. GOATS Image Projection Component

    Science.gov (United States)

    Haber, Benjamin M.; Green, Joseph J.

    2011-01-01

    When doing mission analysis and design of an imaging system in orbit around the Earth, answering the fundamental question of imaging performance requires an understanding of the image products that will be produced by the imaging system. GOATS software represents a series of MATLAB functions to provide for geometric image projections. Unique features of the software include function modularity, a standard MATLAB interface, easy-to-understand first-principles-based analysis, and the ability to perform geometric image projections of framing type imaging systems. The software modules are created for maximum analysis utility, and can all be used independently for many varied analysis tasks, or used in conjunction with other orbit analysis tools.

  5. Root system markup language: toward a unified root architecture description language.

    Science.gov (United States)

    Lobet, Guillaume; Pound, Michael P; Diener, Julien; Pradal, Christophe; Draye, Xavier; Godin, Christophe; Javaux, Mathieu; Leitner, Daniel; Meunier, Félicien; Nacry, Philippe; Pridmore, Tony P; Schnepf, Andrea

    2015-03-01

    The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflows. © 2015 American Society of Plant Biologists. All Rights Reserved.
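To illustrate the kind of geometry RSML stores, the sketch below reads a small RSML-like file and computes a root's length along its polyline. The structure follows the paper's outline (scene, plant, root, geometry), but the tag names are simplified here and should be treated as illustrative.

```python
import math
import xml.etree.ElementTree as ET

# Sketch of reading an RSML-like file: one root whose geometry is a
# polyline of points. Tag names are simplified for illustration.
doc = """<rsml>
  <scene>
    <plant>
      <root id="r1">
        <geometry>
          <polyline>
            <point x="0" y="0"/>
            <point x="3" y="4"/>
            <point x="3" y="10"/>
          </polyline>
        </geometry>
      </root>
    </plant>
  </scene>
</rsml>"""

root_el = ET.fromstring(doc).find(".//root")
pts = [(float(p.get("x")), float(p.get("y")))
       for p in root_el.findall(".//point")]
# Total root length = sum of segment lengths along the polyline.
length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
print(length)  # 5.0 + 6.0 = 11.0
```

Because every tool writes the same polyline representation, a derived trait like root length can be computed identically regardless of which image analysis package produced the file.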

  6. FuGEFlow: data model and markup language for flow cytometry

    Directory of Open Access Journals (Sweden)

    Manion Frank J

    2009-06-01

    . Additional project documentation, including reusable design patterns and a guide for setting up a development environment, was contributed back to the FuGE project. Conclusion We have shown that an extension of FuGE can be used to transform minimum information requirements in natural language to markup language in XML. Extending FuGE required significant effort, but in our experiences the benefits outweighed the costs. The FuGEFlow is expected to play a central role in describing flow cytometry experiments and ultimately facilitating data exchange including public flow cytometry repositories currently under development.

  7. FuGEFlow: data model and markup language for flow cytometry.

    Science.gov (United States)

    Qian, Yu; Tchuvatkina, Olga; Spidlen, Josef; Wilkinson, Peter; Gasparetto, Maura; Jones, Andrew R; Manion, Frank J; Scheuermann, Richard H; Sekaly, Rafick-Pierre; Brinkman, Ryan R

    2009-06-16

    Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts being taken to eliminate this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt compliant experiment description. The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets. Additional project documentation, including reusable design patterns and a guide for setting up a development environment, was contributed back to the FuGE project.

  8. A standard MIGS/MIMS compliant XML Schema: toward the development of the Genomic Contextual Data Markup Language (GCDML).

    Science.gov (United States)

    Kottmann, Renzo; Gray, Tanya; Murphy, Sean; Kagan, Leonid; Kravitz, Saul; Lombardot, Thierry; Field, Dawn; Glöckner, Frank Oliver

    2008-06-01

    The Genomic Contextual Data Markup Language (GCDML) is a core project of the Genomic Standards Consortium (GSC) that implements the "Minimum Information about a Genome Sequence" (MIGS) specification and its extension, the "Minimum Information about a Metagenome Sequence" (MIMS). GCDML is an XML Schema for generating MIGS/MIMS compliant reports for data entry, exchange, and storage. When mature, this sample-centric, strongly-typed schema will provide a diverse set of descriptors for describing the exact origin and processing of a biological sample, from sampling to sequencing, and subsequent analysis. Here we describe the need for such a project, outline design principles required to support the project, and make an open call for participation in defining the future content of GCDML. GCDML is freely available, and can be downloaded, along with documentation, from the GSC Web site (http://gensc.org).
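A minimal sketch of the kind of completeness check a MIGS/MIMS-compliant report enables. The descriptor names below are hypothetical stand-ins; GCDML itself defines the real, strongly-typed vocabulary via its XML Schema.

```python
import xml.etree.ElementTree as ET

# Verify that a report fragment carries a minimum set of contextual
# descriptors, in the spirit of a MIGS/MIMS compliance check.
# The element names here are invented for illustration.
REQUIRED = {"sample_origin", "collection_date", "sequencing_method"}

report = ET.fromstring(
    "<report>"
    "<sample_origin>marine sediment</sample_origin>"
    "<collection_date>2008-06-01</collection_date>"
    "<sequencing_method>Sanger</sequencing_method>"
    "</report>"
)

present = {child.tag for child in report}
missing = REQUIRED - present
print("missing descriptors:", sorted(missing))
```

In practice this role is played by schema validation against the GCDML XSD rather than hand-written checks; the sketch only shows why a controlled, typed vocabulary makes such checks mechanical.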

  9. DEMAND FOR AND SUPPLY OF MARK-UP AND PLS FUNDS IN ISLAMIC BANKING: SOME ALTERNATIVE EXPLANATIONS

    OpenAIRE

    KHAN, TARIQULLAH

    1995-01-01

    Profit and loss-sharing (PLS) and bai’ al murabahah lil amir bil shira (mark-up) are the two parent principles of Islamic financing. The use of PLS is limited and that of mark-up overwhelming in the operations of the Islamic banks. Several studies provide different explanations for this phenomenon. The dominant among these is the moral hazard hypothesis. Some alternative explanations are given in the present paper. The discussion is based on both demand (user of funds) and supply (bank) side ...

  10. ART-ML: a new markup language for modelling and representation of biological processes in cardiovascular diseases.

    Science.gov (United States)

    Karvounis, E C; Exarchos, T P; Fotiou, E; Sakellarios, A I; Iliopoulou, D; Koutsouris, D; Fotiadis, D I

    2013-01-01

    With an ever increasing number of biological models available on the internet, a standardized modelling framework is required to allow information to be accessed and visualized. In this paper we propose a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of geometry, blood flow, plaque progression and stent modelling, exported by any cardiovascular disease modelling software. ART-ML has been developed and tested using ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in 3D representations. All the above described procedures integrate disparate data formats, protocols and tools. ART-ML, expanding ARTool, provides a representation that makes the individual resources interoperable, defining a standard unified model for the description of data and, consequently, a machine-independent format for their exchange and representation. More specifically, the ARTool platform incorporates efficient algorithms which are able to perform blood flow simulations and atherosclerotic plaque evolution modelling. Integration of data layers between different modules within ARTool is based upon the interchange of information included in the ART-ML model repository. ART-ML provides a markup representation that enables the representation and management of embedded models within the cardiovascular disease modelling platform, and the storage and interchange of well-defined information. The corresponding ART-ML model incorporates all relevant information regarding geometry, blood flow, plaque progression and stent modelling procedures. All created models are stored in a model repository database which is accessible to the research community through efficient web interfaces, enabling the interoperability of any cardiovascular disease modelling software.

  11. iPad: Semantic annotation and markup of radiological images.

    Science.gov (United States)

    Rubin, Daniel L; Rodriguez, Cesar; Shah, Priyanka; Beaulieu, Chris

    2008-11-06

    Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.

  12. Developing a Markup Language for Encoding Graphic Content in Plan Documents

    Science.gov (United States)

    Li, Jinghuan

    2009-01-01

    While deliberating and making decisions, participants in urban development processes need easy access to the pertinent content scattered among different plans. A Planning Markup Language (PML) has been proposed to represent the underlying structure of plans in an XML-compliant way. However, PML currently covers only textual information and lacks…

  13. Resolving Controlled Vocabulary in DITA Markup: A Case Example in Agroforestry

    Science.gov (United States)

    Zschocke, Thomas

    2012-01-01

    Purpose: This paper aims to address the issue of matching controlled vocabulary on agroforestry from knowledge organization systems (KOS) and incorporating these terms in DITA markup. The paper has been selected for an extended version from MTSR'11. Design/methodology/approach: After a general description of the steps taken to harmonize controlled…

  14. The gel electrophoresis markup language (GelML) from the Proteomics Standards Initiative.

    Science.gov (United States)

    Gibson, Frank; Hoogland, Christine; Martinez-Bartolomé, Salvador; Medina-Aunon, J Alberto; Albar, Juan Pablo; Babnigg, Gyorgy; Wipat, Anil; Hermjakob, Henning; Almeida, Jonas S; Stanislaus, Romesh; Paton, Norman W; Jones, Andrew R

    2010-09-01

    The Human Proteome Organisation's Proteomics Standards Initiative has developed the GelML (gel electrophoresis markup language) data exchange format for representing gel electrophoresis experiments performed in proteomics investigations. The format closely follows the reporting guidelines for gel electrophoresis, which are part of the Minimum Information About a Proteomics Experiment (MIAPE) set of modules. GelML supports the capture of metadata (such as experimental protocols) and data (such as gel images) resulting from gel electrophoresis so that laboratories can be compliant with the MIAPE Gel Electrophoresis guidelines, while allowing such data sets to be exchanged or downloaded from public repositories. The format is sufficiently flexible to capture data from a broad range of experimental processes, and complements other PSI formats for MS data and the results of protein and peptide identifications to capture entire gel-based proteome workflows. GelML has resulted from the open standardisation process of PSI consisting of both public consultation and anonymous review of the specifications.

  15. Experimental Applications of Automatic Test Markup Language (ATML)

    Science.gov (United States)

    Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris

    2012-01-01

    The authors describe challenging use-cases for Automatic Test Markup Language (ATML), and evaluate solutions. The first case uses ATML Test Results to deliver active features to support test procedure development and test flow, and bridging mixed software development environments. The second case examines adding attributes to Systems Modelling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.

  16. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    Science.gov (United States)

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.
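The nesting SBML uses can be sketched as a toy document: two species and one reaction. This only illustrates the general shape of a Level 2 model; real documents should be produced with libSBML and validated against the specification, and the namespace URI below is the one commonly used for Level 2 rather than a guaranteed constant.

```python
import xml.etree.ElementTree as ET

# A toy document in the general shape of an SBML Level 2 model.
# Illustrative only; use libSBML for real SBML production and validation.
NS = "http://www.sbml.org/sbml/level2"
sbml = ET.Element("{%s}sbml" % NS, level="2", version="1")
model = ET.SubElement(sbml, "{%s}model" % NS, id="toy_model")
species_list = ET.SubElement(model, "{%s}listOfSpecies" % NS)
for sid in ("S1", "S2"):
    ET.SubElement(species_list, "{%s}species" % NS, id=sid, compartment="cell")
reactions = ET.SubElement(model, "{%s}listOfReactions" % NS)
ET.SubElement(reactions, "{%s}reaction" % NS, id="S1_to_S2")

doc = ET.tostring(sbml, encoding="unicode")
print(doc)
```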

  17. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.

  18. A methodology for evaluation of a markup-based specification of clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a three-phase, nine-step methodology for specification of clinical guidelines (GLs) by expert physicians, clinical editors, and knowledge engineers, and for quantitative evaluation of the specification's quality. We applied this methodology to a particular framework for incremental GL structuring (mark-up) and to GLs in three clinical domains with encouraging results.

  19. Geospatial Visualization of Scientific Data Through Keyhole Markup Language

    Science.gov (United States)

    Wernecke, J.; Bailey, J. E.

    2008-12-01

    The development of virtual globes has provided a fun and innovative tool for exploring the surface of the Earth. However, it has been the parallel maturation of Keyhole Markup Language (KML) that has created a new medium and perspective through which to visualize scientific datasets. KML was originally created by Keyhole Inc., which was acquired by Google in 2004; in 2007 KML was given over to the Open Geospatial Consortium (OGC). It became an OGC international standard on 14 April 2008, and has subsequently been adopted by all major geobrowser developers (e.g., Google, Microsoft, ESRI, NASA) and many smaller ones (e.g., Earthbrowser). By making KML a standard at a relatively young stage in its evolution, developers of the language are seeking to avoid the issues that plagued the early World Wide Web and development of Hypertext Markup Language (HTML). The popularity and utility of Google Earth, in particular, has been enhanced by KML features such as the Smithsonian volcano layer and the dynamic weather layers. Through KML, users can view real-time earthquake locations (USGS), view animations of polar sea-ice coverage (NSIDC), or read about the daily activities of chimpanzees (Jane Goodall Institute). Perhaps even more powerful is the fact that any users can create, edit, and share their own KML, with no or relatively little knowledge of manipulating computer code. We present an overview of the best current scientific uses of KML and a guide to how scientists can learn to use KML themselves.
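Because KML is plain XML, a scientist can emit a simple overlay without specialized tools. The sketch below generates a single Placemark (the coordinates are illustrative); note that KML orders coordinates as longitude,latitude[,altitude].

```python
import xml.etree.ElementTree as ET

# Generate a minimal KML document with one Placemark, e.g. an epicentre
# marker, ready to open in a geobrowser such as Google Earth.
KML_NS = "http://www.opengis.net/kml/2.2"
kml = ET.Element("{%s}kml" % KML_NS)
doc = ET.SubElement(kml, "{%s}Document" % KML_NS)
pm = ET.SubElement(doc, "{%s}Placemark" % KML_NS)
ET.SubElement(pm, "{%s}name" % KML_NS).text = "M5.1 earthquake"
point = ET.SubElement(pm, "{%s}Point" % KML_NS)
# KML coordinate order: longitude,latitude,altitude
ET.SubElement(point, "{%s}coordinates" % KML_NS).text = "-122.42,37.77,0"

kml_text = ET.tostring(kml, encoding="unicode")
print(kml_text)
```

Saving `kml_text` to a `.kml` file is enough to load the placemark in any KML-aware geobrowser.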

  20. Development of Markup Language for Medical Record Charting: A Charting Language.

    Science.gov (United States)

    Jung, Won-Mo; Chae, Younbyoung; Jang, Bo-Hyoung

    2015-01-01

    Many efforts to collect electronic medical records (EMRs) are currently underway. However, structuring the data format for an EMR is an especially labour-intensive task for practitioners. Here we propose a new mark-up language for medical record charting (called Charting Language), which borrows useful properties from programming languages. With Charting Language, text data recorded in dynamic clinical situations can be readily used for information extraction.

  1. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...

  2. Field Markup Language: biological field representation in XML.

    Science.gov (United States)

    Chang, David; Lovell, Nigel H; Dokos, Socrates

    2007-01-01

    With an ever increasing number of biological models available on the internet, a standardized modeling framework is required to allow information to be accessed or visualized. Based on the Physiome Modeling Framework, the Field Markup Language (FML) is being developed to describe and exchange field information for biological models. In this paper, we describe the basic features of FML, its supporting application framework and its ability to incorporate CellML models to construct tissue-scale biological models. As a typical application example, we present a spatially-heterogeneous cardiac pacemaker model which utilizes both FML and CellML to describe and solve the underlying equations of electrical activation and propagation.

  3. Standard generalized markup language: A guide for transmitting encoded bibliographic records

    International Nuclear Information System (INIS)

    1994-09-01

    This document provides the guidance necessary to transmit to DOE's Office of Scientific and Technical Information (OSTI) an encoded bibliographic record that conforms to International Standard ISO 8879, Information Processing -- Text and office systems -- Standard Generalized Markup Language (SGML). Included in this document are element and attribute tag definitions, sample bibliographic records, the bibliographic document type definition, and instructions on how to transmit a bibliographic record electronically to OSTI

  4. Standard generalized markup language: A guide for transmitting encoded bibliographic records

    Energy Technology Data Exchange (ETDEWEB)

    1994-09-01

    This document provides the guidance necessary to transmit to DOE's Office of Scientific and Technical Information (OSTI) an encoded bibliographic record that conforms to International Standard ISO 8879, Information Processing -- Text and office systems -- Standard Generalized Markup Language (SGML). Included in this document are element and attribute tag definitions, sample bibliographic records, the bibliographic document type definition, and instructions on how to transmit a bibliographic record electronically to OSTI.

  5. Nuclear Fuel Assembly Assessment Project and Image Categorization

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, C.S.; Lindblad, T.; Waldemark, K. [Royal Inst. of Tech., Stockholm (Sweden); Hildingsson, Lars [Swedish Nuclear Power Inspectorate, Stockholm (Sweden)

    1998-07-01

    A project has been underway to add digital imaging and processing to the inspection of nuclear fuel by the International Atomic Energy Agency. The ultimate goals are to provide the inspector not only with the advantages of CCD imaging, such as high sensitivity and digital image enhancements, but also with an intelligent agent that can analyze the images and provide useful information about the fuel assemblies in real time. The project is still in the early stages and has inspired several interesting sub-projects. Here we first give a review of the work on fuel assembly image analysis and then give a brief status report on one of these sub-projects, which concerns automatic categorization of fuel assembly images. The technique could be of benefit to the general challenge of image categorization.

  6. Projection x-space magnetic particle imaging.

    Science.gov (United States)

    Goodwill, Patrick W; Konkle, Justin J; Zheng, Bo; Saritas, Emine U; Conolly, Steven M

    2012-05-01

    Projection magnetic particle imaging (MPI) can improve imaging speed by over 100-fold over traditional 3-D MPI. In this work, we derive the 2-D x-space signal equation and 2-D image equation, and introduce the concept of signal fading and resolution loss for a projection MPI imager. We then describe the design and construction of an x-space projection MPI scanner with a field gradient of 2.35 T/m across a 10 cm magnet free bore. The system has an expected resolution of 3.5 × 8.0 mm using Resovist tracer, and an experimental resolution of 3.8 × 8.4 mm. The system images 2.5 cm × 5.0 cm partial fields of view (FOVs) at 10 frames/s, and acquires a full field of view of 10 cm × 5.0 cm in 4 s. We conclude by imaging a resolution phantom, a complex "Cal" phantom, and mice injected with Resovist tracer, and experimentally confirm the theoretically predicted x-space spatial resolution.

  7. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    Science.gov (United States)

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and of predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.
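The predictive quantities such a model must carry can be shown in a few lines: the textbook GPR posterior with an RBF kernel and two training points, computed in pure Python. This illustrates the mean and confidence bound that PMML 4.3 can represent, not the PMML encoding itself; the data and hyperparameters are invented.

```python
import math

# Textbook GPR prediction: RBF kernel, two training points, closed-form
# 2x2 inverse. Data and hyperparameters are illustrative.
def rbf(a, b, length=1.0):
    return math.exp(-((a - b) ** 2) / (2.0 * length ** 2))

x_train, y_train = [0.0, 1.0], [0.0, 1.0]
noise = 1e-6

k01 = rbf(x_train[0], x_train[1])
a, b = 1.0 + noise, k01
det = a * a - b * b
K_inv = [[a / det, -b / det], [-b / det, a / det]]

x_star = 0.5
k_star = [rbf(x_star, x) for x in x_train]
alpha = [sum(K_inv[i][j] * y_train[j] for j in range(2)) for i in range(2)]

# Predictive mean and variance at x_star.
mean = sum(k_star[i] * alpha[i] for i in range(2))
var = rbf(x_star, x_star) - sum(
    k_star[i] * K_inv[i][j] * k_star[j] for i in range(2) for j in range(2)
)
print(f"mean={mean:.3f}, 95% bound=±{1.96 * math.sqrt(var):.3f}")
```

The variance term is exactly what distinguishes GPR from the deterministic regressors earlier PMML versions covered: without it there is nothing from which to derive confidence bounds.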

  8. A primer on the Petri Net Markup Language and ISO/IEC 15909-2

    DEFF Research Database (Denmark)

    Hillah, L. M.; Kindler, Ekkart; Kordon, F.

    2009-01-01

    Standard, defines a transfer format for high-level nets. The transfer format defined in Part 2 of ISO/IEC 15909 is (or is based on) the Petri Net Markup Language (PNML), which was originally introduced as an interchange format for different kinds of Petri nets. In ISO/IEC 15909-2, however...

  9. Projecting Images on a Sphere

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A system for projecting images on an object with a reflective surface. A plurality of image projectors are spaced around the object and synchronized such that each...

  10. QUESTION ANSWERING SYSTEM BASED ON ARTIFICIAL INTELLIGENCE MARKUP LANGUAGE AS AN INFORMATION MEDIUM

    OpenAIRE

    Fajrin Azwary; Fatma Indriani; Dodon T. Nugrahadi

    2016-01-01

    Artificial intelligence technology can today be implemented in a variety of forms, such as chatbots, and with various methods, one of them being the Artificial Intelligence Markup Language (AIML). AIML uses template matching, comparing input against specific patterns in the database. The AIML template design process begins with determining the necessary information, which is then formed into questions; these questions are adapted to AIML patterns. From the results of the study, it can be seen that the Question-Answering...
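A minimal sketch of AIML-style template matching: patterns are word sequences in which "*" matches one or more words, and the first matching category supplies the response. Real AIML adds further wildcards, <srai> recursion, and conversational context, all omitted here; the categories below are invented.

```python
# Toy AIML-style matcher. "*" must consume at least one word, mirroring
# the behaviour of the AIML "*" wildcard.
def matches(pattern, words):
    if not pattern:
        return not words
    head, rest = pattern[0], pattern[1:]
    if head == "*":
        return any(matches(rest, words[i:]) for i in range(1, len(words) + 1))
    return bool(words) and head == words[0] and matches(rest, words[1:])

# (pattern, template) pairs, analogous to AIML <category> elements.
categories = [
    (["HELLO"], "Hi there!"),
    (["WHAT", "IS", "*"], "Let me look that up."),
]

def respond(sentence):
    words = sentence.upper().split()
    for pattern, template in categories:
        if matches(pattern, words):
            return template
    return "I do not understand."

print(respond("what is AIML"))
```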

  11. SBRML: a markup language for associating systems biology data with models.

    Science.gov (United States)

    Dada, Joseph O; Spasić, Irena; Paton, Norman W; Mendes, Pedro

    2010-04-01

    Research in systems biology is carried out through a combination of experiments and models. Several data standards have been adopted for representing models (Systems Biology Markup Language) and various types of relevant experimental data (such as FuGE and those of the Proteomics Standards Initiative). However, until now, there has been no standard way to associate a model and its entities to the corresponding datasets, or vice versa. Such a standard would provide a means to represent computational simulation results as well as to frame experimental data in the context of a particular model. Target applications include model-driven data analysis, parameter estimation, and sharing and archiving model simulations. We propose the Systems Biology Results Markup Language (SBRML), an XML-based language that associates a model with several datasets. Each dataset is represented as a series of values associated with model variables, and their corresponding parameter values. SBRML provides a flexible way of indexing the results to model parameter values, which supports both spreadsheet-like data and multidimensional data cubes. We present and discuss several examples of SBRML usage in applications such as enzyme kinetics, microarray gene expression and various types of simulation results. The XML Schema file for SBRML is available at http://www.comp-sys-bio.org/SBRML under the Academic Free License (AFL) v3.0.

  12. Pathology data integration with eXtensible Markup Language.

    Science.gov (United States)

    Berman, Jules J

    2005-02-01

    It is impossible to overstate the importance of XML (eXtensible Markup Language) as a data organization tool. With XML, pathologists can annotate all of their data (clinical and anatomic) in a format that can transform every pathology report into a database, without compromising narrative structure. The purpose of this manuscript is to provide an overview of XML for pathologists. Examples will demonstrate how pathologists can use XML to annotate individual data elements and to structure reports in a common format that can be merged with other XML files or queried using standard XML tools. This manuscript gives pathologists a glimpse into how XML allows pathology data to be linked to other types of biomedical data and reduces our dependence on centralized proprietary databases.
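The queryability the author describes can be illustrated with the standard library alone. The tag names below are invented for illustration, not drawn from any actual pathology schema; the point is that annotated elements become directly queryable, unlike free narrative text.

```python
import xml.etree.ElementTree as ET

# A toy annotated pathology report: once data elements are tagged,
# standard XML tools can query them like a database.
report = ET.fromstring(
    "<report>"
    "<specimen site='colon'>"
    "<diagnosis>adenocarcinoma</diagnosis>"
    "<margin status='negative'/>"
    "</specimen>"
    "</report>"
)

site = report.find("specimen").get("site")
diagnoses = [d.text for d in report.findall(".//diagnosis")]
print(site, diagnoses)
```

The same `findall` queries work unchanged after merging many such reports into one file, which is the "every report becomes a database" property the abstract emphasizes.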

  13. Photoacoustic projection imaging using an all-optical detector array

    Science.gov (United States)

    Bauer-Marschallinger, J.; Felbermayer, K.; Berer, T.

    2018-02-01

    We present a prototype for all-optical photoacoustic projection imaging. By generating projection images, photoacoustic information of large volumes can be retrieved with less effort compared to common photoacoustic computed tomography where many detectors and/or multiple measurements are required. In our approach, an array of 60 integrating line detectors is used to acquire photoacoustic waves. The line detector array consists of fiber-optic Mach-Zehnder interferometers, distributed on a cylindrical surface. From the measured variation of the optical path lengths of the interferometers, induced by photoacoustic waves, a photoacoustic projection image can be reconstructed. The resulting images represent the projection of the three-dimensional spatial light absorbance within the imaged object onto a two-dimensional plane, perpendicular to the line detector array. The fiber-optic detectors achieve a noise-equivalent pressure of 24 Pascal at a 10 MHz bandwidth. We present the operational principle, the structure of the array, and resulting images. The system can acquire high-resolution projection images of large volumes within a short period of time. Imaging large volumes at high frame rates facilitates monitoring of dynamic processes.

  14. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  15. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    Science.gov (United States)

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    The mechanism models for primary organic reactions encoding the structural fragments undergoing substitution, addition, elimination, and rearrangements are developed. In the proposed models, each and every structural component of mechanistic pathways is represented with flexible and fragment based markup technique in XML syntax. A significant feature of the system is the encoding of the electron movements along with the other components like charges, partial charges, half bonded species, lone pair electrons, free radicals, reaction arrows, etc. needed for a complete representation of reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting them into SVG documents enabling the desired layouts normally perceived by the chemists conventionally. An automatic representation of the complex patterns of the reaction mechanism is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.

  16. Free Trade Agreements and Firm-Product Markups in Chilean Manufacturing

    DEFF Research Database (Denmark)

    Lamorgese, A.R.; Linarello, A.; Warzynski, Frederic Michel Patrick

    In this paper, we use detailed information about firms' product portfolio to study how trade liberalization affects prices, markups and productivity. We document these effects using firm product level data in Chilean manufacturing following two major trade agreements with the EU and the US....... The dataset provides information about the value and quantity of each good produced by the firm, as well as the amount of exports. One additional and unique characteristic of our dataset is that it provides a firm-product level measure of the unit average cost. We use this information to compute a firm...

  17. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation; moreover, because the projections are normalized, matching remains correct even when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
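The speed-up described above is easy to sketch: collapse both image and template to 1-D column-sum profiles, normalize them to zero mean and unit norm, and slide the template profile along the image profile. A minimal illustration in Python/NumPy follows (this is not the authors' code; the function names and toy data are invented):

```python
import numpy as np

def profile(img):
    """1-D column-sum projection, zero-mean and unit-norm so that a
    proportional change in brightness/amplitude cannot break the match."""
    p = img.sum(axis=0).astype(float)
    p -= p.mean()
    n = np.linalg.norm(p)
    return p / n if n > 0 else p

def match_offset(image, template):
    """Best horizontal offset of `template` in `image`, found by
    correlating 1-D projections instead of sliding the 2-D template."""
    full = image.sum(axis=0).astype(float)
    tp = profile(template)
    w = template.shape[1]
    scores = []
    for x in range(image.shape[1] - w + 1):
        win = full[x:x + w] - full[x:x + w].mean()
        n = np.linalg.norm(win)
        scores.append(float(np.dot(win / n if n > 0 else win, tp)))
    return int(np.argmax(scores))

# Toy data: an 8x8 ramp pattern embedded at column 20 of a noisy image.
rng = np.random.default_rng(0)
template = np.tile(np.arange(1.0, 9.0), (8, 1))
image = rng.normal(0.0, 0.05, size=(8, 64))
image[:, 20:28] += 2.0 * template          # doubled amplitude still matches
print(match_offset(image, template))       # -> 20
```

Because the correlation is computed on normalized 1-D profiles, the cost per candidate offset drops from O(width x height) to O(width), which is the source of the claimed speed-up.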

  18. Do state minimum markup/price laws work? Evidence from retail scanner data and TUS-CPS.

    Science.gov (United States)

    Huang, Jidong; Chriqui, Jamie F; DeLong, Hillary; Mirza, Maryam; Diaz, Megan C; Chaloupka, Frank J

    2016-10-01

    Minimum markup/price laws (MPLs) have been proposed as an alternative non-tax pricing strategy to reduce tobacco use and access. However, the empirical evidence on the effectiveness of MPLs in increasing cigarette prices is very limited. This study aims to fill this critical gap by examining the association between MPLs and cigarette prices. State MPLs were compiled from primary legal research databases and were linked to cigarette prices constructed from the Nielsen retail scanner data and the self-reported cigarette prices from the Tobacco Use Supplement to the Current Population Survey. Multivariate regression analyses were conducted to examine the association between MPLs and the major components of MPLs and cigarette prices. The presence of MPLs was associated with higher cigarette prices. In addition, cigarette prices were higher, above and beyond the higher prices resulting from MPLs, in states that prohibit below-cost combination sales; do not allow any distributing party to use trade discounts to reduce the base cost of cigarettes; prohibit distributing parties from meeting the price of a competitor, and prohibit distributing below-cost coupons to the consumer. Moreover, states that had total markup rates >24% were associated with significantly higher cigarette prices. MPLs are an effective way to increase cigarette prices. The impact of MPLs can be further strengthened by imposing greater markup rates and by prohibiting coupon distribution, competitor price matching, and use of below-cost combination sales and trade discounts. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
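The price floor such laws impose compounds at each distribution tier. A toy calculation of that arithmetic (the cost and rates below are invented for illustration, not actual statutory values):

```python
# Illustrative only: statutory rates vary by state; these numbers are made up.
base_cost = 5.00            # invoice cost of a cigarette pack, pre-markup
wholesale_markup = 0.04     # hypothetical 4% wholesaler minimum markup
retail_markup = 0.24        # hypothetical 24% retailer minimum markup

# Each tier's minimum price is the previous tier's price plus its markup.
wholesale_floor = base_cost * (1 + wholesale_markup)
retail_floor = wholesale_floor * (1 + retail_markup)
print(round(retail_floor, 2))  # -> 6.45
```

Raising the retail markup rate, or prohibiting discounts that reduce `base_cost`, both push the resulting floor upward, which is the mechanism the study measures.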

  19. AllerML: markup language for allergens.

    Science.gov (United States)

    Ivanciuc, Ovidiu; Gendel, Steven M; Power, Trevor D; Schein, Catherine H; Braun, Werner

    2011-06-01

    Many concerns have been raised about the potential allergenicity of novel recombinant proteins introduced into food crops. Guidelines proposed by WHO/FAO and EFSA include the use of bioinformatics screening to assess the risk of potential allergenicity or cross-reactivity of all proteins introduced, for example, to improve nutritional value or promote crop resistance. However, there are no universally accepted standards that can be used to encode data on the biology of allergens to facilitate using data from multiple databases in this screening. Therefore, we developed AllerML, a markup language for allergens, to assist in the automated exchange of information between databases and in the integration of the bioinformatics tools that are used to investigate allergenicity and cross-reactivity. As proof of concept, AllerML was implemented using the Structural Database of Allergenic Proteins (SDAP; http://fermi.utmb.edu/SDAP/) database. General implementation of AllerML will promote the automatic flow of validated data that will aid in allergy research and regulatory analysis. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. PIML: the Pathogen Information Markup Language.

    Science.gov (United States)

    He, Yongqun; Vines, Richard R; Wattam, Alice R; Abramochkin, Georgiy V; Dickerman, Allan W; Eckart, J Dana; Sobral, Bruno W S

    2005-01-01

    A vast amount of information about human, animal and plant pathogens has been acquired, stored and displayed in varied formats through different resources, both electronically and otherwise. However, there is no community standard format for organizing this information or agreement on machine-readable format(s) for data exchange, thereby hampering interoperation efforts across information systems harboring such infectious disease data. The Pathogen Information Markup Language (PIML) is a free, open, XML-based format for representing pathogen information. XSLT-based visual presentations of valid PIML documents were developed and can be accessed through the PathInfo website or as part of the interoperable web services federation known as ToolBus/PathPort. Currently, detailed PIML documents are available for 21 pathogens deemed of high priority with regard to public health and national biological defense. A dynamic query system allows simple queries as well as comparisons among these pathogens. Continuing efforts are being made to bring other groups into supporting PIML and to develop more PIML documents. All PIML-related information is accessible from http://www.vbi.vt.edu/pathport/pathinfo/

  1. Earth Science Markup Language: Transitioning From Design to Application

    Science.gov (United States)

    Moe, Karen; Graves, Sara; Ramachandran, Rahul

    2002-01-01

    The primary objective of the proposed Earth Science Markup Language (ESML) research is to transition from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1. Refinement of the ESML schema and software libraries in cooperation with the user community. 2. Application of the ESML schema and software libraries to a variety of Earth science data sets and analysis tools. 3. Development of supporting prototype software for enhanced ease of use. 4. Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate. 5. Widespread publication of the ESML approach, schema, and software.

  2. Firm Dynamics and Markup Variations: Implications for Sunspot Equilibria and Endogenous Economic Fluctuation

    OpenAIRE

    Nir Jaimovich

    2007-01-01

    This paper analyzes how the interaction between firms’ entry-and-exit decisions and variations in competition gives rise to self-fulfilling, expectation-driven fluctuations in aggregate economic activity and in measured total factor productivity (TFP). The analysis is based on a dynamic general equilibrium model in which net business formation is endogenously procyclical and leads to endogenous countercyclical variations in markups. This interaction leads to indeterminacy in which economic fl...

  3. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.

  4. On the Power of Fuzzy Markup Language

    CERN Document Server

    Loia, Vincenzo; Lee, Chang-Shing; Wang, Mei-Hui

    2013-01-01

    One of the most successful methodologies that arose from the worldwide diffusion of Fuzzy Logic is Fuzzy Control. After the first attempts, dated to the seventies, this methodology has been widely exploited for controlling many industrial components and systems. At the same time, and quite independently of Fuzzy Logic or Fuzzy Control, the birth of the Web has impacted almost all aspects of the computing discipline. The evolution of the Web, Web 2.0 and Web 3.0 has made scenarios of ubiquitous computing much more feasible; consequently, information technology has been thoroughly integrated into everyday objects and activities. What happens when Fuzzy Logic meets Web technology? Interesting results might come out, as you will discover in this book. Fuzzy Markup Language is a child of this synergistic view, in which some technological issues of the Web are re-interpreted taking into account the transparent notion of Fuzzy Control, as discussed here.  The concept of a Fuzzy Control that is conceived and modeled in terms...

  5. A data model and database for high-resolution pathology analytical image informatics.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming

  6. A data model and database for high-resolution pathology analytical image informatics

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2011-01-01

    Full Text Available Background: The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. Context: This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). Aims: (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. Settings and Design: The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole

  7. Reconstruction of a cone-beam CT image via forward iterative projection matching

    International Nuclear Information System (INIS)

    Brock, R. Scott; Docef, Alen; Murphy, Martin J.

    2010-01-01

    Purpose: To demonstrate the feasibility of reconstructing a cone-beam CT (CBCT) image by deformably altering a prior fan-beam CT (FBCT) image such that it matches the anatomy portrayed in the CBCT projection data set. Methods: A prior FBCT image of the patient is assumed to be available as a source image. A CBCT projection data set is obtained and used as a target image set. A parametrized deformation model is applied to the source FBCT image, digitally reconstructed radiographs (DRRs) that emulate the CBCT projection image geometry are calculated and compared to the target CBCT projection data, and the deformation model parameters are adjusted iteratively until the DRRs optimally match the CBCT projection data set. The resulting deformed FBCT image is hypothesized to be an accurate representation of the patient's anatomy imaged by the CBCT system. The process is demonstrated via numerical simulation. A known deformation is applied to a prior FBCT image and used to create a synthetic set of CBCT target projections. The iterative projection matching process is then applied to reconstruct the deformation represented in the synthetic target projections; the reconstructed deformation is then compared to the known deformation. The sensitivity of the process to the number of projections and the DRR/CBCT projection mismatch is explored by systematically adding noise to and perturbing the contrast of the target projections relative to the iterated source DRRs and by reducing the number of projections. Results: When there is no noise or contrast mismatch in the CBCT projection images, a set of 64 projections allows the known deformed CT image to be reconstructed to within a nRMS error of 1% and the known deformation to within a nRMS error of 7%. A CT image nRMS error of less than 4% is maintained at noise levels up to 3% of the mean projection intensity, at which the deformation error is 13%. 
At 1% noise level, the number of projections can be reduced to 8 while maintaining

  8. Image reconstruction technique using projection data from neutron tomography system

    Directory of Open Access Journals (Sweden)

    Waleed Abd el Bar

    2015-12-01

    Full Text Available Neutron tomography is a very powerful technique for nondestructive evaluation of heavy industrial components as well as for soft hydrogenous materials enclosed in heavy metals, which are usually difficult to image using X-rays. Due to the properties of the image acquisition system, the projection images are distorted by several artifacts, and these reduce the quality of the reconstruction. In order to eliminate these harmful effects the projection images should be corrected before reconstruction. This paper describes a filtered back projection (FBP) technique used for reconstruction of projection data obtained from transmission measurements by a neutron tomography system. We demonstrate the use of the spatial Discrete Fourier Transform (DFT) and the 2D inverse DFT in the formulation of the method, and outline the theory of reconstruction of a 2D neutron image from a sequence of 1D projections taken at different angles between 0 and π in the MATLAB environment. Projections are generated by applying the Radon transform to the original image at different angles.
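The pipeline this abstract outlines (Radon transform to generate projections, ramp filtering via the DFT, then back projection) can be sketched with NumPy alone. The following is a generic parallel-beam FBP toy using nearest-neighbour interpolation, not the authors' MATLAB implementation; the phantom and all names are invented:

```python
import numpy as np

def rotate_nn(img, theta):
    # Nearest-neighbour rotation about the image centre (inverse mapping).
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    x, y = xs - c, ys - c
    xr = np.cos(theta) * x + np.sin(theta) * y
    yr = -np.sin(theta) * x + np.cos(theta) * y
    xi = np.clip(np.rint(xr + c).astype(int), 0, n - 1)
    yi = np.clip(np.rint(yr + c).astype(int), 0, n - 1)
    return img[yi, xi]

def radon(img, thetas):
    # One 1-D projection (column sums of the rotated image) per angle.
    return np.stack([rotate_nn(img, t).sum(axis=0) for t in thetas])

def fbp(sino, thetas):
    # Ramp-filter each projection in the Fourier domain, then smear it
    # back across the image plane (filtered back projection).
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    recon = np.zeros((n, n))
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    x, y = xs - c, ys - c
    for p, t in zip(filtered, thetas):
        s = np.cos(t) * x + np.sin(t) * y      # detector coordinate
        idx = np.clip(np.rint(s + c).astype(int), 0, n - 1)
        recon += p[idx]
    return recon * np.pi / (2 * len(thetas))

# Disk phantom reconstructed from 90 projections over [0, pi).
n = 64
yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
phantom = ((xx**2 + yy**2) < 12**2).astype(float)
thetas = np.linspace(0, np.pi, 90, endpoint=False)
recon = fbp(radon(phantom, thetas), thetas)
print(recon[n // 2, n // 2] > recon[2, 2])  # centre well above background
```

A real implementation would interpolate rather than round to the nearest detector bin and would zero-pad before filtering, but the structure (project, filter, back-project) is the same one described in the abstract.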

  9. Medical imaging projects meet at CERN

    CERN Multimedia

    CERN Bulletin

    2013-01-01

    ENTERVISION, the Research Training Network in 3D Digital Imaging for Cancer Radiation Therapy, successfully passed its mid-term review held at CERN on 11 January. This multidisciplinary project aims at qualifying experts in medical imaging techniques for improved hadron therapy.   ENTERVISION provides training in physics, medicine, electronics, informatics, radiobiology and engineering, as well as a wide range of soft skills, to 16 researchers of different backgrounds and nationalities. The network is funded by the European Commission within the Marie Curie Initial Training Network, and relies on the EU-funded research project ENVISION to provide a training platform for the Marie Curie researchers. The two projects hold their annual meetings jointly, allowing the young researchers to meet senior scientists and to have a full picture of the latest developments in the field beyond their individual research project. ENVISION and ENTERVISION are both co-ordinated by CERN, and the Laboratory hosts t...

  10. Gene Fusion Markup Language: a prototype for exchanging gene fusion data.

    Science.gov (United States)

    Kalyana-Sundaram, Shanker; Shanmugam, Achiraman; Chinnaiyan, Arul M

    2012-10-16

    An avalanche of next generation sequencing (NGS) studies has generated an unprecedented amount of genomic structural variation data. These studies have also identified many novel gene fusion candidates with more detailed resolution than previously achieved. However, in the excitement and necessity of publishing the observations from this recently developed cutting-edge technology, no community standardization approach has arisen to organize and represent the data with the essential attributes in an interchangeable manner. As transcriptome studies have been widely used for gene fusion discoveries, the current non-standard mode of data representation could potentially impede data accessibility, critical analyses, and further discoveries in the near future. Here we propose a prototype, Gene Fusion Markup Language (GFML) as an initiative to provide a standard format for organizing and representing the significant features of gene fusion data. GFML will offer the advantage of representing the data in a machine-readable format to enable data exchange, automated analysis interpretation, and independent verification. As this database-independent exchange initiative evolves it will further facilitate the formation of related databases, repositories, and analysis tools. The GFML prototype is made available at http://code.google.com/p/gfml-prototype/. The Gene Fusion Markup Language (GFML) presented here could facilitate the development of a standard format for organizing, integrating and representing the significant features of gene fusion data in an inter-operable and query-able fashion that will enable biologically intuitive access to gene fusion findings and expedite functional characterization. A similar model is envisaged for other NGS data analyses.

  11. Reconstruction of CT images by the Bayes- back projection method

    CERN Document Server

    Haruyama, M; Takase, M; Tobita, H

    2002-01-01

    In the course of research on quantitative assay by non-destructive measurement of radioactive waste, the authors have developed a unique program based on Bayesian theory for reconstruction of transmission computed tomography (TCT) images. The reconstruction of cross-section images in CT technology usually employs the Filtered Back Projection method. The new image reconstruction program reported here is based on the Bayesian Back Projection method, and it iteratively improves the image with every measurement step. Namely, this method has the capability of promptly displaying a cross-section image corresponding to each angled projection data from every measurement. Hence, it is possible to observe an improved cross-section view reflecting each projection data in almost real time. From the basic theory of the Bayesian Back Projection method, it can be applied not only to CT types of the 1st, 2nd, and 3rd generations. This report deals with a reconstruction program of cross-section images in the CT of ...

  12. Qualitative and quantitative analysis of reconstructed images using projections with noises

    International Nuclear Information System (INIS)

    Lopes, R.T.; Assis, J.T. de

    1988-01-01

    The reconstruction of a two-dimensional image from one-dimensional projections using the analytic ''convolution method'' algorithm is simulated on a microcomputer. In this work, the effects on the reconstructed image were analysed as a function of the number of projections and the noise level added to the projection data. Qualitative and quantitative (distortion and image noise) comparisons were made between the original image and the reconstructed images. (author) [pt

  13. Optimized image acquisition for breast tomosynthesis in projection and reconstruction space

    International Nuclear Information System (INIS)

    Chawla, Amarpreet S.; Lo, Joseph Y.; Baker, Jay A.; Samei, Ehsan

    2009-01-01

    Breast tomosynthesis has been an exciting new development in the field of breast imaging. While the diagnostic improvement via tomosynthesis is notable, the full potential of tomosynthesis has not yet been realized. This may be attributed to the dependency of the diagnostic quality of tomosynthesis on multiple variables, each of which needs to be optimized. Those include dose, number of angular projections, and the total angular span of those projections. In this study, the authors investigated the effects of these acquisition parameters on the overall diagnostic image quality of breast tomosynthesis in both the projection and reconstruction space. Five mastectomy specimens were imaged using a prototype tomosynthesis system. 25 angular projections of each specimen were acquired at 6.2 times typical single-view clinical dose level. Images at lower dose levels were then simulated using a noise modification routine. Each projection image was supplemented with 84 simulated 3 mm 3D lesions embedded at the center of 84 nonoverlapping ROIs. The projection images were then reconstructed using a filtered backprojection algorithm at different combinations of acquisition parameters to investigate which of the many possible combinations maximizes the performance. Performance was evaluated in terms of a Laguerre-Gauss channelized Hotelling observer model-based measure of lesion detectability. The analysis was also performed without reconstruction by combining the model results from projection images using Bayesian decision fusion algorithm. The effect of acquisition parameters on projection images and reconstructed slices were then compared to derive an optimization rule for tomosynthesis. The results indicated that projection images yield comparable but higher performance than reconstructed images. Both modes, however, offered similar trends: Performance improved with an increase in the total acquisition dose level and the angular span. 
Using a constant dose level and angular

  14. Tomographic image via background subtraction using an x-ray projection image and a priori computed tomography

    International Nuclear Information System (INIS)

    Zhang Jin; Yi Byongyong; Lasio, Giovanni; Suntharalingam, Mohan; Yu, Cedric

    2009-01-01

    Kilovoltage x-ray projection images (kV images for brevity) are increasingly available in image guided radiotherapy (IGRT) for patient positioning. These images are two-dimensional (2D) projections of a three-dimensional (3D) object along the x-ray beam direction. Projecting a 3D object onto a plane may lead to ambiguities in the identification of anatomical structures and to poor contrast in kV images. Therefore, the use of kV images in IGRT is mainly limited to bony landmark alignments. This work proposes a novel subtraction technique that isolates a slice of interest (SOI) from a kV image with the assistance of a priori information from a previous CT scan. The method separates structural information within a preselected SOI by suppressing contributions to the unprocessed projection from out-of-SOI-plane structures. Up to a five-fold increase in the contrast-to-noise ratios (CNRs) was observed in selected regions of the isolated SOI, when compared to the original unprocessed kV image. The tomographic image via background subtraction (TIBS) technique aims to provide a quick snapshot of the slice of interest with greatly enhanced image contrast over conventional kV x-ray projections for fast and accurate image guidance of radiation therapy. With further refinements, TIBS could, in principle, provide real-time tumor localization using gantry-mounted x-ray imaging systems without the need for implanted markers.
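The subtraction at the heart of TIBS can be illustrated with a toy forward model in which a projection is simply the sum of slices along the beam direction; the array shapes and the lesion below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Prior CT: a stack of 2-D slices along the beam direction (axis 0).
ct = rng.random((5, 32, 32))
soi = 2                                   # index of the slice of interest

# Simulated kV projection: line integral through the *current* anatomy,
# where only the SOI has changed since the prior CT (a bright lesion).
current = ct.copy()
current[soi, 10:14, 10:14] += 5.0
kv = current.sum(axis=0)

# DRR of everything except the SOI, computed from the prior CT.
out_of_soi_drr = ct.sum(axis=0) - ct[soi]

# TIBS: subtracting the out-of-plane background isolates the SOI.
soi_estimate = kv - out_of_soi_drr
print(np.allclose(soi_estimate, current[soi]))  # -> True
```

In this idealized model the isolation is exact because the out-of-SOI anatomy is unchanged between the prior CT and the kV acquisition; in practice, registration error and attenuation physics make the subtraction approximate, which is why the paper reports CNR gains rather than exact recovery.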

  15. The basics of CrossRef extensible markup language

    Directory of Open Access Journals (Sweden)

    Rachael Lammey

    2014-08-01

    Full Text Available CrossRef is an association of scholarly publishers that develops shared infrastructure to support more effective scholarly communications. Launched in 2000, CrossRef’s citation-linking network today covers over 68 million journal articles and other content items (book chapters, data, theses, and technical reports) from thousands of scholarly and professional publishers around the globe. CrossRef has over 4,000 member publishers who join in order to avail of a number of CrossRef services, reference linking via the Digital Object Identifier (DOI) being the core service. To deposit CrossRef DOIs, publishers and editors need to become familiar with the basics of extensible markup language (XML). This article will give an introduction to CrossRef XML and what publishers need to do in order to start to deposit DOIs with CrossRef and thus ensure their publications are discoverable and can be linked to consistently in an online environment.
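As a rough sketch of what building deposit XML programmatically looks like: the element names and the DOI below are purely illustrative of the general shape of a deposit, and the real CrossRef deposit schema imposes namespaces, required attributes, and batch metadata not shown here.

```python
import xml.etree.ElementTree as ET

# Illustrative only: element names and the DOI are invented; consult the
# actual CrossRef deposit schema for the required structure.
batch = ET.Element("doi_batch")
article = ET.SubElement(batch, "journal_article")
ET.SubElement(article, "title").text = "An Example Article"
doi_data = ET.SubElement(article, "doi_data")
ET.SubElement(doi_data, "doi").text = "10.12345/example.2014.001"
ET.SubElement(doi_data, "resource").text = "https://example.org/article/1"

xml_bytes = ET.tostring(batch, encoding="utf-8")
print(b"10.12345/example.2014.001" in xml_bytes)  # -> True
```

The essential pairing is the one the article emphasizes: each deposited item links a DOI to a resolvable resource URL, which is what makes reference linking work.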

  16. Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish.

    Directory of Open Access Journals (Sweden)

    Teresa Correia

    Full Text Available Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections, achieving up to 28 times improvement in imaging speed and leading to total acquisition times of a few seconds.

  17. [Managing digital medical imaging projects in healthcare services: lessons learned].

    Science.gov (United States)

    Rojas de la Escalera, D

    2013-01-01

    Medical imaging is one of the most important diagnostic instruments in clinical practice. The technological development of digital medical imaging has enabled healthcare services to undertake large scale projects that require the participation and collaboration of many professionals of varied backgrounds and interests as well as substantial investments in infrastructures. Rather than focusing on systems for dealing with digital medical images, this article deals with the management of projects for implementing these systems, reviewing various organizational, technological, and human factors that are critical to ensure the success of these projects and to guarantee the compatibility and integration of digital medical imaging systems with other health information systems. To this end, the author relates several lessons learned from a review of the literature and the author's own experience in the technical coordination of digital medical imaging projects. Copyright © 2012 SERAM. Published by Elsevier Espana. All rights reserved.

  18. A segmentation algorithm based on image projection for complex text layout

    Science.gov (United States)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    Segmentation is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularities of the target documents, this paper proposes a projection-based layout segmentation algorithm. The algorithm first partitions the text image into several columns; each column is then scanned and projected, dividing the text image into several sub-regions through repeated projection. Experimental results show that this method inherits the rapid calculation speed of projection itself, avoids the effect of arc distortion on page segmentation, and accurately segments text images with complex layouts.
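The multi-pass projection idea (split the page into columns via the vertical projection profile, then split each column into lines via its horizontal profile) can be sketched as follows; the synthetic page and the helper name are invented for illustration:

```python
import numpy as np

def split_on_gaps(profile, min_gap=1):
    """Split a projection profile into (start, end) segments separated by
    runs of at least `min_gap` empty bins (end index is exclusive)."""
    segments, start, gap = [], None, 0
    for i, v in enumerate(profile):
        if v > 0:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:
                segments.append((start, i - gap + 1))
                start = None
    if start is not None:
        segments.append((start, len(profile)))
    return segments

# Synthetic binary page: two columns, three text lines per column.
page = np.zeros((20, 40), dtype=int)
for top in (2, 8, 14):
    page[top:top + 4, 2:15] = 1
    page[top:top + 4, 25:38] = 1

# Pass 1: column sums locate the two columns.
columns = split_on_gaps(page.sum(axis=0))
print(columns)                            # -> [(2, 15), (25, 38)]

# Pass 2: within each column, row sums locate the text lines.
for x0, x1 in columns:
    lines = split_on_gaps(page[:, x0:x1].sum(axis=1))
    print(lines)                          # -> [(2, 6), (8, 12), (14, 18)]
```

The same two-pass scheme extends to more passes (paragraphs within lines, characters within lines) by recursing on each sub-region's profile.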

  19. Extensible Markup Language: How Might It Alter the Software Documentation Process and the Role of the Technical Communicator?

    Science.gov (United States)

    Battalio, John T.

    2002-01-01

    Describes the influence that Extensible Markup Language (XML) will have on the software documentation process and subsequently on the curricula of advanced undergraduate and master's programs in technical communication. Recommends how curricula of advanced undergraduate and master's programs in technical communication ought to change in order to…

  20. Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.

    Science.gov (United States)

    Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J

    2015-08-21

    In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).

  1. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    Science.gov (United States)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs registration over scale and in-plane rotation fully automatically.

  2. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    Science.gov (United States)

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data, and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.

  3. WE-E-18A-11: Fluoro-Tomographic Images From Projections of On-Board Imager (OBI) While Gantry Is Moving

    Energy Technology Data Exchange (ETDEWEB)

    Yi, B; Hu, E; Yu, C; Lasio, G [Univ. of Maryland School Of Medicine, Baltimore, MD (United States)]

    2014-06-15

    Purpose: A method to generate a series of fluoro-tomographic images (FTI) of the slice of interest (SOI) from the projection images of the on-board imager (OBI) while the gantry is moving is developed and tested. Methods: Tomographic imaging via background subtraction (TIBS) has been published by our group. TIBS uses a priori anatomical information from a previous CT scan to isolate an SOI from a planar kV image by factoring out the attenuation by tissues outside the SOI (background). We extended the idea to 4D TIBS, which enables generation of images from projections at different gantry angles. A set of background images for different angles is prepared. A background image at a given gantry angle is subtracted from the projection image at the same angle to generate a TIBS image, which is then converted to a reference angle. The 4D TIBS is the set of TIBS images originating from gantry angles other than the reference angle. Projection images of lung patients acquired for CBCT are used to test the 4D TIBS. Results: Fluoroscopic images of a coronal plane of lung patients were acquired from the CBCT projections at different gantry angles and times. Changes in the morphology of hilar vessels due to breathing and heartbeat are visible in the coronal plane, generated from the set of projection images at gantry angles other than antero-posterior. No breathing surrogate or sorting process is needed. Unlike tomosynthesis, FTI from 4D TIBS maintains the independence of each of the projections and thereby reveals temporal variations within the SOI. Conclusion: FTI, fluoroscopic imaging of an SOI with x-ray projections, generated directly from the x-ray projection images at different gantry angles, was tested with a lung case and proven feasible. This technique can be used for on-line imaging of moving targets. NIH Grant R01CA133539.

  4. Color correction of projected image on color-screen for mobile beam-projector

    Science.gov (United States)

    Son, Chang-Hwan; Sung, Soo-Jin; Ha, Yeong-Ho

    2008-01-01

    With the current trend of digital convergence in mobile phones, mobile manufacturers are researching how to develop a mobile beam-projector to cope with the limitations of a small screen size and to offer a better viewing experience for movies or satellite broadcasting. However, mobile beam-projectors may project an image onto arbitrary surfaces, such as a colored wall or paper, rather than the white screen typically used in an office environment. Thus, a color correction method for the projected image is proposed to achieve good image quality irrespective of the surface color. Initially, the luminance values of the original image, transformed into the YCbCr space, are adjusted to compensate for the spatially nonuniform luminance distribution of the arbitrary surface, based on the pixel values of the surface image captured by the mobile camera. Next, the chromaticity values for the surface and white-screen images are calculated as the ratio of each RGB channel to the sum of the three. Their chromaticity ratios are then multiplied with the converted original image, through an inverse YCbCr transform, to reduce the change in appearance of the projected image caused by spatially varying reflectance on the surface. By projecting the corrected image onto a textured or single-color surface, the quality of the projected image can be brought closer to that obtained on a white screen.
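
    The chromaticity-ratio idea can be sketched as follows. This is an assumed illustration of the ratio step only (each channel's share of the RGB sum), not the authors' implementation, and it omits the YCbCr luminance compensation:

    ```python
    import numpy as np

    def correct_for_surface(original, surface, white):
        """Pre-distort `original` (H x W x 3 float RGB in [0, 1]) so that,
        when projected onto a colored surface, it appears closer to how it
        would look on a white screen.  `surface` and `white` are camera
        captures of the projection surface and of a reference white screen."""
        eps = 1e-6
        # Chromaticity: each channel's share of the total RGB sum.
        chrom_surface = surface.mean(axis=(0, 1))
        chrom_surface = chrom_surface / (chrom_surface.sum() + eps)
        chrom_white = white.mean(axis=(0, 1))
        chrom_white = chrom_white / (chrom_white.sum() + eps)
        # Scale each channel by the white-to-surface chromaticity ratio,
        # dividing the surface's color cast out of the projected image.
        gain = chrom_white / (chrom_surface + eps)
        return np.clip(original * gain, 0.0, 1.0)
    ```

    On a reddish surface this attenuates the red channel of the projected image, so the reflected result looks more neutral.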

  5. Tomography of images with Poisson noise: pre-processing of projections

    International Nuclear Information System (INIS)

    Furuie, S.S.

    1989-01-01

    This work presents an alternative approach to reconstructing images with a low signal-to-noise ratio. Basically, it consists of smoothing the projections while taking into account that the noise is Poisson. These filtered projections are then used to reconstruct the original image by applying the direct Fourier method. The approach is compared with convolution back-projection and EM (expectation-maximization). (author) [pt
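
    One common way to smooth Poisson-noisy projections is to variance-stabilize them with the Anscombe transform, apply an ordinary linear smoother, and invert. This is an assumed illustration; the paper's own filter is not described in the abstract:

    ```python
    import numpy as np

    def smooth_poisson_projection(proj, kernel=(0.25, 0.5, 0.25)):
        """Smooth a 1-D Poisson-noisy projection: Anscombe transform
        (makes the variance approximately constant), linear smoothing,
        then the algebraic inverse transform."""
        a = 2.0 * np.sqrt(np.asarray(proj, dtype=float) + 3.0 / 8.0)  # Anscombe
        a = np.convolve(a, kernel, mode="same")                       # smoothing
        return (a / 2.0) ** 2 - 3.0 / 8.0                             # inverse
    ```

    A constant interior signal passes through unchanged, while noise spikes are damped before the Fourier reconstruction step.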

  6. Projection Operators and Moment Invariants to Image Blurring

    Czech Academy of Sciences Publication Activity Database

    Flusser, Jan; Suk, Tomáš; Boldyš, Jiří; Zitová, Barbara

    2015-01-01

    Roč. 37, č. 4 (2015), s. 786-802 ISSN 0162-8828 R&D Projects: GA ČR GA13-29225S; GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords : Blurred image * N-fold rotation symmetry * projection operators * image moments * moment invariants * blur invariants * object recognition Subject RIV: JD - Computer Applications, Robotics Impact factor: 6.077, year: 2015 http://library.utia.cas.cz/separaty/2014/ZOI/flusser-0434521.pdf

  7. Simulation Experiment Description Markup Language (SED-ML Level 1 Version 3 (L1V3

    Directory of Open Access Journals (Sweden)

    Bergmann Frank T.

    2018-03-01

    Full Text Available The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.

  8. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar

    2018-03-19

    The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
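
    The five parts map onto SED-ML's top-level listOf* containers. A minimal skeleton, assuming the standard container names from the SED-ML Level 1 specification (model modifications, item (ii), are nested as a listOfChanges inside each model element and are omitted here), can be generated as follows:

    ```python
    import xml.etree.ElementTree as ET

    # Build an empty SED-ML document skeleton with the five containers.
    root = ET.Element("sedML", level="1", version="3")
    for container in ("listOfModels",          # (i)  which models to use
                      "listOfSimulations",     # (iii) simulation procedures
                      "listOfTasks",           #      binds models to simulations
                      "listOfDataGenerators",  # (iv) post-processing of results
                      "listOfOutputs"):        # (v)  plots and reports
        ET.SubElement(root, container)
    skeleton = ET.tostring(root, encoding="unicode")
    ```

    A real document would also carry the SED-ML namespace and ids on every element; the skeleton only shows how the abstract's five-part structure is laid out.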

  9. GEOREFERENCED IMAGE SYSTEM WITH DRONES

    Directory of Open Access Journals (Sweden)

    Héctor A. Pérez-Sánchez

    2017-07-01

    Full Text Available The general purpose of this paper is the development and implementation of a system that allows the generation of flight routes for a drone, the acquisition of geographic location (GPS) information during the flight, and the taking of photographs of points of interest for creating georeferenced images, which in turn are used to generate KML (Keyhole Markup Language) files for the representation of geographical data in three dimensions to be displayed in the Google Earth tool.
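
    A minimal sketch of the KML-generation step, using only the Python standard library. The element names follow the OGC KML 2.2 schema, while the photo name and coordinates are invented examples:

    ```python
    import xml.etree.ElementTree as ET

    KML_NS = "http://www.opengis.net/kml/2.2"

    def photo_placemark(name, lon, lat, alt):
        """Return a KML document string with one georeferenced placemark."""
        kml = ET.Element("{%s}kml" % KML_NS)
        doc = ET.SubElement(kml, "{%s}Document" % KML_NS)
        pm = ET.SubElement(doc, "{%s}Placemark" % KML_NS)
        ET.SubElement(pm, "{%s}name" % KML_NS).text = name
        point = ET.SubElement(pm, "{%s}Point" % KML_NS)
        # KML orders coordinates as longitude,latitude,altitude.
        ET.SubElement(point, "{%s}coordinates" % KML_NS).text = (
            "%f,%f,%f" % (lon, lat, alt))
        return ET.tostring(kml, encoding="unicode")

    doc = photo_placemark("IMG_0001", -99.133, 19.433, 80.0)
    ```

    Writing the returned string to a `.kml` file makes the photo location directly openable in Google Earth.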

  10. Chemical Markup, XML, and the World Wide Web. 7. CMLSpect, an XML vocabulary for spectral data.

    Science.gov (United States)

    Kuhn, Stefan; Helmus, Tobias; Lancashire, Robert J; Murray-Rust, Peter; Rzepa, Henry S; Steinbeck, Christoph; Willighagen, Egon L

    2007-01-01

    CMLSpect is an extension of Chemical Markup Language (CML) for managing spectral and other analytical data. It is designed to be flexible enough to contain a wide variety of spectral data. The paper describes the CMLElements used and gives practical examples for common types of spectra. In addition it demonstrates how different views of the data can be expressed and what problems still exist.

  11. Image-projection ion-beam lithography

    International Nuclear Information System (INIS)

    Miller, P.A.

    1989-01-01

    Image-projection ion-beam lithography is an attractive alternative for submicron patterning because it may provide high throughput; it uses demagnification to gain advantages in reticle fabrication, inspection, and lifetime; and it enjoys the precise deposition characteristics of ions which cause essentially no collateral damage. This lithographic option involves extracting low-mass ions (e.g., He+) from a plasma source, transmitting the ions at low voltage through a stencil reticle, and then accelerating and focusing the ions electrostatically onto a resist-coated wafer. While the advantages of this technology have been demonstrated experimentally by the work of IMS (Austria), many difficulties still impede extension of the technology to the high-volume production of microelectronic devices. We report a computational study of a lithography system designed to address problem areas in field size, telecentricity, and chromatic and geometric aberration. We present a novel ion-column-design approach and conceptual ion-source and column designs which address these issues. We find that image-projection ion-beam technology should in principle meet high-volume-production requirements. The technical success of our present relatively compact-column design requires that a glow-discharge-based ion source (or equivalent cold source) be developed and that moderate further improvement in geometric aberration levels be obtained. Our system requires that image predistortion be employed during reticle fabrication to overcome distortion due to residual image nonlinearity and space-charge forces. This constitutes a software data preparation step, as do correcting for distortions in electron lithography columns and performing proximity-effect corrections. Areas needing further fundamental work are identified.

  12. Computerization of guidelines: towards a "guideline markup language".

    Science.gov (United States)

    Dart, T; Xu, Y; Chatellier, G; Degoulet, P

    2001-01-01

    Medical decision making is one of the most difficult daily tasks for physicians. Guidelines have been designed to reduce variance between physicians in daily practice, to improve patient outcomes and to control costs. In fact, few physicians use guidelines in daily practice. A way to ease the use of guidelines is to implement computerised guidelines (computer reminders). We present in this paper a method of computerising guidelines. Our objectives were: 1) to propose a generic model that can be instantiated for any specific guidelines; 2) to use eXtensible Markup Language (XML) as a guideline representation language to instantiate the generic model for a specific guideline. Our model is an object representation of a clinical algorithm, it has been validated by running two different guidelines issued by a French official Agency. In spite of some limitations, we found that this model is expressive enough to represent complex guidelines devoted to diabetes and hypertension management. We conclude that XML can be used as a description format to structure guidelines and as an interface between paper-based guidelines and computer applications.

  13. Pharmacometrics Markup Language (PharmML): Opening New Perspectives for Model Exchange in Drug Development

    Science.gov (United States)

    Swat, MJ; Moodie, S; Wimalaratne, SM; Kristensen, NR; Lavielle, M; Mari, A; Magni, P; Smith, MK; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, AC; Kaye, R; Keizer, R; Kloft, C; Kok, JN; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, HB; Parra-Guillen, ZP; Plan, E; Ribba, B; Smith, G; Trocóniz, IF; Yvon, F; Milligan, PA; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N

    2015-01-01

    The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps. PMID:26225259

  14. Pharmacometrics Markup Language (PharmML): Opening New Perspectives for Model Exchange in Drug Development.

    Science.gov (United States)

    Swat, M J; Moodie, S; Wimalaratne, S M; Kristensen, N R; Lavielle, M; Mari, A; Magni, P; Smith, M K; Bizzotto, R; Pasotti, L; Mezzalana, E; Comets, E; Sarr, C; Terranova, N; Blaudez, E; Chan, P; Chard, J; Chatel, K; Chenel, M; Edwards, D; Franklin, C; Giorgino, T; Glont, M; Girard, P; Grenon, P; Harling, K; Hooker, A C; Kaye, R; Keizer, R; Kloft, C; Kok, J N; Kokash, N; Laibe, C; Laveille, C; Lestini, G; Mentré, F; Munafo, A; Nordgren, R; Nyberg, H B; Parra-Guillen, Z P; Plan, E; Ribba, B; Smith, G; Trocóniz, I F; Yvon, F; Milligan, P A; Harnisch, L; Karlsson, M; Hermjakob, H; Le Novère, N

    2015-06-01

    The lack of a common exchange format for mathematical models in pharmacometrics has been a long-standing problem. Such a format has the potential to increase productivity and analysis quality, simplify the handling of complex workflows, ensure reproducibility of research, and facilitate the reuse of existing model resources. Pharmacometrics Markup Language (PharmML), currently under development by the Drug Disease Model Resources (DDMoRe) consortium, is intended to become an exchange standard in pharmacometrics by providing means to encode models, trial designs, and modeling steps.

  15. A methodology to annotate systems biology markup language models with the synthetic biology open language.

    Science.gov (United States)

    Roehner, Nicholas; Myers, Chris J

    2014-02-21

    Recently, we have begun to witness the potential of synthetic biology, noted here in the form of bacteria and yeast that have been genetically engineered to produce biofuels, manufacture drug precursors, and even invade tumor cells. The success of these projects, however, has often failed in translation and application to new projects, a problem exacerbated by a lack of engineering standards that combine descriptions of the structure and function of DNA. To address this need, this paper describes a methodology to connect the systems biology markup language (SBML) to the synthetic biology open language (SBOL), existing standards that describe biochemical models and DNA components, respectively. Our methodology involves first annotating SBML model elements such as species and reactions with SBOL DNA components. A graph is then constructed from the model, with vertices corresponding to elements within the model and edges corresponding to the cause-and-effect relationships between these elements. Lastly, the graph is traversed to assemble the annotating DNA components into a composite DNA component, which is used to annotate the model itself and can be referenced by other composite models and DNA components. In this way, our methodology can be used to build up a hierarchical library of models annotated with DNA components. Such a library is a useful input to any future genetic technology mapping algorithm that would automate the process of composing DNA components to satisfy a behavioral specification. Our methodology for SBML-to-SBOL annotation is implemented in the latest version of our genetic design automation (GDA) software tool, iBioSim.
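
    The annotation-collection step can be illustrated with a toy sketch: model elements carry SBOL DNA-component annotations, edges encode the cause-and-effect relationships, and a traversal from a start element assembles the list of parts for the composite component. The data structures and part IDs here are illustrative, not the iBioSim API:

    ```python
    def assemble_composite(annotations, edges, start):
        """annotations: element -> DNA component ID;
        edges: element -> list of downstream elements (cause-and-effect)."""
        seen, parts, stack = set(), [], [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if node in annotations:
                parts.append(annotations[node])     # collect this element's part
            # push successors in reverse so they pop in listed order
            stack.extend(reversed(edges.get(node, [])))
        return parts

    # Promoter -> RBS -> CDS chain annotated with BioBrick part IDs.
    composite = assemble_composite(
        {"promoter": "BBa_R0040", "rbs": "BBa_B0034", "cds": "BBa_E0040"},
        {"promoter": ["rbs"], "rbs": ["cds"]},
        "promoter")
    ```

    The ordered part list is what would then be wrapped into a composite SBOL DNA component and attached back to the model.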

  16. Discriminating Projections for Estimating Face Age in Wild Images

    Energy Technology Data Exchange (ETDEWEB)

    Tokola, Ryan A [ORNL]; Bolme, David S [ORNL]; Ricanek, Karl [ORNL]; Barstow, Del R [ORNL]; Boehnen, Chris Bensing [ORNL]

    2014-01-01

    We introduce a novel approach to estimating the age of a human from a single uncontrolled image. Current face age estimation algorithms work well in highly controlled images, and some are robust to changes in illumination, but it is usually assumed that images are close to frontal. This bias is clearly seen in the datasets that are commonly used to evaluate age estimation, which either entirely or mostly consist of frontal images. Using pose-specific projections, our algorithm maps image features into a pose-insensitive latent space that is discriminative with respect to age. Age estimation is then performed using a multi-class SVM. We show that our approach outperforms other published results on the Images of Groups dataset, which is the only age-related dataset with a non-trivial number of off-axis face images, and that we are competitive with recent age estimation algorithms on the mostly-frontal FG-NET dataset. We also experimentally demonstrate that our feature projections introduce insensitivity to pose.

  17. The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.

    Science.gov (United States)

    Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi

    2005-04-15

    Bioinformatics resources for glycomics are very poor as compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html kikuchi@hydra.mki.co.jp.

  18. Image/patient registration from (partial) projection data by the Fourier phase matching method

    International Nuclear Information System (INIS)

    Weiguo Lu; You, J.

    1999-01-01

    A technique for 2D or 3D image/patient registration, PFPM (projection based Fourier phase matching method), is proposed. This technique provides image/patient registration directly from sequential tomographic projection data. The method can also deal with image files by generating 2D Radon transforms slice by slice. The registration in projection space is done by calculating a Fourier invariant (FI) descriptor for each one-dimensional projection datum, and then registering the FI descriptor by the Fourier phase matching (FPM) method. The algorithm has been tested on both synthetic and experimental data. When dealing with translated, rotated and uniformly scaled 2D image registration, the performance of the PFPM method is comparable to that of the IFPM (image based Fourier phase matching) method in robustness, efficiency, insensitivity to the offset between images, and registration time. The advantages of the former are that subpixel resolution is feasible, and it is more insensitive to image noise due to the averaging effect of the projection acquisition. Furthermore, the PFPM method offers the ability to generalize to 3D image/patient registration and to register partial projection data. By applying patient registration directly from tomographic projection data, image reconstruction is not needed in the therapy set-up verification, thus reducing computational time and artefacts. In addition, real time registration is feasible. Registration from partial projection data meets the geometry and dose requirements in many application cases and makes dynamic set-up verification possible in tomotherapy. (author)
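
    The phase-matching idea can be illustrated with plain 1-D phase correlation, a simplified stand-in for the paper's FI descriptors, which recovers the relative shift between two projections from the phase of their cross-power spectrum:

    ```python
    import numpy as np

    def shift_between(p1, p2):
        """Estimate the cyclic shift s with p1 ≈ np.roll(p2, s) by
        Fourier phase correlation of two 1-D projections."""
        F1, F2 = np.fft.fft(p1), np.fft.fft(p2)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12      # keep only the phase difference
        corr = np.fft.ifft(cross).real      # delta peak at the relative shift
        shift = int(np.argmax(corr))
        n = len(p1)
        return shift - n if shift > n // 2 else shift   # map to signed shift
    ```

    Because the spectrum is normalized to unit magnitude, the result depends only on phase, which is what makes the matching robust to intensity offsets between the projections.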

  19. The "New Oxford English Dictionary" Project.

    Science.gov (United States)

    Fawcett, Heather

    1993-01-01

    Describes the conversion of the 22,000-page Oxford English Dictionary to an electronic version incorporating a modified Standard Generalized Markup Language (SGML) syntax. Explains that the database designers chose structured markup because it supports users' data searching needs, allows textual components to be extracted or modified, and allows…

  20. Invisibility cloak with image projection capability.

    Science.gov (United States)

    Banerjee, Debasish; Ji, Chengang; Iizuka, Hideo

    2016-12-13

    Investigations of invisibility cloaks have been led by rigorous theories and such cloak structures, in general, require extreme material parameters. Consequently, it is challenging to realize them, particularly in the full visible region. Due to the insensitivity of human eyes to the polarization and phase of light, cloaking a large object in the full visible region has been recently realized by a simplified theory. Here, we experimentally demonstrate a device concept where a large object can be concealed in a cloak structure and at the same time any images can be projected through it by utilizing a distinctively different approach; the cloaking via one polarization and the image projection via the other orthogonal polarization. Our device structure consists of commercially available optical components such as polarizers and mirrors, and therefore, provides a significant further step towards practical application scenarios such as transparent devices and see-through displays.

  1. An Attempt to Construct a Database of Photographic Data of Radiolarian Fossils with the Hypertext Markup Language

    OpenAIRE

    磯貝, 芳徳; 水谷, 伸治郎; Yoshinori, Isogai; Shinjiro, Mizutani

    1998-01-01

    A collection of scanning electron microscope photographs of radiolarian fossils was compiled into a database using the Hypertext Markup Language. The database currently holds about one thousand photographs of radiolarian fossils, which can be searched from various viewpoints such as fossil name, geological age, and locality of excavation. The construction of this database demonstrates that the Hypertext Markup Language is effective when ordinary researchers, without special skills in computers or databases, wish to build a database of their own by themselves. Moreover, a notable feature of a database built with the Hypertext Markup Language is that anyone can use it via the Internet. The construction process is described and the current status reported, followed by a discussion of the ideas behind the database and its remaining problems…

  2. Dual scan CT image recovery from truncated projections

    Science.gov (United States)

    Sarkar, Shubhabrata; Wahi, Pankaj; Munshi, Prabhat

    2017-12-01

    There are computerized tomography (CT) scanners available commercially for imaging small objects and they are often categorized as mini-CT X-ray machines. One major limitation of these machines is their inability to scan large objects with good image quality because of the truncation of projection data. An algorithm is proposed in this work which enables such machines to scan large objects while maintaining the quality of the recovered image.

  3. Object-Image Correspondence for Algebraic Curves under Projections

    Directory of Open Access Journals (Sweden)

    Joseph M. Burdis

    2013-03-01

    Full Text Available We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of a number of real parameters that need to be eliminated in order to establish existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of signature construction that has been used to solve the equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

  4. Quantitative imaging studies with PET VI. Project II

    International Nuclear Information System (INIS)

    Copper, M.; Chen, C.T.; Yasillo, N.; Gatley, J.; Ortega, C.; DeJesus, O.; Friedman, A.

    1985-01-01

    This project is focused upon the development of hardware and software to improve PET image analysis and upon clinical applications of PET. In this report the laboratory's progress on various attenuation correction methods for brain imaging is described. The use of time-of-flight information for image reconstruction is evaluated. The location of dopamine D1 and D2 receptors in the brain was found to be largely in the basal ganglia. 1 tab. (DT)

  5. Using Extensible Markup Language (XML) for the Single Source Delivery of Educational Resources by Print and Online: A Case Study

    Science.gov (United States)

    Walsh, Lucas

    2007-01-01

    This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…

  6. The Van Sant AVHRR image projected onto a rhombicosidodecahedron

    Science.gov (United States)

    Baron, Michael; Morain, Stan

    1996-03-01

    IDEATION, a design and development corporation, Santa Fe, New Mexico, has modeled Tom Van Sant's "The Earth From Space" image to a rhombicosidodecahedron. "The Earth from Space" image, produced by the Geosphere® Project in Santa Monica, California, was developed from hundreds of AVHRR pictures and published as a Mercator projection. IDEATION, utilizing a digitized Robinson Projection, fitted the image to foldable paper components which, when interconnected by means of a unique tab system, result in a rhombicosidodecahedron representation of the Earth exposing 30 square, 20 triangular, and 12 pentagonal faces. Because the resulting model is not spherical, the borders of the represented features were rectified to match the intersecting planes of the model's faces. The resulting product will be licensed and commercially produced for use by elementary and secondary students. Market research indicates the model will be used in both the demonstration of geometric principles and the teaching of fundamental spatial relations of the Earth's lands and oceans.

  7. Survey of on-road image projection with pixel light systems

    Science.gov (United States)

    Rizvi, Sadiq; Knöchelmann, Marvin; Ley, Peer-Phillip; Lachmayer, Roland

    2017-12-01

    HID, LED, and laser-based high-resolution automotive headlamps, lately known as 'pixel light systems', are at the forefront of the developing technologies paving the way for autonomous driving. In addition to light distribution capabilities that outperform Adaptive Front Lighting and Matrix Beam systems, pixel light systems provide the possibility of projecting images directly onto the street. The underlying objective is to improve the driving experience, in any given scenario, in terms of safety, comfort, and interaction for all road users. The focus of this work is to conduct a short survey of this state-of-the-art image projection functionality. Holistic research on the image projection functionality can be divided into three major categories: scenario selection, technological development, and evaluation design. Consequently, the work presented in this paper is divided into three short studies. Section 1 provides a brief introduction to pixel light systems and a justification for the approach adopted for this study. Section 2 deals with the selection of scenarios (and driving maneuvers) where image projection can play a critical role. Section 3 discusses high-power LED and LED-array-based prototypes that are currently under development. Section 4 demonstrates results from an experiment conducted to evaluate the illuminance of an image projected using a pixel light system prototype developed at the Institute of Product Development (IPeG). Findings from this work can help to identify and advance future research relating to the further development of pixel light systems, scenario planning, examination of optimal light sources, and behavioral response studies.

  8. Correction of projective distortion in long-image-sequence mosaics without prior information

    Science.gov (United States)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually makes it impossible to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on an analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor, which is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and, more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is
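
The scale-normalization step described above can be sketched in Python/NumPy. This is an illustrative reading of the idea (estimating the overall scale of the affine model from the singular values of its linear part and resetting it to 1), not the authors' implementation; the function name and the sample transform are hypothetical.

```python
import numpy as np

def remove_affine_scaling(H):
    """Reset the overall scale of a 2x3 affine transform to 1.

    The scale factor is estimated as the geometric mean of the
    singular values of the 2x2 linear part (hypothetical helper,
    sketching the idea of resetting the affine scale).
    """
    A = H[:2, :2]
    s = np.linalg.svd(A, compute_uv=False)
    scale = np.sqrt(s[0] * s[1])          # overall scaling factor
    H_fixed = H.copy()
    H_fixed[:2, :2] = A / scale           # linear part now has unit scale
    return H_fixed, scale

# A slight uniform shrink (scale 0.95) combined with a small rotation
theta = np.deg2rad(2.0)
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
H = np.hstack([A, np.array([[3.0], [1.0]])])   # 2x3 affine matrix
H_fixed, scale = remove_affine_scaling(H)
print(round(scale, 4))   # close to 0.95
```

After normalization the linear part has unit determinant, so repeated compositions no longer shrink the pasted frames.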

  9. cluML: A markup language for clustering and cluster validity assessment of microarray data.

    Science.gov (United States)

    Bolshakova, Nadia; Cunningham, Pádraig

    2005-01-01

    cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as the inability to store multiple clusterings (including biclusterings) and validation results for a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can be used equally well for representing clustering and validation results for other biomedical and physical data.

  10. A Converter from the Systems Biology Markup Language to the Synthetic Biology Open Language.

    Science.gov (United States)

    Nguyen, Tramy; Roehner, Nicholas; Zundel, Zach; Myers, Chris J

    2016-06-17

    Standards are important to synthetic biology because they enable exchange and reproducibility of genetic designs. This paper describes a procedure for converting between two standards: the Systems Biology Markup Language (SBML) and the Synthetic Biology Open Language (SBOL). SBML is a standard for behavioral models of biological systems at the molecular level. SBOL describes structural and basic qualitative behavioral aspects of a biological design. Converting SBML to SBOL enables a consistent connection between behavioral and structural information for a biological design. The conversion process described in this paper leverages Systems Biology Ontology (SBO) annotations to enable inference of a design's qualitative function.

  11. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    International Nuclear Information System (INIS)

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-01

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. 
The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
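
Assuming, for brevity, a cone axis perpendicular to the image plane (the paper handles general orientations, where the intersection is a conic rather than a circle), the per-slice distance-and-threshold idea can be sketched in Python/NumPy. The function name, grid, and threshold value are illustrative, not the authors' code.

```python
import numpy as np

def backproject_cone_slice(apex, half_angle, z0, grid, threshold):
    """Binary intersection of a Compton cone with the plane z = z0.

    Simplified sketch: with the cone axis along z, the true
    intersection is a circle of radius (z0 - apex_z) * tan(half_angle);
    pixels whose distance to that circle is below `threshold` are set
    to 1, mimicking the threshold-based curve extraction.
    """
    xs, ys = grid
    r_true = (z0 - apex[2]) * np.tan(half_angle)
    dist = np.abs(np.hypot(xs - apex[0], ys - apex[1]) - r_true)
    return (dist < threshold).astype(np.uint8)

n = 65
coords = np.linspace(-32, 32, n)
xs, ys = np.meshgrid(coords, coords)
img = backproject_cone_slice(apex=(0.0, 0.0, 0.0),
                             half_angle=np.deg2rad(30),
                             z0=20.0, grid=(xs, ys), threshold=1.0)
print(img.sum() > 0)   # a ring of pixels is extracted in this slice
```

Summing such binary curves over all slices and all detector events yields the three-dimensional back-projection image described above.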

  12. Improvement of image quality using interpolated projection data estimation method in SPECT

    International Nuclear Information System (INIS)

    Takaki, Akihiro; Soma, Tsutomu; Murase, Kenya; Kojima, Akihiro; Asao, Kimie; Kamada, Shinya; Matsumoto, Masanori

    2009-01-01

    General data acquisition for single photon emission computed tomography (SPECT) is performed in 90 or 60 directions, with a coarse pitch of approximately 4-6 deg over a rotation of 360 deg or 180 deg, using a gamma camera. Under these circumstances, no data between adjacent projections are sampled. The aim of the study was to develop a method to improve SPECT image quality by generating the missing projection data through interpolation of data acquired with a coarse pitch such as 6 deg. The projection data set at each individual degree in 360 directions was generated by a weighted-average interpolation method from the projection data acquired with a coarse sampling angle (interpolated projection data estimation processing method, IPDE method). The IPDE method was applied to numerical digital phantom data, actual phantom data, and clinical brain data with Tc-99m ethyl cysteinate dimer (ECD). All SPECT images were reconstructed by the filtered back-projection method and compared with the original SPECT images. The results confirmed that streak artifacts decreased, owing to the apparent increase in the number of samples after interpolation, and that the signal-to-noise (S/N) ratio in terms of the root-mean-square uncertainty value improved. Furthermore, the normalized mean square error values relative to the standard images remained similar after interpolation, and the contrast and concentration ratios improved. These results indicate that effective improvement of image quality can be expected with interpolation. Thus, image quality and the ability to depict images can be improved while maintaining the present acquisition time, and the improvement can be retained even if the acquisition time is reduced. (author)
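
A minimal NumPy sketch of the weighted-average interpolation idea, assuming a full 360-degree circular acquisition; the exact weighting used by the IPDE method may differ, so treat this as an illustration only.

```python
import numpy as np

def interpolate_projections(coarse, fine_factor):
    """Generate intermediate projections by angle-weighted averaging.

    coarse: array (n_views, n_bins) sampled at a coarse angular pitch.
    fine_factor: number of fine steps per coarse step (e.g. 6 to go
    from a 6-degree to a 1-degree pitch). Each missing view is a
    weighted average of its two coarse neighbours; the acquisition is
    assumed circular (360 degrees), so the last view wraps to the first.
    """
    n_views, n_bins = coarse.shape
    fine = np.empty((n_views * fine_factor, n_bins))
    for i in range(n_views):
        nxt = (i + 1) % n_views          # wrap around 360 degrees
        for k in range(fine_factor):
            w = k / fine_factor          # fractional angular position
            fine[i * fine_factor + k] = (1 - w) * coarse[i] + w * coarse[nxt]
    return fine

coarse = np.array([[0.0, 10.0], [6.0, 4.0], [0.0, 10.0], [6.0, 4.0]])
fine = interpolate_projections(coarse, 3)
print(fine.shape)        # (12, 2)
print(fine[1])           # one third of the way from view 0 to view 1
```

The densified sinogram can then be fed to filtered back-projection exactly as in the study.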

  13. The semantics of Chemical Markup Language (CML) for computational chemistry: CompChem

    Directory of Open Access Journals (Sweden)

    Phadungsukanan Weerapong

    2012-08-01

    This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.

  14. The semantics of Chemical Markup Language (CML) for computational chemistry: CompChem.

    Science.gov (United States)

    Phadungsukanan, Weerapong; Kraft, Markus; Townsend, Joe A; Murray-Rust, Peter

    2012-08-07

    This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.

  15. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use kernel versions of these transforms; the kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.
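
A minimal NumPy sketch of kernel principal component projection, one of the kernel subspace methods compared above: build an RBF Gram matrix, double-centre it, and project onto the leading eigenvectors. The gamma value and the synthetic "spectra" are illustrative, not the paper's settings.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    """Project rows of X onto the leading kernel principal components.

    Minimal RBF-kernel PCA sketch (not the authors' implementation):
    the training-set scores are K_c @ (v / sqrt(lambda)) for the top
    eigenpairs (lambda, v) of the centred Gram matrix K_c.
    """
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(vals[idx])
    return Kc @ alphas

rng = np.random.default_rng(0)
spectra = rng.normal(size=(50, 8))        # toy stand-in for NIR spectra
scores = kernel_pca(spectra, n_components=2, gamma=0.05)
print(scores.shape)   # (50, 2)
```

The kernel maximum autocorrelation factor transform favoured by the authors additionally weights the decomposition by spatial autocorrelation, which this sketch omits.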

  16. Multi-example feature-constrained back-projection method for image super-resolution

    Institute of Scientific and Technical Information of China (English)

    Junlei Zhang; Dianguang Gai; Xin Zhang; Xuemei Li

    2017-01-01

    Example-based super-resolution algorithms, which predict unknown high-resolution image information using a relationship model learnt from known high- and low-resolution image pairs, have attracted considerable interest in the field of image processing. In this paper, we propose a multi-example feature-constrained back-projection method for image super-resolution. Firstly, we take advantage of a feature-constrained polynomial interpolation method to enlarge the low-resolution image. Next, we consider low-frequency images of different resolutions to provide an example pair. Then, we use adaptive kNN search to find similar patches in the low-resolution image for every image patch in the high-resolution low-frequency image, leading to a regression model between similar patches being learnt. The learnt model is applied to the low-resolution high-frequency image to produce high-resolution high-frequency information. An iterative back-projection algorithm is used as the final step to determine the final high-resolution image. Experimental results demonstrate that our method improves the visual quality of the high-resolution image.
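
The final iterative back-projection step is a classic refinement loop; a minimal NumPy sketch under a simple average-pooling camera model (the paper's learned regression stages and its particular interpolation are omitted, and all names here are illustrative):

```python
import numpy as np

def downsample(img, f):
    """Average-pool by factor f (a simple stand-in for the camera model)."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f):
    """Nearest-neighbour zoom by factor f."""
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

def iterative_back_projection(lr, f, n_iter=20, step=1.0):
    """Refine an HR estimate so its simulated LR image matches the input."""
    hr = upsample(lr, f)                       # initial estimate
    for _ in range(n_iter):
        err = lr - downsample(hr, f)           # residual in LR space
        hr = hr + step * upsample(err, f)      # back-project the error
    return hr

lr = np.array([[1.0, 2.0], [3.0, 4.0]])
hr = iterative_back_projection(lr, f=2)
print(np.abs(downsample(hr, 2) - lr).max() < 1e-6)   # consistent with LR input
```

The loop enforces consistency with the observed low-resolution image regardless of how the initial high-resolution estimate was produced.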

  17. Lessons in scientific data interoperability: XML and the eMinerals project.

    Science.gov (United States)

    White, T O H; Bruin, R P; Chiang, G-T; Dove, M T; Tyer, R P; Walker, A M

    2009-03-13

    A collaborative environmental eScience project produces a broad range of data, notable as much for its diversity, in source and format, as for its quantity. We find that Extensible Markup Language (XML) and associated technologies are invaluable in managing this deluge of data. We describe FoX, a toolkit for allowing Fortran codes to read and write XML, thus allowing existing scientific tools to be easily re-used in an XML-centric workflow.

  18. Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique.

    Science.gov (United States)

    Besharati Tabrizi, Leila; Mahvash, Mehran

    2015-07-01

    An augmented reality system has been developed for image-guided neurosurgery to project images with regions of interest onto the patient's head, skull, or brain surface in real time. The aim of this study was to evaluate system accuracy and to perform the first intraoperative application. Images of segmented brain tumors of different localizations and sizes were created in 10 cases and projected onto a head phantom using a video projector. Registration was performed using 5 fiducial markers. After each registration, the distance of the 5 fiducial markers from the visualized tumor borders was measured on the virtual image and on the phantom; the difference was considered the projection error. Moreover, the image projection technique was applied intraoperatively in 5 patients and compared with a standard navigation system. Augmented reality visualization of the tumors succeeded in all cases. The mean time for registration was 3.8 minutes (range 2-7 minutes). The mean projection error was 0.8 ± 0.25 mm. There were no significant differences in accuracy according to the localization and size of the tumor. Clinical feasibility and reliability of the augmented reality system were demonstrated intraoperatively in 5 patients (projection error 1.2 ± 0.54 mm). The augmented reality system is accurate and reliable for the intraoperative projection of images onto the head, skull, and brain surface. The ergonomic advantage of this technique improves the planning of neurosurgical procedures and enables the surgeon to use direct visualization for image-guided neurosurgery.

  19. Intensity-based bayesian framework for image reconstruction from sparse projection data

    International Nuclear Information System (INIS)

    Rashed, E.A.; Kudo, Hiroyuki

    2009-01-01

    This paper presents a Bayesian framework for iterative image reconstruction from projection data measured over a limited number of views. The classical Nyquist sampling rule yields the minimum number of projection views required for accurate reconstruction. However, challenges exist in many medical and industrial imaging applications in which the projection data is undersampled. Classical analytical reconstruction methods such as filtered backprojection (FBP) are not a good choice for use in such cases because the data undersampling in the angular range introduces aliasing and streak artifacts that degrade lesion detectability. In this paper, we propose a Bayesian framework for maximum likelihood-expectation maximization (ML-EM)-based iterative reconstruction methods that incorporates a priori knowledge obtained from expected intensity information. The proposed framework is based on the fact that, in tomographic imaging, it is often possible to expect a set of intensity values of the reconstructed object with relatively high accuracy. The image reconstruction cost function is modified to include the l1-norm distance to the a priori known information. The proposed method has the potential to regularize the solution to reduce artifacts without missing lesions that cannot be expected from the a priori information. Numerical studies showed a significant improvement in image quality and lesion detectability under the condition of highly undersampled projection data. (author)
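
The unregularized ML-EM baseline that the proposed framework modifies can be sketched in NumPy. The toy system matrix below is illustrative; the paper's contribution, an l1-norm penalty toward a-priori expected intensities, is only noted in a comment and not implemented here.

```python
import numpy as np

def ml_em(A, y, n_iter=100):
    """Classic ML-EM update for emission tomography.

    A: system matrix (n_detectors x n_voxels), y: measured counts.
    Update: x <- x / (A^T 1) * A^T (y / (A x)). This is the
    unregularized baseline; the paper adds an l1 penalty toward
    a-priori expected intensity values on top of it.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12               # avoid division by zero
        x = x / sens * (A.T @ (y / proj))
    return x

# Tiny 2-voxel, 3-ray toy system with noise-free data
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true
x_hat = ml_em(A, y, n_iter=200)
print(np.round(x_hat, 3))   # converges toward [2. 5.]
```

Note the multiplicative update preserves non-negativity, which is why ML-EM remains usable when the angular range is undersampled and FBP breaks down.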

  20. Combinational pixel-by-pixel and object-level classifying, segmenting, and agglomerating in performing quantitative image analysis that distinguishes between healthy non-cancerous and cancerous cell nuclei and delineates nuclear, cytoplasm, and stromal material objects from stained biological tissue materials

    Science.gov (United States)

    Boucheron, Laura E

    2013-07-16

    Quantitative object and spatial arrangement-level analysis of tissue is detailed using expert (pathologist) input to guide the classification process. A two-step method is disclosed for imaging tissue: classifying one or more biological materials, e.g. nuclei, cytoplasm, and stroma, in the tissue into one or more identified classes on a pixel-by-pixel basis, and segmenting the identified classes to agglomerate one or more sets of identified pixels into segmented regions. Typically, the one or more biological materials comprise nuclear material, cytoplasm material, and stromal material. The method further allows a user to mark up the image subsequent to the classification to re-classify said materials. The markup is performed via a graphical user interface to edit designated regions in the image.

  1. Color image quality in projection displays: a case study

    Science.gov (United States)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still common for users of projection displays to find that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjovik University College was tested under four different conditions: dark and light room, with and without an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflection and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors become smaller and the colors appear more correct. For one device, the average ΔE*ab color difference compared to a relative white reference was reduced from 22 to 11; for another, from 13 to 6. Blue colors have the largest variations among the projection displays and makes them
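
The ΔE*ab figures above are CIE 1976 color differences, i.e. Euclidean distances in CIELAB. A minimal sketch with illustrative Lab values (not the paper's measurements):

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE 1976 colour difference: Euclidean distance in CIELAB."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Intended vs. measured patch (illustrative values, not from the study)
intended = (60.0, 10.0, -20.0)
measured = (62.0, 16.0, -12.0)
print(round(delta_e_ab(intended, measured), 2))   # 10.2
```

A ΔE*ab around 1 is near the visibility threshold, so average differences of 11-22 as reported above correspond to clearly visible color errors.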

  2. Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement.

    Science.gov (United States)

    Li, Dong; Kofman, Jonathan

    2014-04-21

    In fringe-projection 3D surface-shape measurement, image saturation results in incorrect intensities in captured images of fringe patterns, leading to phase and measurement errors. An adaptive fringe-pattern projection (AFPP) method was developed to adapt the maximum input gray level in projected fringe patterns to the local reflectivity of an object surface being measured. The AFPP method demonstrated improved 3D measurement accuracy by avoiding image saturation in highly-reflective surface regions while maintaining high intensity modulation across the entire surface. The AFPP method can avoid image saturation and handle varying surface reflectivity, using only two prior rounds of fringe-pattern projection and image capture to generate the adapted fringe patterns.
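
A hypothetical NumPy sketch of the per-pixel adaptation idea, assuming the captured intensity scales linearly with the projected input gray level; the AFPP paper's actual procedure uses two prior projection rounds and a region-wise scheme, so names and values here are illustrative only.

```python
import numpy as np

def adapt_max_gray(captured, saturation=250, full_level=255):
    """Scale the projector's maximum input gray level per pixel.

    captured: intensities observed when projecting at full level.
    For each pixel, choose the largest input level expected to keep
    the captured intensity below the camera's saturation threshold,
    assuming a linear projector-camera response (an assumption of
    this sketch, not a claim about the AFPP method).
    """
    captured = np.asarray(captured, dtype=float)
    scale = np.minimum(1.0, saturation / np.maximum(captured, 1.0))
    return np.clip(np.rint(full_level * scale), 0, full_level).astype(np.uint8)

# Captured intensities from a prior full-level projection; 255 = saturated
prior = np.array([[120.0, 255.0], [200.0, 80.0]])
adapted = adapt_max_gray(prior)
print(adapted)   # only the saturated pixel gets a reduced input level
```

Projecting the fringe patterns with these adapted maxima avoids saturation in highly reflective regions while leaving the rest of the surface at full modulation.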

  3. Fluorescence guided lymph node biopsy in large animals using direct image projection device

    Science.gov (United States)

    Ringhausen, Elizabeth; Wang, Tylon; Pitts, Jonathan; Akers, Walter J.

    2016-03-01

    The use of fluorescence imaging for aiding oncologic surgery is a fast-growing field in biomedical imaging, revolutionizing open and minimally invasive surgery practices. We have designed, constructed, and tested a system for fluorescence image acquisition and direct display on the surgical field for fluorescence-guided surgery. The system uses a near-infrared-sensitive CMOS camera for image acquisition, a near-infrared LED light source for excitation, and a DLP digital projector for projection of fluorescence image data onto the operating field in real time. Instrument control was implemented in Matlab for image capture, processing of acquired data, and alignment of image parameters with the projected pattern. Accuracy of alignment was evaluated statistically to demonstrate sensitivity to small objects and alignment throughout the imaging field. After verification of accurate alignment, feasibility for clinical application was demonstrated in large animal models of sentinel lymph node biopsy. Indocyanine green was injected subcutaneously in Yorkshire pigs at various locations to model sentinel lymph node biopsy in gynecologic cancers, head and neck cancer, and melanoma. Fluorescence was detected by the camera system during operations and projected onto the imaging field, accurately identifying tissues containing the fluorescent tracer at up to 15 frames per second. Fluorescence information was projected as binary green regions after thresholding and denoising raw intensity data. Promising results with this initial clinical-scale prototype are encouraging for the feasibility of optical projection of acquired luminescence during open oncologic surgeries.

  4. WE-G-BRF-04: Robust Real-Time Volumetric Imaging Based On One Single Projection

    International Nuclear Information System (INIS)

    Xu, Y; Yan, H; Ouyang, L; Wang, J; Jiang, S; Jia, X; Zhou, L

    2014-01-01

    Purpose: Real-time volumetric imaging is highly desirable to provide instantaneous image guidance for lung radiation therapy. This study proposes a scheme to achieve this goal from one single projection by utilizing sparse learning and a principal component analysis (PCA) based lung motion model. Methods: A patient-specific PCA-based lung motion model is first constructed by analyzing deformable vector fields (DVFs) between a reference image and 4DCT images at each phase. At the training stage, we “learn” the relationship between the DVFs and the projection using sparse learning. Specifically, we first partition the projections into patches, and then apply sparse learning to automatically identify patches that best correlate with the principal components of the DVFs. Once the relationship is established, at the application stage, we first employ a patch-based intensity correction method to overcome the problem of differing intensity scales between the calculated projection in the training stage and the measured projection in the application stage. The corrected projection image is then fed to the trained model to derive a DVF, which is applied to the reference image, yielding a volumetric image corresponding to the projection. We have validated our method through an NCAT phantom simulation case and one experimental case. Results: Sparse learning can automatically select those patches containing motion information, such as those around the diaphragm. For the simulation case, over 98% of the lung region passes the generalized gamma test (10HU/1mm), indicating combined accuracy in both the intensity and spatial domains. For the experimental case, the average tumor localization errors projected to the imager are 0.68 mm and 0.4 mm in the axial and tangential directions, respectively. Conclusion: The proposed method is capable of accurately generating a volumetric image using one single projection. It will potentially offer real-time volumetric image guidance to facilitate lung

  5. WE-G-BRF-04: Robust Real-Time Volumetric Imaging Based On One Single Projection

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Yan, H; Ouyang, L; Wang, J; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)

    2014-06-15

    Purpose: Real-time volumetric imaging is highly desirable to provide instantaneous image guidance for lung radiation therapy. This study proposes a scheme to achieve this goal from one single projection by utilizing sparse learning and a principal component analysis (PCA) based lung motion model. Methods: A patient-specific PCA-based lung motion model is first constructed by analyzing deformable vector fields (DVFs) between a reference image and 4DCT images at each phase. At the training stage, we “learn” the relationship between the DVFs and the projection using sparse learning. Specifically, we first partition the projections into patches, and then apply sparse learning to automatically identify patches that best correlate with the principal components of the DVFs. Once the relationship is established, at the application stage, we first employ a patch-based intensity correction method to overcome the problem of differing intensity scales between the calculated projection in the training stage and the measured projection in the application stage. The corrected projection image is then fed to the trained model to derive a DVF, which is applied to the reference image, yielding a volumetric image corresponding to the projection. We have validated our method through an NCAT phantom simulation case and one experimental case. Results: Sparse learning can automatically select those patches containing motion information, such as those around the diaphragm. For the simulation case, over 98% of the lung region passes the generalized gamma test (10HU/1mm), indicating combined accuracy in both the intensity and spatial domains. For the experimental case, the average tumor localization errors projected to the imager are 0.68 mm and 0.4 mm in the axial and tangential directions, respectively. Conclusion: The proposed method is capable of accurately generating a volumetric image using one single projection. It will potentially offer real-time volumetric image guidance to facilitate lung
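
The PCA motion model underlying the method above can be sketched in NumPy: learn a mean and a few principal modes from flattened DVFs, then synthesize a new DVF from low-dimensional coefficients. Names and the toy dimensions are illustrative; the sparse-learning regression that maps projection patches to coefficients is omitted.

```python
import numpy as np

def build_pca_motion_model(dvfs, n_modes=2):
    """Fit a PCA model to a set of flattened deformation vector fields.

    dvfs: array (n_phases, n_params), each row a DVF relative to the
    reference phase. Returns the mean DVF and the leading modes, so a
    new DVF can be synthesized as mean + coefficients @ modes.
    """
    mean = dvfs.mean(axis=0)
    _, _, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def synthesize_dvf(mean, modes, coeffs):
    """Reconstruct a DVF from low-dimensional PCA coefficients."""
    return mean + np.asarray(coeffs) @ modes

rng = np.random.default_rng(1)
phases = rng.normal(size=(10, 6))            # toy 10-phase, 6-parameter DVFs
mean, modes = build_pca_motion_model(phases, n_modes=2)
coeffs = (phases[0] - mean) @ modes.T        # project phase 0 onto the modes
approx = synthesize_dvf(mean, modes, coeffs)
print(approx.shape)   # (6,)
```

At application time only the handful of coefficients must be estimated from the single projection, which is what makes real-time volumetric imaging feasible.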

  6. Direct image reconstruction with limited angle projection data for computerized tomography

    International Nuclear Information System (INIS)

    Inouye, T.

    1980-01-01

    The minimum angular range of projection data necessary to reconstruct a complete CT image is discussed. As is easily shown from the image reconstruction theorem, a missing projection angle range provides no data for the Fourier transform of the object in the corresponding angular directions. In a normal situation, the Fourier transform of an object image is analytic with respect to the two-dimensional orthogonal parameters. This property makes it possible to uniquely extend the function outside the measured region by a form of analytic continuation with respect to both parameters. In the method reported here, an object pattern confined within a finite range is shifted to a specified region so as to have complete orthogonal function expansions without changing the projection angle directions. These orthogonal functions are analytically extended to the missing projection angle range, and the whole function is determined. This method does not include any estimation process, whose effectiveness is often seriously jeopardized by the presence of slight fluctuation components. Computer simulations were carried out to demonstrate the effectiveness of the method

  7. Banking mark-up, distributive conflict and capacity utilization: a post-Keynesian macrodynamics

    Directory of Open Access Journals (Sweden)

    Lima Gilberto Tadeu

    2003-01-01

    A post-Keynesian macrodynamic model of capacity utilization, distribution, and conflict inflation is developed, in which the supply of credit money is endogenous. The nominal interest rate is determined by applying a mark-up to the base rate set by the monetary authority. Over time, the banking mark-up varies with the rate of profit on physical capital, while the base rate varies with excess demand that cannot be accommodated through capacity utilization. The cases in which demand is and is not sufficient to generate full capacity utilization are analyzed.

  8. Parametric image reconstruction using spectral analysis of PET projection data

    International Nuclear Information System (INIS)

    Meikle, Steven R.; Matthews, Julian C.; Cunningham, Vincent J.; Bailey, Dale L.; Livieratos, Lefteris; Jones, Terry; Price, Pat

    1998-01-01

    Spectral analysis is a general modelling approach that enables calculation of parametric images from reconstructed tracer kinetic data independent of an assumed compartmental structure. We investigated the validity of applying spectral analysis directly to projection data, motivated by the advantages that: (i) the number of reconstructions is reduced by an order of magnitude and (ii) iterative reconstruction becomes practical, which may improve signal-to-noise ratio (SNR). A dynamic software phantom with typical 2-[11C]thymidine kinetics was used to compare projection-based and image-based methods and to assess bias-variance trade-offs using iterative expectation maximization (EM) reconstruction. We found that the two approaches are not exactly equivalent due to properties of the non-negative least-squares algorithm. However, the differences are small (for K1 and, to a lesser extent, VD). The optimal number of EM iterations was 15-30, with up to a two-fold improvement in SNR over filtered back projection. We conclude that projection-based spectral analysis with EM reconstruction yields accurate parametric images with high SNR and has potential application to a wide range of positron emission tomography ligands. (author)
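    Spectral analysis models the tissue time-activity curve as the plasma input function convolved with a non-negative sum of exponentials drawn from a fixed grid. A minimal numpy sketch of the basis-fitting step (ordinary least squares here, whereas spectral analysis proper enforces non-negativity with NNLS; the input function and grid below are invented):

```python
import numpy as np

t = np.linspace(0.0, 60.0, 61)              # minutes
cp = np.exp(-0.2 * t) * t                   # toy plasma input function
dt = t[1] - t[0]

# Basis: the input function convolved with exponentials from a fixed grid.
betas = np.array([0.01, 0.1, 0.5])
basis = np.stack(
    [np.convolve(cp, np.exp(-b * t))[: len(t)] * dt for b in betas], axis=1
)

# Synthetic noise-free tissue curve built from two of the basis components.
alpha_true = np.array([0.3, 0.0, 0.7])
tissue = basis @ alpha_true

# Least-squares fit of the spectral coefficients.
alpha, *_ = np.linalg.lstsq(basis, tissue, rcond=None)
assert np.allclose(alpha, alpha_true, atol=1e-6)
```

    Parameters such as K1 and VD are then simple functions of the recovered coefficients; applying the fit to each projection bin rather than each voxel is the change the paper investigates.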

  9. Extensions to the Dynamic Aerospace Vehicle Exchange Markup Language

    Science.gov (United States)

    Brian, Geoffrey J.; Jackson, E. Bruce

    2011-01-01

    The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) is a syntactical language for exchanging flight vehicle dynamic model data. It provides a framework for encoding entire flight vehicle dynamic model data packages for exchange and/or long-term archiving. Version 2.0.1 of DAVE-ML provides much of the functionality envisioned for exchanging aerospace vehicle data; however, it is limited to supporting only scalar, time-independent data. Additional functionality is required to support vector and matrix data, abstract sub-system models, detail dynamic system models (both discrete and continuous), and define a dynamic data format (such as time-sequenced data) for validation of dynamic system models and vehicle simulation packages. Extensions to DAVE-ML have been proposed to manage data as vectors and n-dimensional matrices, and to record dynamic data in a compatible form. These capabilities will improve the clarity of data being exchanged, simplify the naming of parameters, and permit static and dynamic data to be stored using a common syntax within a single file, thereby enhancing the framework provided by DAVE-ML for exchanging entire flight vehicle dynamic simulation models.

  10. Integrated variable projection approach (IVAPA) for parallel magnetic resonance imaging.

    Science.gov (United States)

    Zhang, Qiao; Sheng, Jinhua

    2012-10-01

    Parallel magnetic resonance imaging (pMRI) is a fast imaging method that requires algorithms for reconstructing the image from a small number of measured k-space lines. The accurate estimation of the coil sensitivity functions is still a challenging problem in parallel imaging. The joint estimation of the coil sensitivity functions and the desired image has recently been proposed to improve the situation by iteratively optimizing both the coil sensitivity functions and the image reconstruction, regarding both the coil sensitivities and the desired image as unknowns to be solved for jointly. In this paper, we propose an integrated variable projection approach (IVAPA) for pMRI, which integrates two individual processing steps (coil sensitivity estimation and image reconstruction) into a single processing step to improve the accuracy of the coil sensitivity estimation using the variable projection approach. The method is demonstrated to give an optimal solution with considerably reduced artifacts for high reduction factors and a low number of auto-calibration signal (ACS) lines, and our implementation has a fast convergence rate. The performance of the proposed method is evaluated using a set of in vivo experimental data. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Cross Cultural Images: The ETSU/NAU Special Photography Project.

    Science.gov (United States)

    Montgomery, Donna; Sluss, Dorothy; Lewis, Jamie; Vervelde, Peggy; Prater, Greg; Minner, Sam

    Recreation is a significant part of a full and rich life but is frequently overlooked in relation to handicapped children. A project called Cross-Cultural Images aimed to improve the quality of life for handicapped children by teaching them avocational photography skills. The project involved mildly handicapped children aged 7-11 in Appalachia, on…

  12. Automatically Generating a Distributed 3D Battlespace Using USMTF and XML-MTF Air Tasking Order, Extensible Markup Language (XML) and Virtual Reality Modeling Language (VRML)

    National Research Council Canada - National Science Library

    Murray, Mark

    2000-01-01

    .... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...

  13. Information and image integration: project spectrum

    Science.gov (United States)

    Blaine, G. James; Jost, R. Gilbert; Martin, Lori; Weiss, David A.; Lehmann, Ron; Fritz, Kevin

    1998-07-01

    The BJC Health System (BJC) and the Washington University School of Medicine (WUSM) formed a technology alliance with industry collaborators to develop and implement an integrated, advanced clinical information system. The industry collaborators include IBM, Kodak, SBC and Motorola. The activity, called Project Spectrum, provides an integrated clinical repository for the multiple hospital facilities of the BJC. The BJC system consists of 12 acute care hospitals serving over one million patients in Missouri and Illinois. An interface engine manages transactions from each of the hospital information systems, lab systems and radiology information systems. Data are normalized to provide a consistent view for the primary care physician. Access to the clinical repository is supported by web-based server/browser technology which delivers patient data to the physician's desktop. An HL7-based messaging system coordinates the acquisition and management of radiological image data and sends image keys to the clinical data repository. The clinical chart browser currently provides access to radiology reports, laboratory data, vital signs and transcribed medical reports. A chart metaphor provides tabs for selecting the clinical record for review. Activating the radiology tab presents a standardized view of radiology reports together with an icon used to initiate retrieval of available radiology images. Selecting the image icon spawns an image browser plug-in and uses the image key from the clinical repository to access the image server for the requested image data. The Spectrum system is collecting clinical data from five hospital systems and imaging data from two hospitals. Domain-specific radiology imaging systems support the acquisition and primary interpretation of radiology exams. The Spectrum clinical workstations are deployed to over 200 sites utilizing local area networks and ISDN connectivity.

  14. Treating metadata as annotations: separating the content markup from the content

    Directory of Open Access Journals (Sweden)

    Fredrik Paulsson

    2007-11-01

    Full Text Available The use of digital learning resources creates an increasing need for semantic metadata, describing the whole resource as well as parts of resources. Traditionally, schemas such as the Text Encoding Initiative (TEI) have been used to add semantic markup for parts of resources. This is not sufficient for use in a "metadata ecology", where metadata is distributed, conforms to different Application Profiles, and is added by different actors. A new methodology is proposed, where metadata is "pointed in" as annotations using XPointers and RDF. A suggestion for how such an infrastructure can be implemented, using existing open standards for metadata and for the web, is presented. We argue that such a methodology and infrastructure are necessary to realize the decentralized metadata infrastructure needed for a "metadata ecology".

  15. Coalescence measurements for evolving foams monitored by real-time projection imaging

    International Nuclear Information System (INIS)

    Myagotin, A; Helfen, L; Baumbach, T

    2009-01-01

    Real-time radiographic projection imaging, together with novel spatio-temporal image analysis, is presented as a powerful technique for the quantitative analysis of coalescence processes accompanying the generation and temporal evolution of foams and emulsions. Coalescence events can be identified as discontinuities in a spatio-temporal image representing a sequence of projection images. Detection, identification of the intensity, and localization of the discontinuities exploit a violation criterion of the Fourier shift theorem and are based on recursive spatio-temporal image partitioning. The proposed method is suited for automated measurement of discontinuity rates (i.e., discontinuity intensity per unit time), so that large series of radiographs can be analyzed without user intervention. The application potential is demonstrated by the quantification of coalescence during the formation and decay of metal foams monitored by real-time x-ray radiography.
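    The Fourier shift theorem says that a pure shift between two frames changes only the spectral phase; a coalescence event violates that model. The violation criterion can be illustrated with phase correlation in numpy (a one-dimensional toy, not the paper's recursive partitioning scheme):

```python
import numpy as np

rng = np.random.default_rng(1)
frame0 = rng.random(64)
true_shift = 5
frame1 = np.roll(frame0, true_shift)    # smooth foam motion modeled as a shift

# Phase correlation: the normalized cross-power spectrum of a pure shift has
# phase exp(-2*pi*i*k*shift/N), whose inverse FFT is a delta at the shift.
F0, F1 = np.fft.fft(frame0), np.fft.fft(frame1)
cross = F1 * np.conj(F0)
cross /= np.abs(cross) + 1e-12
corr = np.fft.ifft(cross).real
est_shift = int(np.argmax(corr))
peak = corr[est_shift]                  # close to 1.0 for a pure shift

assert est_shift == true_shift and peak > 0.9
```

    When a coalescence event occurs between the two frames, the pure-shift model breaks down and the correlation peak drops, flagging a discontinuity in the spatio-temporal image.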

  16. Data dictionary services in XNAT and the Human Connectome Project

    Science.gov (United States)

    Herrick, Rick; McKay, Michael; Olsen, Timothy; Horton, William; Florida, Mark; Moore, Charles J.; Marcus, Daniel S.

    2014-01-01

    The XNAT informatics platform is an open source data management tool used by biomedical imaging researchers around the world. An important feature of XNAT is its highly extensible architecture: users of XNAT can add new data types to the system to capture the imaging and phenotypic data generated in their studies. Until recently, XNAT has had limited capacity to broadcast the meaning of these data extensions to users, other XNAT installations, and other software. We have implemented a data dictionary service for XNAT, which is currently being used on ConnectomeDB, the Human Connectome Project (HCP) public data sharing website. The data dictionary service provides a framework to define key relationships between data elements and structures across the XNAT installation. This includes not just core data representing medical imaging data or subject or patient evaluations, but also taxonomical structures, security relationships, subject groups, and research protocols. The data dictionary allows users to define metadata for data structures and their properties, such as value types (e.g., textual, integers, floats) and valid value templates, ranges, or field lists. The service provides compatibility and integration with other research data management services by enabling easy migration of XNAT data to standards-based formats such as the Resource Description Framework (RDF), JavaScript Object Notation (JSON), and Extensible Markup Language (XML). It also facilitates the conversion of XNAT's native data schema into standard neuroimaging vocabularies and structures. PMID:25071542

  17. Reconstruction of tomographic image from x-ray projections of a few views

    International Nuclear Information System (INIS)

    Kobayashi, Fujio; Yamaguchi, Shoichiro

    1982-01-01

    Computer tomographs have progressed rapidly, and in the latest high-performance types the photographing time has been shortened to less than 5 s, but clear images of hearts have not yet been obtained. The X-ray tomographs used so far irradiate X-rays from many directions and measure the projected data; by limiting the projection directions to a small number, this study aimed to shorten the X-ray photographing time and to reduce X-ray exposure. In this paper, a method is proposed by which tomographic images are reconstructed from projected data in a small number of directions by a generalized inverse matrix penalty method, a calculation method newly devised by the authors for this purpose. It is a kind of nonlinear programming method with a constraint condition imposed through a generalized inverse matrix, and it is characterized by a simple calculation procedure and rapid convergence. Moreover, the effect on reconstructed images when errors are included in the projected data was examined. A simple computer simulation reconstructing tomographic images from projected data in four directions was performed, and the usefulness of the method was confirmed. It contributes to the development of superhigh-speed tomographs in the future. (Kako, I.)
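    The generalized-inverse ingredient can be illustrated on a toy system: reconstructing a tiny image from only two "views" (row sums and column sums) via the Moore-Penrose pseudoinverse. This is only a minimum-norm least-squares sketch, not the authors' penalty method:

```python
import numpy as np

n = 3
x_true = np.arange(1.0, n * n + 1)          # flattened 3x3 "object"

# System matrix for two projection directions: row sums and column sums.
rows = np.kron(np.eye(n), np.ones((1, n)))  # each equation sums one image row
cols = np.kron(np.ones((1, n)), np.eye(n))  # each equation sums one image column
A = np.vstack([rows, cols])                 # 6 equations, 9 unknowns
b = A @ x_true

# Minimum-norm solution via the Moore-Penrose generalized inverse.
x_rec = np.linalg.pinv(A) @ b
assert np.allclose(A @ x_rec, b)            # consistent with both projections
```

    Because six equations cannot determine nine pixels, x_rec matches the projections but generally not x_true; the paper's penalty method adds constraints precisely to select a better solution among the many projection-consistent ones.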

  18. A study of images of Projective Angles of pulmonary veins

    Energy Technology Data Exchange (ETDEWEB)

    Wang Jue [Beijing Anzhen Hospital, Beijing (China); Zhaoqi, Zhang [Beijing Anzhen Hospital, Beijing (China)], E-mail: zhaoqi5000@vip.sohu.com; Yu Wei; Miao Cuilian; Yan Zixu; Zhao Yike [Beijing Anzhen Hospital, Beijing (China)

    2009-09-15

    Aims: In magnetic resonance and computed tomography (CT) images there are visible angles between the pulmonary veins and the coronal, transversal or sagittal sections of the body. In this study these angles are measured and defined as Projective Angles of the pulmonary veins. Several possible influential factors and distribution characteristics are studied and analyzed for a better understanding of this imaging anatomic character of the pulmonary veins. It could form the anatomic basis for correctly adjusting the angle of the central X-ray in angiography of the pulmonary veins during catheter ablation of atrial fibrillation (AF). Method: Contrast-enhanced magnetic resonance angiography (CEMRA) and contrast-enhanced computed tomography (CECT) images of the left atrium and pulmonary veins of 137 healthy subjects and patients with atrial fibrillation (AF) were post-processed, and the Projective Angles to the coronal and transversal sections were measured and analyzed statistically. Result: Projective Angles of the pulmonary veins are a real and stable imaging anatomic characteristic of the pulmonary veins. The statistical distribution of the variables is relatively concentrated, with a fairly good representation by the average value. It is possible to improve the angle of the central X-ray according to the average value in selective angiography of the pulmonary veins during catheter ablation of AF.

  19. Kinematic Measurement of Knee Prosthesis from Single-Plane Projection Images

    Science.gov (United States)

    Hirokawa, Shunji; Ariyoshi, Shogo; Takahashi, Kenji; Maruyama, Koichi

    In this paper, the measurement of 3D motion from 2D perspective projections of a knee prosthesis is described. The technique reported by Banks and Hodge was further developed in this study. The estimation was performed in two steps: the first step assumed orthogonal projection, and the second step then refined the result using the perspective projection to accomplish a more accurate estimate. The simulation results demonstrated that the technique achieved sufficient accuracy of position/orientation estimation for prosthetic kinematics. We then applied the algorithm to CCD images, thereby examining the influence on estimation accuracy of various artifacts possibly introduced through the imaging process. We found that accuracy in the experiment was influenced mainly by geometric discrepancies between the prosthesis component and the computer-generated model, and by spatial inconsistencies between the coordinate axes of the positioner and those of the computer model. However, we verified that the algorithm could achieve proper and consistent estimation even for the CCD images.

  20. Modeling of the positioning system and visual mark-up of historical cadastral maps

    Directory of Open Access Journals (Sweden)

    Tomislav Jakopec

    2013-03-01

    Full Text Available The aim of the paper is to present the possibilities of positioning and visual mark-up of historical cadastral maps onto Google Maps using open source software. The corpus is stored in the Croatian State Archives in Zagreb, in the Maps Archive for Croatia and Slavonia. It is part of the cadastral documentation from the first cadastral survey conducted in the Kingdom of Croatia and Slavonia from 1847 to 1877, and it is used extensively according to data provided by the customer service of the Croatian State Archives. User needs on the one side and the possibilities of innovative implementation of ICT on the other have motivated the development of a system which uses digital copies of the original cadastral maps and connects them with systems like Google Maps, thus both protecting the original materials and opening up new avenues of research related to the use of the originals. With this aim in mind, two cadastral map presentation models have been created. Firstly, there is a detailed display of the original, which enables viewing with dynamic zooming. Secondly, an interactive display is facilitated by blending the cadastral maps with Google Maps, establishing links between the coordinates of the digital and original plans through transformation. The transparency of the original can be changed, and the user can intensify the visibility of the underlying layer (Google map) or the top layer (cadastral map), which enables direct insight into parcel dynamics over a longer time-span. The system also allows for the mark-up of cadastral maps, which can lead to the development of a cumulative index of all terms found on the maps.
The paper is an example of the implementation of ICT for providing new services, strengthening cooperation with the interested public and related institutions, familiarizing the public with the archival material, and offering new possibilities for
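    The coordinate transformation that links a scanned map to Google Maps can, in the simplest case, be an affine transform estimated from ground control points; the pixel and geographic coordinates below are invented, and the paper's actual transformation may differ:

```python
import numpy as np

# Control points: (pixel x, pixel y) on the scanned map -> (lon, lat).
pix = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 800.0]])
geo = np.array([[16.0, 45.5], [16.1, 45.5], [16.0, 45.42]])

# Affine model geo = [px, py, 1] @ params, solved by least squares.
A = np.hstack([pix, np.ones((3, 1))])       # append 1 for the translation term
params, *_ = np.linalg.lstsq(A, geo, rcond=None)

def to_geo(px, py):
    """Map a scanned-map pixel to geographic coordinates."""
    return np.array([px, py, 1.0]) @ params

assert np.allclose(to_geo(1000.0, 0.0), [16.1, 45.5])   # hits a control point
assert np.allclose(to_geo(500.0, 400.0), [16.05, 45.46])
```

    With more than three control points the same least-squares fit averages out digitization error, which is how georeferencing tools typically handle scanned historical maps.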

  1. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

    Science.gov (United States)

    Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-06-01

    To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17-0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
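    The PCA step can be sketched with numpy's SVD: the N-1 DVFs are compressed to a few eigenvectors, so any DVF (and hence any volumetric image) is parameterized by a short coefficient vector. Toy data, not the paper's phantom:

```python
import numpy as np

rng = np.random.default_rng(2)
n_phases, n_voxels = 9, 300                # toy: 9 DVFs, 300-component vectors

# Synthetic DVFs that truly live in a 2-D subspace around their mean.
modes = rng.random((2, n_voxels))
weights = rng.random((n_phases, 2))
dvfs = weights @ modes                     # each row: one breathing phase's DVF

# PCA via SVD of the mean-centered DVF matrix.
mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
k = 2
eigvecs = Vt[:k]                           # principal DVF components

# Any phase's DVF reduces to the mean plus k coefficients times the eigvecs.
coeffs = (dvfs - mean) @ eigvecs.T
recon = mean + coeffs @ eigvecs
assert np.allclose(recon, dvfs)
```

    In the paper the few coefficients are then optimized online so that the projection of the warped reference volume matches the single measured x-ray image.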

  2. Automatically Generating a Distributed 3D Virtual Battlespace Using USMTF and XML-MTF Air Tasking Orders, Extensible Markup Language (XML) and Virtual Reality Modeling Language (VRML)

    National Research Council Canada - National Science Library

    Murray, Mark

    2000-01-01

    .... To more effectively exchange and share data, the Defense Information Systems Agency (DISA), the lead agency for the USMTF, is actively engaged in extending the USMTF standard with a new data sharing technology called Extensible Markup Language (XML...

  3. National Land Imaging Requirements (NLIR) Pilot Project summary report: summary of moderate resolution imaging user requirements

    Science.gov (United States)

    Vadnais, Carolyn; Stensaas, Gregory

    2014-01-01

    Under the National Land Imaging Requirements (NLIR) Project, the U.S. Geological Survey (USGS) is developing a functional capability to obtain, characterize, manage, maintain and prioritize all Earth observing (EO) land remote sensing user requirements. The goal is a better understanding of community needs that can be supported with land remote sensing resources, and a means to match needs with appropriate solutions in an effective and efficient way. The NLIR Project is composed of two components. The first component is focused on the development of the Earth Observation Requirements Evaluation System (EORES) to capture, store and analyze user requirements, whereas the second component comprises the mechanism and processes to elicit and document the user requirements that will populate the EORES. To develop the second component, the requirements elicitation methodology was exercised and refined through a pilot project conducted from June to September 2013. The pilot project focused specifically on applications and user requirements for moderate resolution imagery (5–120 meter resolution) as the test case for requirements development. The purpose of this summary report is to provide a high-level overview of the requirements elicitation process that was exercised through the pilot project and an early analysis of the moderate resolution imaging user requirements acquired to date to support ongoing USGS sustainable land imaging study needs. The pilot project engaged a limited set of Federal Government users from the operational and research communities, and therefore the information captured represents only a subset of all land imaging user requirements. However, based on a comparison of results, trends, and analysis, the pilot captured a strong baseline of typical application areas and user needs for moderate resolution imagery. Because these results are preliminary and represent only a sample of users and application areas, the information from this report should only

  4. Assessing natural hazard risk using images and data

    Science.gov (United States)

    Mccullough, H. L.; Dunbar, P. K.; Varner, J. D.; Mungov, G.

    2012-12-01

    Photographs and other visual media provide valuable pre- and post-event data for natural hazard assessment. Scientific research, mitigation, and forecasting rely on visual data for risk analysis, inundation mapping and historical records. Instrumental data reveal only a portion of the whole story; photographs explicitly illustrate the physical and societal impacts of an event. Visual data are increasing rapidly as portable high-resolution cameras and video recorders become more widely available. Incorporating these data into archives ensures a more complete historical account of events. Integrating natural hazards data, such as tsunami, earthquake and volcanic eruption events, socio-economic information, and tsunami deposits and runups, along with images and photographs enhances event comprehension. Global historical databases at NOAA's National Geophysical Data Center (NGDC) consolidate these data, providing the user with easy access to a network of information. NGDC's Natural Hazards Image Database (ngdc.noaa.gov/hazardimages) was recently improved to provide a more efficient and dynamic user interface. It uses the Google Maps API and Keyhole Markup Language (KML) to provide geographic context to the images and events. Descriptive tags, or keywords, have been applied to each image, enabling easier navigation and discovery. In addition, the Natural Hazards Map Viewer (maps.ngdc.noaa.gov/viewers/hazards) provides the ability to search and browse data layers on a Mercator-projection globe with a variety of map backgrounds. This combination of features creates a simple and effective way to enhance our understanding of hazard events and risks using imagery.
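    A KML placemark like those used to georeference such images can be generated with Python's standard-library ElementTree; the event name and coordinates below are invented for illustration:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

kml = ET.Element(f"{{{KML_NS}}}kml")
doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
# Hypothetical hazard photo record.
ET.SubElement(pm, f"{{{KML_NS}}}name").text = "Tsunami damage photo (example)"
point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
# KML coordinate order is lon,lat[,alt].
ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "-163.5,53.3,0"

xml_bytes = ET.tostring(kml, encoding="utf-8", xml_declaration=True)
```

    The resulting file can be loaded directly into Google Earth or served to a Google Maps based viewer, which is the role KML plays in the image database described above.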

  5. Image reconstruction from multiple fan-beam projections

    International Nuclear Information System (INIS)

    Jelinek, J.; Overton, T.R.

    1984-01-01

    Special-purpose third-generation fan-beam CT systems can be greatly simplified by limiting the number of detectors, but this requires a different mode of data collection to provide a set of projections appropriate to the required spatial resolution in the reconstructed image. Repeated rotation of the source-detector fan, combined with a shift of the detector array and perhaps an offset of the source with respect to the fan's axis after each 360° rotation (cycle), provides a fairly general pattern of projection-space filling. The authors investigated the problem of optimal data-collection geometry for a multiple-rotation fan-beam scanner and of the corresponding reconstruction algorithm.

  6. Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data

    Science.gov (United States)

    Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.

    2013-05-01

    Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios including single spot measures, static time-series, web-based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable, machine-readable, and does not require a dedicated parser. In addition, LANML is flexible, ensuring that future extensions of the format will remain backward compatible with analysis software. XML technology is at the heart of communicating over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web-based applications, educational outreach and efficient collaboration between research groups.

  7. Fluorescence In Situ Hybridization (FISH) Signal Analysis Using Automated Generated Projection Images

    Directory of Open Access Journals (Sweden)

    Xingwei Wang

    2012-01-01

    Full Text Available Fluorescence in situ hybridization (FISH) tests provide promising molecular imaging biomarkers to detect and diagnose cancers and genetic disorders more accurately and reliably. Since current manual FISH signal analysis is inefficient and inconsistent, which limits its clinical utility, the development of automated FISH image scanning systems and computer-aided detection (CAD) schemes has been attracting research interest. To acquire high-resolution FISH images in a multi-spectral scanning mode, a huge amount of image data, comprising stacks of multiple three-dimensional (3-D) image slices, is generated from a single specimen. Automated preprocessing of these scanned images to eliminate non-useful and redundant data is important to make automated FISH tests acceptable in clinical applications. In this study, a dual-detector fluorescence image scanning system was applied to scan four specimen slides with FISH-probed chromosome X. A CAD scheme was developed to detect analyzable interphase cells and map the multiple imaging slices recording FISH-probed signals into 2-D projection images. The CAD scheme was then applied to each projection image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm, identify FISH-probed signals using a top-hat transform, and compute the ratios between normal and abnormal cells. To assess CAD performance, the FISH-probed signals were also independently detected visually by an observer. The Kappa coefficients for agreement between CAD and the observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots in the four testing samples. The study demonstrated the feasibility of automated FISH signal analysis by applying a CAD scheme to the automatically generated 2-D projection images.
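    The two image-processing steps named in the abstract, a maximum intensity projection across the slice stack followed by a white top-hat transform to isolate small bright FISH spots, can be sketched with numpy alone (a real CAD scheme would use a proper morphology library and calibrated structuring elements):

```python
import numpy as np

def filt3x3(img, reduce_fn):
    """3x3 sliding-window min or max filter with edge padding."""
    p = np.pad(img, 1, mode="edge")
    shifts = [p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)]
    return reduce_fn(np.stack(shifts), axis=0)

# Toy stack: 4 z-slices with one bright FISH-like spot in one slice.
stack = np.zeros((4, 9, 9))
stack[2, 4, 4] = 1.0

mip = stack.max(axis=0)                     # 2-D maximum intensity projection

# White top-hat: image minus its morphological opening keeps small bright spots
# while suppressing the smooth background.
opened = filt3x3(filt3x3(mip, np.min), np.max)
tophat = mip - opened

assert tophat[4, 4] == 1.0 and tophat.sum() == 1.0
```

    Thresholding the top-hat image then yields candidate FISH signal spots, which the CAD scheme counts per cell.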

  8. Visual Interpretation with Three-Dimensional Annotations (VITA): three-dimensional image interpretation tool for radiological reporting.

    Science.gov (United States)

    Roy, Sharmili; Brown, Michael S; Shih, George L

    2014-02-01

    This paper introduces a software framework called Visual Interpretation with Three-Dimensional Annotations (VITA) that is able to automatically generate three-dimensional (3D) visual summaries based on radiological annotations made during routine exam reporting. VITA summaries are in the form of rotating 3D volumes where radiological annotations are highlighted to place important clinical observations into a 3D context. The rendered volume is produced as a Digital Imaging and Communications in Medicine (DICOM) object and is automatically added to the study for archival in Picture Archiving and Communication System (PACS). In addition, a video summary (e.g., MPEG4) can be generated for sharing with patients and for situations where DICOM viewers are not readily available to referring physicians. The current version of VITA is compatible with ClearCanvas; however, VITA can work with any PACS workstation that has a structured annotation implementation (e.g., Extensible Markup Language, Health Level 7, Annotation and Image Markup) and is able to seamlessly integrate into the existing reporting workflow. In a survey with referring physicians, the vast majority strongly agreed that 3D visual summaries improve the communication of the radiologists' reports and aid communication with patients.

  9. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    Science.gov (United States)

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.

  10. A service protocol for post-processing of medical images on the mobile device

    Science.gov (United States)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With computing capability and display size growing, the mobile device has become a tool to help clinicians view patient information and medical images anywhere and anytime. Transferring medical images with large data sizes from a picture archiving and communication system to a mobile client is difficult and time-consuming, since the wireless network is unstable and limited in bandwidth. Besides, limited by computing capability, memory and battery endurance, it is hard to provide a satisfactory quality of experience for radiologists handling complex post-processing of medical images on the mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. In order to let mobile devices with different platforms access post-processing of medical images, the Extensible Markup Language is used to describe this protocol, which contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel value readout) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol. This instance supports mobile device access to post-processing of medical image services on the render server via a client application or a web page.
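    A request message in such a protocol might look like the following, built and round-tripped with Python's standard-library ElementTree; all tag and attribute names here are our invention for illustration, not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical request asking the render server for a maximum intensity
# projection; the schema (tag and attribute names) is illustrative only.
req = ET.Element("request", {"type": "3d-postprocessing"})
ET.SubElement(req, "session").text = "token-1234"
ET.SubElement(req, "series", {"uid": "1.2.840.999.1"})
op = ET.SubElement(req, "operation", {"name": "MIP"})
ET.SubElement(op, "param", {"name": "axis", "value": "axial"})

message = ET.tostring(req, encoding="unicode")
parsed = ET.fromstring(message)
assert parsed.find("operation").get("name") == "MIP"
```

    Because the message is plain XML, any mobile platform with an XML parser can produce and consume it, which is the interoperability argument the paper makes.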

  11. Coding practice of the Journal Article Tag Suite extensible markup language

    Directory of Open Access Journals (Sweden)

    Sun Huh

    2014-08-01

    In general, Journal Article Tag Suite (JATS) extensible markup language (XML) coding is processed automatically by an XML filtering program. In this article, the basic tagging in JATS is explained in terms of coding practice. A text editor that supports UTF-8 encoding is necessary to input JATS XML data that works in every language. Any character representable in Unicode can be used in JATS XML, and commonly available web browsers can be used to view JATS XML files. JATS XML files can refer to document type definitions, extensible stylesheet language files, and cascading style sheets, but they must specify the locations of those files. Tools for validating JATS XML files are available via the web sites of PubMed Central and ScienceCentral. Once these files are uploaded to a web server, they can be accessed from all over the world by anyone with a browser. Encoding an example article in JATS XML may help editors in deciding on the adoption of JATS XML.
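    For readers wanting to experiment, a heavily abridged JATS-like fragment can be typed in any UTF-8 text editor and inspected with standard XML tooling; this sketch omits the DTD reference and most required metadata that a real JATS article carries:

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative JATS-style fragment (not a complete valid article).
jats = """<article>
  <front>
    <article-meta>
      <title-group>
        <article-title>Coding practice of JATS XML</article-title>
      </title-group>
    </article-meta>
  </front>
  <body><p>Any Unicode character can appear here.</p></body>
</article>"""

root = ET.fromstring(jats)
title = root.findtext("front/article-meta/title-group/article-title")
print(title)  # "Coding practice of JATS XML"
```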

  12. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules becomes necessary. Our concept follows the current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the Extensible Markup Language (XML) that facilitates integration into current information systems and coding software while taking different languages and versions into account. In this context, XML offers a complete framework of related technologies and standard processing tools that helps to develop interoperable applications.

  13. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This tool extends current capabilities by executing global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses; in addition, it handles systems with discontinuous events and offers an intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for analyzing their SBML models of biochemical and cellular processes.
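    Partial rank correlation builds on rank-based correlation; as a minimal illustration of that ingredient (not SBML-SAT's implementation, which additionally regresses out the other parameters), plain Spearman rank correlation can be computed as the Pearson correlation of the ranks:

```python
def ranks(values):
    """Rank values from 1..n (no tie handling, for brevity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# A monotone (even nonlinear) parameter-output relation gives rho = 1.0,
# which is why rank methods suit global sensitivity analysis.
params = [0.1, 0.4, 0.2, 0.9, 0.5]
outputs = [p ** 3 for p in params]
print(spearman(params, outputs))  # 1.0
```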

  14. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    Science.gov (United States)

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research
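    A schematic of what a SED-ML-style description contains (model reference, simulation settings, task, output) might look like the following; the element names echo Level 1 Version 1 concepts, but this fragment is illustrative and not schema-valid SED-ML:

```python
import xml.etree.ElementTree as ET

# Illustrative SED-ML-like skeleton: which model, which simulation,
# which task ties them together, and what to output.
sed = ET.Element("sedML")
ET.SubElement(sed, "model", id="m1", source="model.xml")
ET.SubElement(sed, "uniformTimeCourse", id="sim1",
              initialTime="0", outputEndTime="100", numberOfPoints="50")
ET.SubElement(sed, "task", id="t1", modelReference="m1",
              simulationReference="sim1")
ET.SubElement(sed, "plot2D", id="p1", taskReference="t1")

doc = ET.fromstring(ET.tostring(sed))
task = doc.find("task")
print(task.get("modelReference"), task.get("simulationReference"))
```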

  15. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    Science.gov (United States)

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from

  16. A projection graphic display for the computer aided analysis of bubble chamber images

    International Nuclear Information System (INIS)

    Solomos, E.

    1979-01-01

    A projection graphic display for aiding the analysis of bubble chamber photographs has been developed by the Instrumentation Group of EF Division at CERN. The display image is generated on a very high brightness cathode ray tube and projected on to the table of the scanning-measuring machines as a superposition to the image of the bubble chamber. The display can send messages to the operator and aid the measurement by indicating directly on the chamber image the tracks which are measured correctly or not. (orig.)

  17. A digital imaging teaching file by using the internet, HTML and personal computers

    International Nuclear Information System (INIS)

    Chun, Tong Jin; Jeon, Eun Ju; Baek, Ho Gil; Kang, Eun Joo; Baik, Seung Kug; Choi, Han Yong; Kim, Bong Ki

    1996-01-01

    A film-based teaching file takes up space, and the need to search through such a file places limits on the extent to which it is likely to be used. Furthermore, it is not easy for doctors in a medium-sized hospital to experience a variety of cases, so for these reasons we created an easy-to-use digital imaging teaching file with HTML (Hypertext Markup Language) and images downloaded via World Wide Web (WWW) services on the Internet. This was suitable for use by computer novices. We used WWW Internet services as a resource for various images and three different IBM-PC compatible computers (386DX, 486DX-II, and Pentium) for downloading the images and developing the digitized teaching file. These computers were connected to the Internet through a high-speed dial-up modem (28.8 Kbps), and Twinsock and Netscape were used to navigate the Internet. Korean word processing software (version 3.0) was used to create the HTML files, and the downloaded images were linked to them. In this way, a digital imaging teaching file program was created. Access to a Web service via the Internet required a high-speed computer (at least a 486DX-II with 8 MB RAM) for comfortable use; this also ensured that the quality of downloaded images was not degraded during downloading and that they were good enough to use in a teaching file. The time needed to retrieve the text and related images depends on the size of the file, the speed of the network, and the network traffic at the time of connection. For computer novices, a digital image teaching file using HTML is easy to use. Our method of creating a digital imaging teaching file using the Internet and HTML is straightforward, and radiologists with little computer experience who want to study various digital radiologic imaging cases should find it easy to use
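    The approach of the article can be reproduced today with a few lines of scripting: a plain HTML page linking case descriptions to downloaded images. The filenames and case content below are made up for illustration:

```python
# Generate a minimal teaching-file index page in plain HTML.
# Case names, diagnoses, and image filenames are hypothetical.
cases = [("Case 1", "Pneumothorax", "case1.jpg"),
         ("Case 2", "Rib fracture", "case2.jpg")]

rows = "\n".join(
    f'<li><a href="{img}">{name}</a>: {diagnosis}</li>'
    for name, diagnosis, img in cases
)
page = f"""<html><head><title>Teaching File</title></head>
<body><h1>Digital Imaging Teaching File</h1>
<ul>
{rows}
</ul></body></html>"""

print("case1.jpg" in page)  # True
```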

  18. A method for volumetric imaging in radiotherapy using single x-ray projection

    International Nuclear Information System (INIS)

    Xu, Yuan; Yan, Hao; Ouyang, Luo; Wang, Jing; Jiang, Steve B.; Jia, Xun; Zhou, Linghong; Cervino, Laura

    2015-01-01

    Purpose: It is an intriguing problem to generate an instantaneous volumetric image based on the corresponding x-ray projection. The purpose of this study is to develop a new method to achieve this goal via a sparse learning approach. Methods: To extract motion information hidden in projection images, the authors partitioned a projection image into small rectangular patches. The authors utilized a sparse learning method to automatically select patches that have a high correlation with principal component analysis (PCA) coefficients of a lung motion model. A model that maps the patch intensity to the PCA coefficients was built along with the patch selection process. Based on this model, a measured projection can be used to predict the PCA coefficients, which are then further used to generate a motion vector field and hence a volumetric image. The authors have also proposed an intensity baseline correction method based on the partitioned projection, in which the first and the second moments of pixel intensities at a patch in a simulated projection image are matched with those in a measured one via a linear transformation. The proposed method has been validated in both simulated data and real phantom data. Results: The algorithm is able to identify patches that contain relevant motion information such as the diaphragm region. It is found that an intensity baseline correction step is important to remove the systematic error in the motion prediction. For the simulation case, the sparse learning model reduced the prediction error for the first PCA coefficient to 5%, compared to the 10% error when sparse learning was not used, and the 95th percentile error for the predicted motion vector was reduced from 2.40 to 0.92 mm. In the phantom case with a regular tumor motion, the predicted tumor trajectory was successfully reconstructed with a 0.82 mm error for tumor center localization compared to a 1.66 mm error without using the sparse learning method. When the tumor motion
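    The intensity baseline correction step described above (matching the first and second moments of patch intensities via a linear transformation) is easy to sketch; this is an illustrative reimplementation of the idea, not the authors' code:

```python
def moments(xs):
    """Return (mean, standard deviation) of a list of intensities."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var ** 0.5

def match_moments(simulated, measured):
    """Linear map a*x + b so the simulated patch matches the measured
    patch's mean and standard deviation."""
    ms, ss = moments(simulated)
    mm, sm = moments(measured)
    a = sm / ss          # scale to match the standard deviation
    b = mm - a * ms      # shift to match the mean
    return [a * x + b for x in simulated]

# Toy patch intensities (hypothetical values).
simulated = [10.0, 12.0, 14.0, 16.0]
measured = [100.0, 104.0, 108.0, 112.0]
corrected = match_moments(simulated, measured)
print(moments(corrected))  # matches the measured mean and std
```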

  19. Design and Development of a New Multi-Projection X-Ray System for Chest Imaging

    Science.gov (United States)

    Chawla, Amarpreet S.; Boyce, Sarah; Washington, Lacey; McAdams, H. Page; Samei, Ehsan

    2009-02-01

    Overlapping anatomical structures may confound the detection of abnormal pathology, including lung nodules, in conventional single-projection chest radiography. To minimize this fundamental limiting factor, a dedicated digital multi-projection system for chest imaging was recently developed at the Radiology Department of Duke University. We are reporting the design of the multi-projection imaging system and its initial performance in an ongoing clinical trial. The system is capable of acquiring multiple full-field projections of the same patient along both the horizontal and vertical axes at variable speeds and acquisition frame rates. These images acquired in rapid succession from slightly different angles about the posterior-anterior (PA) orientation can be correlated to minimize the influence of overlying anatomy. The developed system has been tested for repeatability and motion blur artifacts to investigate its robustness for clinical trials. Excellent geometrical consistency was found in the tube motion, with positional errors for clinical settings within 1%. The effect of tube-motion on the image quality measured in terms of impact on the modulation transfer function (MTF) was found to be minimal. The system was deemed clinic-ready and a clinical trial was subsequently launched. The flexibility of image acquisition built into the system provides a unique opportunity to easily modify it for different clinical applications, including tomosynthesis, correlation imaging (CI), and stereoscopic imaging.

  20. Projection correction for the pixel-by-pixel basis in diffraction enhanced imaging

    International Nuclear Information System (INIS)

    Huang Zhifeng; Kang Kejun; Li Zheng

    2006-01-01

    Theories and methods of x-ray diffraction enhanced imaging (DEI) and computed tomography based on DEI (DEI-CT) have been investigated recently, but the phenomenon of projection offsets, which may affect the accuracy of refraction-angle image extraction methods and DEI-CT reconstruction algorithms, has seldom been addressed. This paper focuses on it. Projection offsets are revealed distinctly according to the equivalent rectilinear propagation model of the DEI. An effective correction method using the equivalent positions of projection data is then presented to eliminate the errors induced by projection offsets. The correction method is validated by a computer simulation experiment, and extraction methods or reconstruction algorithms based on the corrected data can give more accurate results. The limitations of the correction method are discussed at the end

  1. THE IMAGE REGISTRATION OF FOURIER-MELLIN BASED ON THE COMBINATION OF PROJECTION AND GRADIENT PREPROCESSING

    Directory of Open Access Journals (Sweden)

    D. Gao

    2017-09-01

    Image registration is one of the most important applications in the field of image processing. The Fourier-Mellin transform method is widely used because of its high precision and good robustness to changes in light and shade, partial occlusion, noise and so on. However, this method cannot obtain a unique cross-power pulse function for non-parallel image pairs, and for some image pairs no cross-power pulse can be obtained at all. In this paper, an image registration method based on the Fourier-Mellin transform with combined projection and gradient preprocessing is proposed. According to the projection transformation equation, the method calculates the image projection transformation matrix to correct the tilted image; then, gradient preprocessing and the Fourier-Mellin transform are performed on the corrected image to obtain the registration parameters. The experimental results show that the method makes Fourier-Mellin registration applicable not only to parallel image pairs but also to non-parallel image pairs, and a better registration result is obtained
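    The translational core of Fourier-Mellin registration is phase correlation: the normalized cross-power spectrum of a shifted pair is a pure phase ramp whose inverse transform peaks at the shift. A tiny 1D pure-Python sketch follows (real implementations use 2D FFTs plus log-polar resampling to handle rotation and scale):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform, enough for a demo."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[m] * cmath.exp(sign * 2j * cmath.pi * k * m / n)
               for m in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def phase_correlate(f, g):
    """Return the circular shift s such that g[n] == f[(n - s) % len(f)]."""
    F, G = dft(f), dft(g)
    cross = [gi * fi.conjugate() for fi, gi in zip(F, G)]
    norm = [c / (abs(c) + 1e-12) for c in cross]   # keep only the phase
    peak = dft(norm, inverse=True)
    return max(range(len(peak)), key=lambda i: peak[i].real)

f = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
g = [f[(n - 3) % len(f)] for n in range(len(f))]   # shift f by 3
print(phase_correlate(f, g))  # 3
```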

  2. Decoding using back-project algorithm from coded image in ICF

    International Nuclear Information System (INIS)

    Jiang shaoen; Liu Zhongli; Zheng Zhijian; Tang Daoyuan

    1999-01-01

    The principle of coded imaging and its decoding in inertial confinement fusion (ICF) is described briefly. The authors take the ring aperture microscope as an example and use a back-projection (BP) algorithm to decode the coded image. The decoding program was used for numerical simulation. Simulations of two models show that the accuracy of the BP algorithm is high and the reconstruction quality is good, indicating that the BP algorithm is applicable to decoding coded images in ICF experiments
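    Back-projection itself can be shown on a toy problem: smear each projection back across the image and sum, and the reconstruction peaks at the source. This toy uses two orthogonal projections of a single point source, not the ring-aperture geometry of the paper:

```python
# Toy back-projection from two orthogonal projections of a 2D image.
W = H = 5
img = [[0.0] * W for _ in range(H)]
img[1][3] = 9.0                      # a single bright source

row_proj = [sum(row) for row in img]            # projection along x
col_proj = [sum(img[i][j] for i in range(H))    # projection along y
            for j in range(W)]

# Back-project: smear each projection back across the image and sum.
bp = [[row_proj[i] + col_proj[j] for j in range(W)] for i in range(H)]

# The reconstruction peaks at the true source position.
peak = max(((i, j) for i in range(H) for j in range(W)),
           key=lambda ij: bp[ij[0]][ij[1]])
print(peak)  # (1, 3)
```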

  3. Facilitating NCAR Data Discovery by Connecting Related Resources

    Science.gov (United States)

    Rosati, A.

    2012-12-01

    Linking datasets, creators, and users by employing the proper standards helps to increase the impact of funded research. In order for users to find a dataset, it must first be named. Data citations play the important role of giving datasets a persistent presence by assigning a formal "name" and location. This project focuses on the next step of the "name-find-use" sequence: enhancing discoverability of NCAR data by connecting related resources on the web. By examining metadata schemas that document datasets, I explored how Semantic Web approaches can help to reach the widest possible range of data users. The focus was to move from search engine optimization (SEO) to information connectivity. Two markup types are highly visible in the Semantic Web and applicable to scientific dataset discovery: the Open Archives Initiative-Object Reuse and Exchange (OAI-ORE - www.openarchives.org) and Microdata (HTML5 and www.schema.org). My project creates pilot aggregations of related resources using both markup types for three case studies: the North American Regional Climate Change Assessment Program (NARCCAP) dataset and related publications; the Palmer Drought Severity Index (PDSI) animation and image files from NCAR's Visualization Lab (VisLab); and the multidisciplinary data types and formats from the Advanced Cooperative Arctic Data and Information Service (ACADIS). This project documents the differences between these markups and how each creates connectedness on the web. My recommendations point toward the most efficient and effective markup schema for aggregating resources within the three case studies, based on the following assessment criteria: ease of use, current state of support and adoption of the technology, integration with typical web tools, available vocabularies and geoinformatic standards, interoperability with current repositories and access portals (e.g. ESG, Java), and relation to data citation tools and methods.
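    Of the two markup types compared, Microdata is the easier to show inline: a schema.org Dataset item whose properties a crawler (or the toy parser below) can harvest. The dataset name and property values here are illustrative:

```python
from html.parser import HTMLParser

# Illustrative schema.org Dataset microdata snippet.
markup = """<div itemscope itemtype="https://schema.org/Dataset">
  <span itemprop="name">NARCCAP regional climate projections</span>
  <span itemprop="license">CC-BY</span>
</div>"""

class MicrodataParser(HTMLParser):
    """Collects itemprop name/value pairs from simple microdata markup."""
    def __init__(self):
        super().__init__()
        self.prop = None
        self.items = {}
    def handle_starttag(self, tag, attrs):
        self.prop = dict(attrs).get("itemprop")
    def handle_data(self, data):
        if self.prop and data.strip():
            self.items[self.prop] = data.strip()
            self.prop = None

p = MicrodataParser()
p.feed(markup)
print(p.items["name"])
```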

  4. Comparison of power spectra for tomosynthesis projections and reconstructed images

    International Nuclear Information System (INIS)

    Engstrom, Emma; Reiser, Ingrid; Nishikawa, Robert

    2009-01-01

    Burgess et al. [Med. Phys. 28, 419-437 (2001)] showed that the power spectrum of mammographic breast background follows a power law and that lesion detectability is affected by the power-law exponent β which measures the amount of structure in the background. Following the study of Burgess et al., the authors measured and compared the power-law exponent of mammographic backgrounds in tomosynthesis projections and reconstructed slices to investigate the effect of tomosynthesis imaging on background structure. Our data set consisted of 55 patient cases. For each case, regions of interest (ROIs) were extracted from both projection images and reconstructed slices. The periodogram of each ROI was computed by taking the squared modulus of the Fourier transform of the ROI. The power-law exponent was determined for each periodogram and averaged across all ROIs extracted from all projections or reconstructed slices for each patient data set. For the projections, the mean β averaged across the 55 cases was 3.06 (standard deviation of 0.21), while it was 2.87 (0.24) for the corresponding reconstructions. The difference in β for a given patient between the projection ROIs and the reconstructed ROIs averaged across the 55 cases was 0.194, which was statistically significant (p<0.001). The 95% CI for the difference between the mean value of β for the projections and reconstructions was [0.170, 0.218]. The results are consistent with the observation that the amount of breast structure in the tomosynthesis slice is reduced compared to projection mammography and that this may lead to improved lesion detectability.
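    The power-law fit underlying β can be sketched as an ordinary least-squares regression of the log-periodogram on log-frequency, with the negated slope giving the exponent. The spectrum below is synthetic with a known exponent, not mammographic data:

```python
import math

# Synthetic ideal periodogram P(f) ~ f^-beta with a known exponent.
beta_true = 3.0
freqs = list(range(1, 65))
power = [k ** (-beta_true) for k in freqs]

# Least-squares slope of log P vs log f; beta is the negated slope.
xs = [math.log(f) for f in freqs]
ys = [math.log(p) for p in power]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
beta_est = -slope
print(round(beta_est, 3))  # 3.0
```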

  5. From whole-body counting to imaging: The computer aided collimation gamma camera project (CACAO)

    Energy Technology Data Exchange (ETDEWEB)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Ballongue, P

    2000-07-01

    Whole-body counting is the method of choice for in vivo detection of contamination. To extend this well established method, the possible advantages of imaging radiocontaminants are examined. The use of the CACAO project is then studied. A comparison of simulated reconstructed images obtained by the CACAO project and by a conventional gamma camera used in nuclear medicine follows. Imaging a radionuclide contaminant with a geometrical sensitivity of 10^-2 seems possible in the near future. (author)

  6. From whole-body counting to imaging: The computer aided collimation gamma camera project (CACAO)

    International Nuclear Information System (INIS)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Ballongue, P.

    2000-01-01

    Whole-body counting is the method of choice for in vivo detection of contamination. To extend this well established method, the possible advantages of imaging radiocontaminants are examined. The use of the CACAO project is then studied. A comparison of simulated reconstructed images obtained by the CACAO project and by a conventional gamma camera used in nuclear medicine follows. Imaging a radionuclide contaminant with a geometrical sensitivity of 10^-2 seems possible in the near future. (author)

  7. Quantitative estimation of brain atrophy and function with PET and MRI two-dimensional projection images

    International Nuclear Information System (INIS)

    Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko; Toyama, Hinako; Ishii, Kenji; Senda, Michio

    2001-01-01

    The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial positron emission tomography (PET) and magnetic resonance imaging (MRI) images were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between 2D projection images of MRI and cerebral blood flow (CBF) or 18F-fluorodeoxyglucose (FDG) PET images, and the sulcus was extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulcus on the 2D-projection MRI, the cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding the sulcus on the 2D-projection PET image, and the relationship between cerebral atrophy and function was then evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or the cerebral blood flow was investigated. It was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young group. The mean value and the variance of the cerebral blood flow for the young group are greater than those of the aged. The sulci were similarly extracted using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorders of the cerebral cortex. (author)
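    The Mollweide method used above is the equal-area map projection of that name; its forward equations require solving 2θ + sin 2θ = π sin φ, typically by Newton iteration. A standard textbook implementation (not the authors' code):

```python
import math

def mollweide(lon, lat):
    """Forward Mollweide projection of (longitude, latitude) in radians."""
    if abs(abs(lat) - math.pi / 2) < 1e-12:
        t = math.copysign(math.pi / 2, lat)      # poles: closed form
    else:
        t = lat
        for _ in range(50):                      # Newton iteration
            f = 2 * t + math.sin(2 * t) - math.pi * math.sin(lat)
            t -= f / (2 + 2 * math.cos(2 * t))
    x = (2 * math.sqrt(2) / math.pi) * lon * math.cos(t)
    y = math.sqrt(2) * math.sin(t)
    return x, y

print(mollweide(0.0, 0.0))               # (0.0, 0.0): the origin maps to itself
print(mollweide(0.0, math.pi / 2)[1])    # sqrt(2): the north pole
```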

  8. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
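    One way such a hierarchical, multilingual electronic representation could look in XML is sketched below; the element names are invented for illustration and do not follow the ClaML or CEN/TC 251 schema:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment: chapter -> block -> category hierarchy with
# language-tagged rubrics (codes and labels are real ICD-10 examples,
# the markup itself is hypothetical).
icd = ET.fromstring("""
<classification name="ICD-10">
  <chapter code="X">
    <block code="J40-J47">
      <category code="J45">
        <rubric lang="en">Asthma</rubric>
        <rubric lang="de">Asthma bronchiale</rubric>
      </category>
    </block>
  </chapter>
</classification>
""")

# Coding software can resolve a code to its labels in several languages.
cat = icd.find(".//category[@code='J45']")
labels = {r.get("lang"): r.text for r in cat.findall("rubric")}
print(labels["en"])
```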

  9. A two-way interface between limited Systems Biology Markup Language and R

    Directory of Open Access Journals (Sweden)

    Radivoyevitch Tomas

    2004-12-01

    Background: Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. Results: A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML(), which maps this R model structure to SBML level 2, and read.SBML(), which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. Conclusions: List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.

  10. A two-way interface between limited Systems Biology Markup Language and R.

    Science.gov (United States)

    Radivoyevitch, Tomas

    2004-12-07

    Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML() which maps this R model structure to SBML level 2, and read.SBML() which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.
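    The two-way idea can be transposed to Python in miniature: write a toy model structure out to SBML-flavored XML and read it back, checking the round trip. The tag names imitate SBML level 2, but this is an illustrative sketch, not a validating reimplementation of the R interface:

```python
import xml.etree.ElementTree as ET

def write_sbml(model):
    """Map a toy model dict to SBML-flavored XML (like write.SBML)."""
    root = ET.Element("sbml")
    m = ET.SubElement(root, "model", id=model["id"])
    los = ET.SubElement(m, "listOfSpecies")
    for sp, amount in model["species"].items():
        ET.SubElement(los, "species", id=sp, initialAmount=str(amount))
    return ET.tostring(root)

def read_sbml(xml_bytes):
    """Map the XML back to the toy model dict (like read.SBML)."""
    m = ET.fromstring(xml_bytes).find("model")
    return {"id": m.get("id"),
            "species": {s.get("id"): float(s.get("initialAmount"))
                        for s in m.find("listOfSpecies")}}

# Species names and amounts below are hypothetical.
model = {"id": "purine", "species": {"ATP": 10.0, "GTP": 2.5}}
print(read_sbml(write_sbml(model)) == model)  # True: the round trip preserves the model
```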

  11. Text extraction method for historical Tibetan document images based on block projections

    Science.gov (United States)

    Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian

    2017-11-01

    Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using the categories of their connected components and their corner point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
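    The block-projection idea reduces to projection profiles in the simplest case: row sums of a binarized image locate candidate text bands. The tiny 0/1 image below is synthetic, not a historical Tibetan page:

```python
# Synthetic binary "page": 1 = ink, 0 = background.
img = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],   # text line 1
    [0, 1, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 0],   # text line 2
    [0, 0, 0, 0, 0, 0],
]

profile = [sum(row) for row in img]   # horizontal projection

# Contiguous runs of nonzero projection values are candidate text areas.
bands, start = [], None
for i, v in enumerate(profile + [0]):  # sentinel 0 closes a trailing run
    if v and start is None:
        start = i
    elif not v and start is not None:
        bands.append((start, i - 1))
        start = None
print(bands)  # [(1, 2), (4, 4)]
```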

  12. Descriptive Analysis on the Impacts of Universal Zero-Markup Drug Policy on a Chinese Urban Tertiary Hospital.

    Directory of Open Access Journals (Sweden)

    Wei Tian

    Universal Zero-Markup Drug Policy (UZMDP) mandates no price mark-ups on any drug dispensed by a healthcare institution, and covers the medicines not included in China's National Essential Medicine System. Five tertiary hospitals in Beijing, China implemented UZMDP in 2012. Its impacts on these hospitals are unknown. We described the effects of UZMDP on a participating hospital, Jishuitan Hospital, Beijing, China (JST). This retrospective longitudinal study examined the hospital-level data of JST and city-level data of tertiary hospitals of Beijing, China (BJT), 2009-2015. Rank-sum tests and joinpoint regression analyses were used to assess absolute changes and differences in trends, respectively. In absolute terms, after the UZMDP implementation, there were increased annual patient-visits and decreased ratios of medicine-to-healthcare-charges (RMOH) in JST outpatient and inpatient services; however, in the outpatient service, physician work-days decreased while physician workload and inflation-adjusted per-visit healthcare charges increased, whereas inpatient physician work-days increased and the inpatient mortality-rate fell. Interestingly, the decreasing trend in inpatient mortality-rate was neutralized after the UZMDP implementation. Compared with BJT and under the influence of UZMDP, JST outpatient and inpatient services both had increasing trends in annual patient-visits (annual percentage changes [APC] = 8.1% and 6.5%, respectively) and decreasing trends in RMOH (APC = -4.3% and -5.4%, respectively), while the JST outpatient service had an increasing trend in inflation-adjusted per-visit healthcare charges (APC = 3.4%) and the JST inpatient service had a decreasing trend in inflation-adjusted per-visit medicine-charges (APC = -5.2%). Implementation of UZMDP seems to increase annual patient-visits, reduce RMOH and have different impacts on outpatient and inpatient services in a Chinese urban tertiary hospital.

  13. Descriptive Analysis on the Impacts of Universal Zero-Markup Drug Policy on a Chinese Urban Tertiary Hospital.

    Science.gov (United States)

    Tian, Wei; Yuan, Jiangfan; Yang, Dong; Zhang, Lanjing

    2016-01-01

    Universal Zero-Markup Drug Policy (UZMDP) mandates no price mark-ups on any drug dispensed by a healthcare institution, and covers the medicines not included in China's National Essential Medicine System. Five tertiary hospitals in Beijing, China implemented UZMDP in 2012. Its impacts on these hospitals are unknown. We described the effects of UZMDP on a participating hospital, Jishuitan Hospital, Beijing, China (JST). This retrospective longitudinal study examined the hospital-level data of JST and city-level data of tertiary hospitals of Beijing, China (BJT) 2009-2015. Rank-sum tests and join-point regression analyses were used to assess absolute changes and differences in trends, respectively. In absolute terms, after the UZMDP implementation, there were increased annual patient-visits and decreased ratios of medicine-to-healthcare-charges (RMOH) in JST outpatient and inpatient services; however, in outpatient service, physician work-days decreased and physician-workload and inflation-adjusted per-visit healthcare charges increased, while the inpatient physician work-days increased and inpatient mortality-rate reduced. Interestingly, the decreasing trend in inpatient mortality-rate was neutralized after UZMDP implementation. Compared with BJT and under the influence of UZMDP, JST outpatient and inpatient services both had increasing trends in annual patient-visits (annual percentage changes [APC] = 8.1% and 6.5%, respectively) and decreasing trends in RMOH (APC = -4.3% and -5.4%, respectively), while JST outpatient services had an increasing trend in inflation-adjusted per-visit healthcare charges (APC = 3.4%) and JST inpatient service had a decreasing trend in inflation-adjusted per-visit medicine-charges (APC = -5.2%). Implementation of UZMDP seems to increase annual patient-visits, reduce RMOH and have different impacts on outpatient and inpatient services in a Chinese urban tertiary hospital.

  14. Projection model for flame chemiluminescence tomography based on lens imaging

    Science.gov (United States)

    Wan, Minggang; Zhuang, Jihui

    2018-04-01

    For flame chemiluminescence tomography (FCT) based on lens imaging, the projection model is essential because it formulates the mathematical relation between the flame projections captured by cameras and the chemiluminescence field, and, through this relation, the field is reconstructed. This work proposed the blurry-spot (BS) model, which makes more universal assumptions and has higher accuracy than the widely applied line-of-sight model. By combining the geometrical camera model and the thin-lens equation, the BS model accounts for the perspective effect of the camera lens; by combining a ray-tracing technique and Monte Carlo simulation, it also considers the inhomogeneous distribution of captured radiance on the image plane. The performance of these two models in FCT was numerically compared, and results showed that using the BS model could lead to better reconstruction quality over wider application ranges.
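
As context for the thin-lens ingredient of the BS model, the sketch below shows why a point source off the focal plane contributes a finite blur spot on the sensor rather than the single ray assumed by a line-of-sight model. The geometry, variable names and numbers are our own illustration, not taken from the paper:

```python
def thin_lens_image_distance(f, d_o):
    """Thin-lens equation: 1/f = 1/d_o + 1/d_i, solved for d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def blur_diameter(f, aperture, d_focus, d_point):
    """Diameter of the blur spot on the sensor for a point source at
    distance d_point when the lens is focused at d_focus (all in mm)."""
    d_i_focus = thin_lens_image_distance(f, d_focus)   # sensor plane
    d_i_point = thin_lens_image_distance(f, d_point)   # where the point focuses
    # similar triangles: blur grows with the gap between the two image planes
    return aperture * abs(d_i_point - d_i_focus) / d_i_point

# a point 50 mm behind the focal plane spreads over a finite spot, which a
# line-of-sight model would instead collapse onto a single pixel
print(blur_diameter(f=50.0, aperture=25.0, d_focus=500.0, d_point=550.0))
```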

  15. Measurement of inter and intra fraction organ motion in radiotherapy using cone beam CT projection images

    International Nuclear Information System (INIS)

    Marchant, T E; Amer, A M; Moore, C J

    2008-01-01

    A method is presented for extraction of intra- and inter-fraction motion of seeds/markers within the patient from cone beam CT (CBCT) projection images. The position of the marker is determined on each projection image and fitted to a function describing the projection of a fixed point onto the imaging panel at different gantry angles. The fitted parameters provide the mean marker position with respect to the isocentre. Differences between the theoretical function and the actual projected marker positions are used to estimate the range of intra-fraction motion and the principal motion axis in the transverse plane. The method was validated using CBCT projection images of a static marker at known locations and of a marker moving with known amplitude. The mean difference between actual and measured motion range was less than 1 mm in all directions, although errors of up to 5 mm were observed when large amplitude motion was present in an orthogonal direction. In these cases it was possible to calculate the range of motion magnitudes consistent with the observed marker trajectory. The method was shown to be feasible using clinical CBCT projections of a pancreas cancer patient.
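
The fitting step described above can be sketched as a least-squares fit of the projected marker position against gantry angle. This illustration assumes a simplified parallel-beam geometry with a fixed magnification; the actual method fits the full divergent cone-beam projection function:

```python
import numpy as np

# assumption for illustration: parallel-beam projection with magnification M;
# the lateral panel position of a fixed point (x, y) at gantry angle theta is
# u(theta) = M * (x*cos(theta) + y*sin(theta))
M = 1.5
rng = np.random.default_rng(0)
theta = np.deg2rad(np.arange(0, 360, 2))    # gantry angles of the projections

x_true, y_true = 4.0, -2.5                  # fixed marker offset from isocentre (mm)
u = M * (x_true * np.cos(theta) + y_true * np.sin(theta))
u += rng.normal(0, 0.2, u.size)             # marker-detection noise

# linear least squares for the mean marker position w.r.t. the isocentre
A = M * np.column_stack([np.cos(theta), np.sin(theta)])
(x_fit, y_fit), *_ = np.linalg.lstsq(A, u, rcond=None)
print(x_fit, y_fit)

# residuals between model and observations indicate intra-fraction motion
resid = u - A @ np.array([x_fit, y_fit])
print(resid.std())
```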

  16. Multiview Discriminative Geometry Preserving Projection for Image Classification

    Directory of Open Access Journals (Sweden)

    Ziqiang Wang

    2014-01-01

    Full Text Available In many image classification applications, it is common to extract multiple visual features from different views to describe an image. Since different visual features have their own specific statistical properties and discriminative powers for image classification, the conventional solution for multiple-view data is to concatenate these feature vectors into a new feature vector. However, this simple concatenation strategy not only ignores the complementary nature of different views, but also ends up with the “curse of dimensionality.” To address this problem, we propose a novel multiview subspace learning algorithm in this paper, named multiview discriminative geometry preserving projection (MDGPP), for feature extraction and classification. MDGPP can not only preserve the intraclass geometry and interclass discrimination information under a single view, but also explore the complementary property of different views to obtain a low-dimensional optimal consensus embedding by using an alternating-optimization-based iterative algorithm. Experimental results on face recognition and facial expression recognition demonstrate the effectiveness of the proposed algorithm.

  17. Restoration of the analytically reconstructed OpenPET images by the method of convex projections

    Energy Technology Data Exchange (ETDEWEB)

    Tashima, Hideaki; Murayama, Hideo; Yamaya, Taiga [National Institute of Radiological Sciences, Chiba (Japan); Katsunuma, Takayuki; Suga, Mikio [Chiba Univ. (Japan). Graduate School of Engineering; Kinouchi, Shoko [National Institute of Radiological Sciences, Chiba (Japan); Chiba Univ. (Japan). Graduate School of Engineering; Obi, Takashi [Tokyo Institute of Technology (Japan). Interdisciplinary Graduate School of Science and Engineering; Kudo, Hiroyuki [Tsukuba Univ. (Japan). Graduate School of Systems and Information Engineering

    2011-07-01

    We have proposed the OpenPET geometry, which has gaps between detector rings and a physically open field-of-view. Image reconstruction for the OpenPET is an incomplete problem because it does not satisfy Orlov's condition. Even so, simulation and experimental studies have shown that iterative methods such as the maximum likelihood expectation maximization (ML-EM) algorithm successfully reconstruct images in the gap area. However, the imaging process of the iterative methods in OpenPET imaging is not clear. Therefore, the aim of this study is to analyze OpenPET imaging analytically and to estimate the implicit constraints involved in the iterative methods. To apply explicit constraints in OpenPET imaging, we used the method of convex projections to restore analytically reconstructed images in which low-frequency components are lost. Numerical simulations showed that similar restoration effects are involved in both the ML-EM algorithm and the method of convex projections. Therefore, the iterative methods have the advantageous effect of restoring lost frequency components in OpenPET imaging. (orig.)
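
The method of convex projections can be illustrated with a small 1D restoration toy: alternately project onto convex constraint sets (agreement with the measured frequency components, known object support, nonnegativity) to recover a lost low-frequency band. The test signal, lost band and constraint choices here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

n = 128
f_true = np.zeros(n)
f_true[40:60] = 1.0
f_true[80:90] = 0.5                        # nonnegative toy object

F_true = np.fft.fft(f_true)
lost = np.abs(np.fft.fftfreq(n)) < 0.03    # band lost by the reconstruction
known = ~lost

support = np.zeros(n, dtype=bool)
support[30:100] = True                     # assumed known object support

f = np.fft.ifft(np.where(known, F_true, 0)).real   # degraded image
err0 = np.linalg.norm(f - f_true)

for _ in range(500):
    F = np.fft.fft(f)
    F[known] = F_true[known]               # project: keep measured frequencies
    f = np.fft.ifft(F).real
    f[~support] = 0.0                      # project: known support (convex set)
    f = np.maximum(f, 0.0)                 # project: nonnegativity (convex set)

err = np.linalg.norm(f - f_true)
print(err0, err)  # POCS never increases the distance to the true image
```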

  18. The markup is the model: reasoning about systems biology models in the Semantic Web era.

    Science.gov (United States)

    Kell, Douglas B; Mendes, Pedro

    2008-06-07

    Metabolic control analysis, co-invented by Reinhart Heinrich, is a formalism for the analysis of biochemical networks, and is a highly important intellectual forerunner of modern systems biology. Exchanging ideas and exchanging models are part of the international activities of science and scientists, and the Systems Biology Markup Language (SBML) allows one to perform the latter with great facility. Encoding such models in SBML allows their distributed analysis using loosely coupled workflows, and with the advent of the Internet the various software modules that one might use to analyze biochemical models can reside on entirely different computers and even on different continents. Optimization is at the core of many scientific and biotechnological activities, and Reinhart made many major contributions in this area, stimulating our own activities in the use of the methods of evolutionary computing for optimization.

  19. Measuring Brand Image Effects of Flagship Projects for Place Brands

    DEFF Research Database (Denmark)

    Zenker, Sebastian; Beckmann, Suzanne C.

    2013-01-01

    Cities invest large sums of money in ‘flagship projects’, with the aim of not only developing the city as such, but also changing the perceptions of the city brand towards a desired image. The city of Hamburg, Germany, is currently investing €575 million in order to build a new symphony hall...... (Elbphilharmonie), €400 million to develop the ‘International Architectural Fair’ and it is also considering candidature again for the ‘Olympic Games’ in 2024/2028. As assessing the image effects of such projects is rather difficult, this article introduces an improved version of the Brand Concept Map approach......, which was originally developed for product brands. An experimental design was used to first measure the Hamburg brand as such and then the changes in the brand perceptions after priming the participants (N=209) for one of the three different flagship projects. The findings reveal several important...

  20. Fast, accurate, and robust automatic marker detection for motion correction based on oblique kV or MV projection image pairs

    International Nuclear Information System (INIS)

    Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den

    2010-01-01

    Purpose: A robust and accurate method that allows the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction of inter- or intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker-enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by the minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning is assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground-truth and on clinical data sets of 16 patients using manual marker annotations as ground-truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4±1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images

  1. Development of an image reconstruction algorithm for a few number of projection data

    International Nuclear Information System (INIS)

    Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson

    2007-01-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was planned for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. As the code correctly implemented the proposed method, good results were obtained, encouraging us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)

  2. Development of an image reconstruction algorithm for a few number of projection data

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Wilson S.; Brandao, Luiz E. [Instituto de Engenharia Nuclear (IEN-CNEN/RJ), Rio de Janeiro , RJ (Brazil)]. E-mails: wilson@ien.gov.br; brandao@ien.gov.br; Braz, Delson [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programa de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear]. E-mail: delson@mailhost.lin.ufrj.br

    2007-07-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was planned for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. As the code correctly implemented the proposed method, good results were obtained, encouraging us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)
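
The EM adaptation at the heart of the two records above uses the standard ML-EM multiplicative update, x ← x / (Aᵀ1) · Aᵀ(b / Ax). A minimal sketch on a toy system (the system matrix, dimensions and source positions are invented for illustration; this is not the SPECTEM code):

```python
import numpy as np

rng = np.random.default_rng(2)

# toy system: 16-pixel image, 4 projection views of 8 bins each; a random
# nonnegative matrix stands in for the true geometric system matrix
n_pix, n_meas = 16, 4 * 8
A = rng.uniform(0.0, 1.0, (n_meas, n_pix))

x_true = np.zeros(n_pix)
x_true[[3, 7, 12]] = [5.0, 2.0, 8.0]       # three point-like sources
b = A @ x_true                              # noiseless projection counts

# ML-EM iteration: x <- x / (A^T 1) * A^T (b / (A x))
x = np.ones(n_pix)                          # uniform nonnegative start
sens = A.sum(axis=0)                        # sensitivity image, A^T 1
for _ in range(200):
    ratio = b / np.maximum(A @ x, 1e-12)    # guard against division by zero
    x = x / sens * (A.T @ ratio)

print(np.round(x, 2))                       # concentrates on the true sources
```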

  3. Image reconstruction for digital breast tomosynthesis (DBT) by using projection-angle-dependent filter functions

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Park, Chulkyu; Cho, Hyosung; Je, Uikyu; Hong, Daeki; Lee, Minsik; Cho, Heemoon; Choi, Sungil; Koo, Yangseo [Yonsei University, Wonju (Korea, Republic of)

    2014-09-15

    Digital breast tomosynthesis (DBT) is considered in clinics as a standard three-dimensional imaging modality, allowing the earlier detection of cancer. It typically acquires only 10-30 projections over a limited angle range of 15–60° with a stationary detector and typically uses a computationally efficient filtered-backprojection (FBP) algorithm for image reconstruction. However, a common FBP algorithm yields poor image quality, resulting from the loss of the average image value and the presence of severe image artifacts, due to the elimination of the dc component of the image by the ramp filter and to the incomplete data, respectively. As an alternative, iterative reconstruction methods are often used in DBT to overcome these difficulties, even though they are computationally expensive. In this study, as a compromise, we considered a projection-angle-dependent filtering method in which one-dimensional geometry-adapted filter kernels are computed with the aid of a conjugate-gradient method and are incorporated into the standard FBP framework. We implemented the proposed algorithm and performed systematic simulation works to investigate the imaging characteristics. Our results indicate that the proposed method is superior to a conventional FBP method for DBT imaging and has a comparable computational cost, while preserving good image homogeneity and edge sharpening with no serious image artifacts.
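
As background for the filtering step discussed above, here is the standard frequency-domain ramp filtering of one projection row. Its zeroed DC bin is exactly the loss of the average image value that motivates replacing the ramp with geometry-adapted kernels (the toy Gaussian projection is our own; the paper's CG-computed kernels are not reproduced here):

```python
import numpy as np

def ramp_filter(projection):
    """Standard FBP row filtering: multiply the spectrum by |frequency|.
    Note that the DC component is zeroed, which discards the projection's
    mean value -- the loss the geometry-adapted kernels address."""
    freqs = np.fft.fftfreq(projection.size)
    return np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)).real

p = np.exp(-0.5 * ((np.arange(256) - 128) / 20.0) ** 2)  # toy projection row
q = ramp_filter(p)
print(abs(q.sum()))   # ~0: the filtered row has exactly zero mean
```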

  4. Three-dimensional DNA image cytometry by optical projection tomographic microscopy for early cancer diagnosis.

    Science.gov (United States)

    Agarwal, Nitin; Biancardi, Alberto M; Patten, Florence W; Reeves, Anthony P; Seibel, Eric J

    2014-04-01

    Aneuploidy is typically assessed by flow cytometry (FCM) and image cytometry (ICM). We used optical projection tomographic microscopy (OPTM) for assessing cellular DNA content using absorption and fluorescence stains. OPTM combines some of the attributes of both FCM and ICM and generates isometric high-resolution three-dimensional (3-D) images of single cells. Although the depth of field of the microscope objective was in the submicron range, it was extended by scanning the objective's focal plane. The extended depth of field image is similar to a projection in a conventional x-ray computed tomography. These projections were later reconstructed using computed tomography methods to form a 3-D image. We also present an automated method for 3-D nuclear segmentation. Nuclei of chicken, trout, and triploid trout erythrocyte were used to calibrate OPTM. Ratios of integrated optical densities extracted from 50 images of each standard were compared to ratios of DNA indices from FCM. A comparison of mean square errors with thionin, hematoxylin, Feulgen, and SYTOX green was done. Feulgen technique was preferred as it showed highest stoichiometry, least variance, and preserved nuclear morphology in 3-D. The addition of this quantitative biomarker could further strengthen existing classifiers and improve early diagnosis of cancer using 3-D microscopy.
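
The DNA-index comparison described above rests on ratios of integrated optical densities over segmented nuclei. A minimal sketch with an invented helper function and toy volumes (not the OPTM pipeline; a stoichiometric stain such as Feulgen is what makes summed OD proportional to DNA content):

```python
import numpy as np

def integrated_optical_density(volume, background=0.0):
    """Sum of per-voxel optical density over a segmented 3-D nucleus;
    proportional to DNA content for a stoichiometric stain."""
    od = np.clip(volume - background, 0.0, None)
    return od.sum()

rng = np.random.default_rng(3)
diploid = rng.uniform(0.4, 0.6, (20, 20, 20))   # toy "2N" reference nucleus
triploid = 1.5 * diploid                         # toy "3N" nucleus

ratio = integrated_optical_density(triploid) / integrated_optical_density(diploid)
print(ratio)   # ~1.5, matching the expected DNA index
```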

  5. SBMLeditor: effective creation of models in the Systems Biology Markup language (SBML).

    Science.gov (United States)

    Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas

    2007-03-06

    The need for a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low-level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. SBMLeditor is written in Java using JCompneur, a library providing interfaces to easily display an XML document as a tree. This decreases dramatically the development time for a new XML editor. The possibility to include custom dialogs for different tags allows a lot of freedom for the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors.
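
The core idea of presenting an XML document as a tree for controlled editing can be sketched with the standard library alone. The SBML fragment below is a simplified, hypothetical example (a real SBML file carries more required attributes and would be validated with libSBML, as the record notes):

```python
import xml.etree.ElementTree as ET

# minimal SBML-like fragment, structure only
SBML = """<sbml xmlns="http://www.sbml.org/sbml/level2" level="2" version="1">
  <model id="m1">
    <listOfSpecies>
      <species id="S1" compartment="c1"/>
      <species id="S2" compartment="c1"/>
    </listOfSpecies>
  </model>
</sbml>"""

def print_tree(elem, depth=0):
    """Display an XML document as an indented tree, the kind of view a
    low-level editor presents so each node can be edited in isolation."""
    tag = elem.tag.split('}')[-1]          # strip the XML namespace
    attrs = ' '.join(f'{k}={v!r}' for k, v in elem.attrib.items())
    print('  ' * depth + tag + (' ' + attrs if attrs else ''))
    for child in elem:
        print_tree(child, depth + 1)

print_tree(ET.fromstring(SBML))
```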

  6. Partial Fingerprint Image Enhancement using Region Division Technique and Morphological Transform

    International Nuclear Information System (INIS)

    Ahmad, A.; Arshad, I.; Raja, G.

    2015-01-01

    Fingerprints are the most renowned biometric trait for identification and verification. The quality of the fingerprint image plays a vital role in feature extraction and matching. Existing algorithms work well for good-quality fingerprint images but fail for partial fingerprint images, such as those obtained from excessively dry fingers or affected by disease, resulting in broken ridges. We propose an algorithm to enhance partial fingerprint images using morphological operations with a region division technique. The proposed method divides a low-quality image into six regions from top to bottom. The morphological operations choose an appropriate Structuring Element (SE) that joins broken ridges and thus enhances the image for further processing. The proposed method uses a line SE with a suitable angle theta and radius r in each region, based on the orientation of the ridges. The algorithm was applied to 14 low-quality fingerprint images from the FVC-2002 database. Experimental results show that percentage accuracy has been improved using the proposed algorithm. The manual markup has been reduced and an accuracy of 76.16% with an Equal Error Rate (EER) of 3.16% is achieved. (author)

  7. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    International Nuclear Information System (INIS)

    Chung, Hyekyun; Poulsen, Per Rugaard; Keall, Paul J.; Cho, Seungryong; Cho, Byungchul

    2016-01-01

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior

  8. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hyekyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea and Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 138-736 (Korea, Republic of); Poulsen, Per Rugaard [Department of Oncology, Aarhus University Hospital, Nørrebrogade 44, 8000 Aarhus C (Denmark); Keall, Paul J. [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Cho, Seungryong [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141 (Korea, Republic of); Cho, Byungchul, E-mail: cho.byungchul@gmail.com, E-mail: bcho@amc.seoul.kr [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505 (Korea, Republic of)

    2016-08-15

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior
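
The "simple linear model" for interdimensional correlation in the two records above can be sketched as an affine fit of the AP and LR motion to the directly observable SI motion (the state-augmented variant additionally uses the SI velocity). For simplicity this illustration fits synthetic 3D trajectories directly, whereas the paper estimates the parameters from the 2D projected positions; all trajectory numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# toy respiratory trajectory over one 60 s acquisition, with strongly
# correlated SI, AP and LR components (the premise of the model)
t = np.linspace(0, 60, 600)
si = 5.0 * np.sin(2 * np.pi * t / 4.0)              # superior-inferior (mm)
ap = 0.4 * si + 0.5 + rng.normal(0, 0.1, t.size)    # anterior-posterior (mm)
lr = -0.2 * si + 0.2 + rng.normal(0, 0.1, t.size)   # left-right (mm)

# simple linear model: AP and LR as affine functions of SI
X = np.column_stack([si, np.ones_like(si)])
(a_ap, b_ap), *_ = np.linalg.lstsq(X, ap, rcond=None)
(a_lr, b_lr), *_ = np.linalg.lstsq(X, lr, rcond=None)

ap_est = a_ap * si + b_ap
lr_est = a_lr * si + b_lr
rmse = np.sqrt(np.mean((ap_est - ap) ** 2 + (lr_est - lr) ** 2))
print(a_ap, a_lr, rmse)   # slopes near 0.4 and -0.2, sub-mm RMSE
```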

  9. Learning binary code via PCA of angle projection for image retrieval

    Science.gov (United States)

    Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong

    2018-01-01

    With the benefits of low storage costs and high query speeds, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing methods, learning a hashing function to embed high-dimensional features into Hamming space is a key step for accurate retrieval. The principal component analysis (PCA) technique is widely used in compact hashing methods: most of these methods adopt PCA projection functions to project the original data into several dimensions of real values, and then each of these projected dimensions is quantized into one bit by thresholding. However, the variances of different projected dimensions differ, and the real-valued projection produces large quantization error. To avoid the large quantization error of the real-valued projection, in this paper we propose to use a cosine-similarity projection for each dimension; the angle projection preserves the original structure and is more compact with the cosine values. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
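
A minimal sketch of the PCA-then-binarize pipeline discussed above. Normalizing each feature vector before projection makes the sign pattern depend only on the angles to the principal axes, a simplified stand-in for the paper's cosine-similarity projection (all data and parameters are invented, and the ITQ rotation step is omitted):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 64))           # toy image feature vectors
X -= X.mean(axis=0)                       # PCA requires centred data

# PCA projection to n_bits dimensions via SVD
n_bits = 16
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:n_bits].T                         # top principal directions

def encode(x):
    """Binary code: sign of the PCA projection of the unit-normalized
    vector, so the code depends only on angles, not magnitudes."""
    z = (x / np.linalg.norm(x)) @ P
    return (z > 0).astype(np.uint8)

q = X[0]                                  # use the first vector as a query
codes = np.array([encode(x) for x in X])
ham = (codes != encode(q)).sum(axis=1)    # Hamming distances to the query
print(ham[:5])                            # the query is at distance 0 to itself
```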

  10. Image reconstruction from projections and its application in emission computer tomography

    International Nuclear Information System (INIS)

    Kuba, Attila; Csernay, Laszlo

    1989-01-01

    Computer tomography is an imaging technique for producing cross-sectional images by reconstruction from projections. Its two main branches are called transmission and emission computer tomography (TCT and ECT, respectively). After an overview of the theory and practice of TCT and ECT, the first Hungarian ECT-type MB 9300 SPECT, consisting of a gamma camera and a Ketronic Medax N computer, is described, and its applications to radiological patient observations are discussed briefly. (R.P.) 28 refs.; 4 figs

  11. Optimized image acquisition for breast tomosynthesis in projection and reconstruction space

    OpenAIRE

    Chawla, Amarpreet S.; Lo, Joseph Y.; Baker, Jay A.; Samei, Ehsan

    2009-01-01

    Breast tomosynthesis has been an exciting new development in the field of breast imaging. While the diagnostic improvement via tomosynthesis is notable, the full potential of tomosynthesis has not yet been realized. This may be attributed to the dependency of the diagnostic quality of tomosynthesis on multiple variables, each of which needs to be optimized. Those include dose, number of angular projections, and the total angular span of those projections. In this study, the authors investigat...

  12. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-10-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short project-based engineering course with MATLAB applications, taken by Bachelor's degree students in Multimedia Engineering. The principal goal of all course lectures and hands-on laboratory activities was for the students to not only acquire image-specific technical skills but also a general knowledge of data analysis so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used for image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new. This approach has been used in STEM learning in recent decades. But there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course. The results described in this paper show that those students who took part

  13. The Ilac-Project Supporting Ancient Coin Classification by Means of Image Analysis

    Science.gov (United States)

    Kavelar, A.; Zambanini, S.; Kampel, M.; Vondrovec, K.; Siegl, K.

    2013-07-01

    This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the suture between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousands of coins. Furthermore, this system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of obverse and reverse for the coin of interest. ILAC explores different computer vision techniques and their combinations for the use of image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploit certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given as well as an outlook on the next steps of the project.

  14. THE ILAC-PROJECT: SUPPORTING ANCIENT COIN CLASSIFICATION BY MEANS OF IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    A. Kavelar

    2013-07-01

    This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the suture between computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousands of coins. Furthermore, this system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collection by simply uploading a photo of obverse and reverse for the coin of interest. ILAC explores different computer vision techniques and their combinations for the use of image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploit certain characteristics of the coin imagery. An overview of the methods explored so far and the respective experiments is given as well as an outlook on the next steps of the project.

  15. Tiny Devices Project Sharp, Colorful Images

    Science.gov (United States)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  16. Ultrasonic imaging of projected components of PFBR

    Energy Technology Data Exchange (ETDEWEB)

    Sylvia, J.I., E-mail: sylvia@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102, Tamil Nadu (India); Jeyan, M.R.; Anbucheliyan, M.; Asokane, C.; Babu, V. Rajan; Babu, B.; Rajan, K.K.; Velusamy, K.; Jayakumar, T. [Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102, Tamil Nadu (India)

    2013-05-15

    Highlights: ► Under sodium ultrasonic scanner in PFBR is for detecting protruding objects. ► Feasibility study for detecting absorber rods and their drive mechanisms. ► Developed in-house PC based ultrasonic imaging system. ► Different case studies were carried out on simulated ARDMs. ► Implemented the experimental results to PFBR application. -- Abstract: The 500 MWe, sodium cooled, Prototype Fast Breeder Reactor (PFBR) is at an advanced stage of construction at Kalpakkam in India. The opacity of sodium rules out visual inspection of immersed components by optical means. Since ultrasonic waves pass through sodium, ultrasonic techniques based on under-sodium ultrasonic scanners have been developed to obtain images in sodium. The main objective of such an Under Sodium Ultrasonic Scanner (USUSS) for the Prototype Fast Breeder Reactor (PFBR) is to detect and ensure that no core Sub Assembly (SA), Absorber Rod or Absorber Rod Drive Mechanism protrudes into the above-core plenum before fuel handling operations begin; any object protruding into the above-core plenum must therefore be detected and located. To study the feasibility of detecting the absorber rods and their drive mechanisms using a direct ultrasonic imaging technique, experiments were carried out for different orientations and profiles of the projected components in a 5 m diameter water tank. The in-house developed PC based ultrasonic scanning system is used for acquisition and analysis of data. The pseudo three dimensional color images obtained are discussed and the results are applicable to PFBR. This paper gives details of the absorber rods and their drive mechanisms, their orientation in the reactor core, the experimental setup, the PC based ultrasonic scanning system, the ultrasonic images and a discussion of the results.

  17. Methods of X-ray CT image reconstruction from few projections

    International Nuclear Information System (INIS)

    Wang, H.

    2011-01-01

    To improve the safety (low dose) and the productivity (fast acquisition) of an X-ray CT system, we want to reconstruct a high quality image from a small number of projections. Classical reconstruction algorithms generally fail here since the reconstruction procedure is unstable and suffers from artifacts. A new approach based on the recently developed 'Compressed Sensing' (CS) theory assumes that the unknown image is in some sense 'sparse' or 'compressible', and the reconstruction is formulated as a nonlinear optimization problem (TV/l1 minimization) that enhances this sparsity. Using the pixel (or voxel in 3D) as basis, applying the CS framework in CT usually requires a 'sparsifying' transform, combined with the 'X-ray projector' that acts on the pixel image. In this thesis, we have adapted a 'CT-friendly' radial basis of the Gaussian family, called 'blob', to the CS-CT framework. The blob has better space-frequency localization properties than the pixel, and many operations, such as the X-ray transform, can be evaluated analytically and are highly parallelizable (on GPU platforms). Compared to the classical Kaiser-Bessel blob, the new basis has a multi-scale structure: an image is the sum of dilated and translated radial Mexican hat functions. Typical medical objects are compressible under this basis, so the separate sparse representation system used in ordinary CS algorithms is no longer needed. 2D simulations show that the existing TV and l1 algorithms are more efficient and the reconstructions have better visual quality than the equivalent approach based on the pixel or wavelet basis. The new approach has also been validated on 2D experimental data, where we have observed that in general the number of projections can be reduced to about 50%, without compromising the image quality. (author) [fr
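The sparsity-enhancing reconstruction idea above can be illustrated with a toy sketch (not the thesis's blob basis): ISTA, a standard proximal-gradient solver for the l1-regularized least-squares problem that underlies many CS reconstructions. The measurement matrix, sizes and parameters below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined system: 40 "projection" measurements of an 80-pixel
# image that is 5-sparse (most pixels are exactly zero).
n, m = 80, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)
b = A @ x_true

# ISTA: a gradient step on 0.5*||Ax - b||^2, then soft-thresholding --
# the proximal step that enhances sparsity (the l1 part of TV/l1 schemes).
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(5000):
    x = x - step * A.T @ (A @ x - b)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With far fewer measurements than pixels, the plain least-squares problem is underdetermined; the l1 shrinkage is what singles out the sparse solution.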

  18. Motion nature projection reduces patient's psycho-physiological anxiety during CT imaging.

    NARCIS (Netherlands)

    Zijlstra, Emma; Hagedoorn, Mariët; Krijnen, Wim; van der Schans, Cees; Mobach, Mark P.

    2017-01-01

    A growing body of evidence indicates that natural environments can positively influence people. This study investigated whether the use of motion nature projection in computed tomography (CT) imaging rooms is effective in mitigating psycho-physiological anxiety (vs. no intervention) using a

  19. Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.

    Science.gov (United States)

    Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo

    2017-03-03

    Soft X-ray microscopy has been developed for high resolution imaging of hydrated biological specimens owing to the availability of the water window region. In particular, projection microscopy offers a wide viewing area, an easy zooming function and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. However, this correction was found not to be effective for all images, especially those with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as a substitute for chromosome images, one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two different parameters to evaluate noise effects according to each situation in which the iteration procedure was unsuccessful, and proposed an upper limit on the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that the new simulation and noise evaluation method is useful for image processing where background noise cannot be ignored relative to the specimen images.

  20. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, the fingerprint is a 3D biological characteristic: the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system based on the fringe projection technique is presented to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. The fringe patterns are deformed by the finger surface and captured by a CCD camera from another viewpoint. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and software development. Experiments were carried out to acquire several 3D fingerprint data sets, and the experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
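The fringe-projection principle described above can be sketched in one dimension: project phase-shifted sinusoidal patterns, then recover the wrapped phase with the standard three-step formula and unwrap it. The phase map, shift values and intensities below are hypothetical stand-ins, not the paper's optimum three-fringe-number scheme.

```python
import numpy as np

# Simulated absolute phase along one scan line: a linear carrier from the
# projected fringes plus a small surface-induced term (hypothetical values).
x = np.linspace(0.0, 1.0, 400)
phase_true = 2 * np.pi * 4 * x + 0.5 * np.sin(2 * np.pi * x)

# Three sinusoidal fringe patterns phase-shifted by -2*pi/3, 0, +2*pi/3.
A, B = 0.5, 0.4  # background intensity and fringe modulation
I1, I2, I3 = (A + B * np.cos(phase_true + d)
              for d in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3))

# Standard three-step formula, then 1-D unwrapping of the 2*pi jumps.
wrapped = np.arctan2(np.sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3)
unwrapped = np.unwrap(wrapped)
```

In a real system the unwrapped phase is then converted to height via the calibrated camera-projector geometry; here the recovered phase simply matches `phase_true` up to numerical error.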

  1. Image restoration by the method of convex projections: part 2 applications and numerical results.

    Science.gov (United States)

    Sezan, M I; Stark, H

    1982-01-01

    The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
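A minimal sketch of the projection-onto-convex-sets (POCS) principle behind this restoration method, using two toy convex sets as assumptions (an affine data-consistency set and the nonnegative orthant) rather than the paper's actual constraint sets: alternating orthogonal projections drive the iterate into their intersection.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 60

# Two convex sets: the affine set {x : Ax = b} (a stand-in for data
# consistency) and the nonnegative orthant {x : x >= 0} (a priori knowledge).
A = rng.standard_normal((m, n))
b = A @ np.abs(rng.standard_normal(n))   # consistent with some nonnegative x
A_pinv = np.linalg.pinv(A)

x = np.zeros(n)
for _ in range(2000):
    x = x + A_pinv @ (b - A @ x)         # orthogonal projection onto {Ax = b}
    x = np.maximum(x, 0.0)               # orthogonal projection onto {x >= 0}

residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

Each a priori fact that can be written as a convex set contributes one more projection to the cycle, which is what makes the framework convenient for incorporating prior information.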

  2. Project Blue: Optical Coronagraphic Imaging Search for Terrestrial-class Exoplanets in Alpha Centauri

    Science.gov (United States)

    Morse, Jon; Project Blue team

    2018-01-01

    Project Blue is a coronagraphic imaging space telescope mission designed to search for habitable worlds orbiting the nearest Sun-like stars in the Alpha Centauri system. With a 45-50 cm baseline primary mirror size, Project Blue will perform a reconnaissance of the habitable zones of Alpha Centauri A and B in blue light and one or two longer wavelength bands to determine the hue of any planets discovered. Light passing through the off-axis telescope feeds into a coronagraphic instrument that forms the heart of the mission. Various coronagraph designs are being considered, such as phase induced amplitude apodization (PIAA), vector vortex, etc. Differential orbital image processing techniques will be employed to analyze the data for faint planets embedded in the residual glare of the parent star. Project Blue will advance our knowledge about the presence or absence of terrestrial-class exoplanets in the habitable zones and measure the brightness of zodiacal dust around each star, which will aid future missions in planning their observational surveys of exoplanets. It also provides on-orbit demonstration of high-contrast coronagraphic imaging technologies and techniques that will be useful for planning and implementing future space missions by NASA and other space agencies. We present an overview of the science goals, mission concept and development schedule. As part of our cooperative agreement with NASA, the Project Blue team intends to make the data available in a publicly accessible archive.

  3. Neural network CT image reconstruction method for small amount of projection data

    International Nuclear Information System (INIS)

    Ma, X.F.; Fukuhara, M.; Takeda, T.

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. Whereas the conventional objective function for such a neural network is a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. The method is especially useful for analyses of laboratory experiments or field observations where, in comparison with well-developed medical applications, only a small amount of projection data is available.
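The paper's key idea, minimizing the squared residuals of the projection (integral) equations rather than output errors, can be sketched with a plain linear parameterization in place of the neural network; the tiny row/column-sum "projector" below is an illustrative assumption, not the paper's numerical line integral.

```python
import numpy as np

n = 8
img = np.zeros((n, n))
img[2:5, 3:6] = 1.0                      # toy object
x_true = img.ravel()

# Only 16 "line integrals": sums along every row and every column.
A = np.zeros((2 * n, n * n))
for i in range(n):
    A[i, i * n:(i + 1) * n] = 1.0        # row sums
    A[n + i, i::n] = 1.0                 # column sums
p = A @ x_true

# Minimize the sum of squared residuals of the integral equations.
x = np.zeros(n * n)
step = 0.05
for _ in range(1000):
    x -= step * A.T @ (A @ x - p)        # gradient of 0.5*||Ax - p||^2

# The projections are matched exactly, but the image itself is not unique:
# 16 equations cannot pin down 64 pixels, which is why extra structure
# (such as the network parameterization in the paper) matters.
residual = np.linalg.norm(A @ x - p)
```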

  4. Neural network CT image reconstruction method for small amount of projection data

    CERN Document Server

    Ma, X F; Takeda, T

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. Whereas the conventional objective function for such a neural network is a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. The method is especially useful for analyses of laboratory experiments or field observations where, in comparison with well-developed medical applications, only a small amount of projection data is available.

  5. Final Project Report: Imaging Fault Zones Using a Novel Elastic Reverse-Time Migration Imaging Technique

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chen, Ting [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tan, Sirui [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lin, Youzuo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gao, Kai [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-10

    Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply-dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field obtained using our seismic inversion and migration imaging algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results of 3D surface seismic data from the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada, Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.

  6. Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.

    Science.gov (United States)

    Watanabe, Leandro; Myers, Chris J

    2016-08-19

    The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.
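The runtime and memory advantage of array-aware analysis can be sketched outside any SBML tooling: simulate N identical circuits as one vectorized state array instead of N expanded model copies. The ODE and rate constants below are toy assumptions, not the repressilator or genetic toggle switch models from the paper.

```python
import numpy as np

# One state vector for N identical cells instead of N expanded model copies --
# the essence of exploiting the arrays structure during simulation.
N, steps, dt = 1000, 2000, 0.01
k_prod, k_deg = 5.0, 1.0                 # toy production and degradation rates

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 1.0, N)             # initial protein level per cell

for _ in range(steps):
    x += dt * (k_prod - k_deg * x)       # Euler step applied to all cells at once
```

Every cell relaxes toward the steady state k_prod / k_deg = 5.0; expanding the array into 1000 separate model instances would produce identical trajectories at far greater cost.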

  7. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core

    Directory of Open Access Journals (Sweden)

    Gauges Ralph

    2015-06-01

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e., the positioning and connections of the elements in the diagram) and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded.

  8. Systematic reconstruction of TRANSPATH data into cell system markup language.

    Science.gov (United States)

    Nagasaki, Masao; Saito, Ayumu; Li, Chen; Jeong, Euna; Miyano, Satoru

    2008-06-23

    Many biological repositories store information based on experimental study of the biological processes within a cell, such as protein-protein interactions, metabolic pathways, signal transduction pathways, or the regulation of transcription factors and miRNAs. Unfortunately, it is difficult to use such information directly when generating simulation-based models. Thus, modeling rules for encoding biological knowledge into system-dynamics-oriented standardized formats would be very useful for fully understanding cellular dynamics at the system level. We selected the TRANSPATH database, a manually curated high-quality pathway database, which provides a plentiful source of cellular events in humans, mice, and rats, collected from over 31,500 publications. In this work, we have developed 16 modeling rules based on hybrid functional Petri net with extension (HFPNe), which is suitable for graphically representing and simulating biological processes. In the modeling rules, each Petri net element is incorporated with the Cell System Ontology (CSO) to enable semantic interoperability of models. As a formal ontology for biological pathway modeling with dynamics, CSO also defines biological terminology and corresponding icons. By combining HFPNe with the CSO features, it is possible to turn TRANSPATH data into simulation-ready, semantically valid models. The results are encoded into a biological pathway format, Cell System Markup Language (CSML), which eases the exchange and integration of biological data and models. Using the 16 modeling rules, 97% of the reactions in TRANSPATH are converted into simulation-based models represented in CSML. This reconstruction demonstrates that our rules can be used to generate quantitative models from static pathway descriptions.

  9. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies.

    Science.gov (United States)

    Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross

    2016-06-01

    To develop and evaluate a fast and simple tool called dpetstep (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of petstep (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby the effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). dpetstep was 8000 times faster than MC. Dynamic images from dpetstep had a root mean square error that was on average within 4% of that of MC images, whereas the GAUSS images were within 11%. The average bias in dpetstep and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dpetstep images conformed well to MC images, confirmed visually by scatter plot histograms and statistically by tumor region of interest histogram comparisons that showed no significant differences. The authors conclude that dpetstep generates both dynamic images and subsequent parametric images with noise properties very similar to those of MC images, in a fraction of the time. They believe dpetstep to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models, it may not be suitable for studies investigating these phenomena. dpetstep can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
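The general recipe the abstract describes, per-voxel time activity curves with frame-wise counting noise, can be sketched as follows; the one-tissue compartment model, input function and count scaling below are illustrative assumptions, not dpetstep's actual modules or API.

```python
import numpy as np

rng = np.random.default_rng(3)

# Time grid (s) and a simple analytic plasma input function (hypothetical).
t = np.arange(0.0, 3600.0, 10.0)
dt = 10.0
Cp = 300.0 * t * np.exp(-t / 120.0) / 120.0

# One-tissue compartment model: Ct = (K1 * exp(-k2*t)) convolved with Cp.
K1, k2 = 0.1, 0.01                         # toy rate constants
Ct = np.convolve(Cp, K1 * np.exp(-k2 * t))[: t.size] * dt

# Frame-by-frame counting noise: Poisson on expected counts, back to activity.
scale = 0.5                                # assumed detected counts per activity unit
counts = rng.poisson(Ct * scale * dt)
tac_noisy = counts / (scale * dt)
```

A full simulator would additionally blur each frame with the system resolution, add scatter/randoms, and reconstruct; this sketch only covers the kinetic-model and counting-noise steps.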

  10. Nonnegative least-squares image deblurring: improved gradient projection approaches

    Science.gov (United States)

    Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.

    2010-02-01

    The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods converging to nonnegative least-squares solutions have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Although they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose applying to these algorithms special acceleration techniques that have recently been developed in the area of gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in computational time; in particular, SGP appears to be by far the most efficient.
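The projected Landweber (PL) iteration discussed above is easy to state: a plain gradient step on the least-squares functional followed by projection onto the nonnegative orthant. The following sketch uses an assumed 1-D Gaussian blur and fixed step length, not the paper's test problems or accelerated step-length rules.

```python
import numpy as np

n = 100

# Nonnegative 1-D "image" and a normalized Gaussian blur operator A.
x_true = np.zeros(n)
x_true[[20, 45, 70]] = [1.0, 2.0, 1.5]
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
b = A @ x_true                                    # blurred, noise-free data

# Projected Landweber: gradient step on 0.5*||Ax - b||^2, then clipping.
tau = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(5000):
    x = x + tau * A.T @ (b - A @ x)               # Landweber (gradient) step
    x = np.maximum(x, 0.0)                        # projection onto x >= 0
```

The large iteration count illustrates the slowness the paper sets out to fix; its step-length selection and line-search strategies aim to reach a comparable residual in far fewer iterations.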

  11. ADASS Web Database XML Project

    Science.gov (United States)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed, and parsing them proved to be an iterative process. It was evident from the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
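The relational-to-XML mapping described here can be sketched with standard-library tools, using SQLite and a hypothetical table in place of the actual MySQL conference database:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A toy relational table standing in for the conference database
# (the schema and rows are hypothetical).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE papers (id INTEGER PRIMARY KEY, author TEXT, title TEXT)")
db.executemany("INSERT INTO papers (author, title) VALUES (?, ?)",
               [("Barg", "Web Database XML Project"), ("Stobie", "Archive Tools")])

# Map each row to an XML element, mirroring column names as child tags.
root = ET.Element("papers")
for pid, author, title in db.execute("SELECT id, author, title FROM papers"):
    paper = ET.SubElement(root, "paper", id=str(pid))
    ET.SubElement(paper, "author").text = author
    ET.SubElement(paper, "title").text = title

xml_text = ET.tostring(root, encoding="unicode")
```

Because the XML is generated from the database rather than scraped from hand-written HTML, it is well formed by construction, which is the point the abstract makes about standardization.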

  12. Neural network algorithm for image reconstruction using the grid friendly projections

    International Nuclear Information System (INIS)

    Cierniak, R.

    2011-01-01

    Full text: This paper describes the development of an original approach to the reconstruction problem using a recurrent neural network. In particular, the 'grid-friendly' projection angles are selected according to the discrete Radon transform (DRT) concept to decrease the number of projections required. The methodology of our approach is consistent with analytical reconstruction algorithms. The reconstruction problem is reformulated as an optimization problem, which is solved using a method based on maximum-likelihood methodology. The reconstruction algorithm proposed in this work is then adapted to the more practical case of discrete fan-beam projections. Computer simulation results show that a neural network reconstruction algorithm designed to work in this way outperforms conventional methods in reconstructed image quality. (author)

  13. Image applications for coastal resource planning: Elkhorn Slough Pilot Project

    Science.gov (United States)

    Kvitek, Rikk G.; Sharp, Gary D.; VanCoops, Jonathan; Fitzgerald, Michael

    1995-01-01

    The purpose of this project has been to evaluate the utility of digital spectral imagery, at two levels of resolution, for large scale, accurate, automatic classification of land cover along the Central California Coast. Although remote sensing technology offers obvious advantages over on-the-ground mapping, there are substantial trade-offs to be made between resolving power and cost. Higher resolution images can theoretically be used to identify smaller habitat patches, but they usually require more scenes to cover a given area, and processing these images is computationally intensive, requiring much more computer time and memory. Lower resolution images can cover much larger areas and are less costly to store, process, and manipulate, but due to their larger pixel size they can lack the resolving power of the denser images. This lack of resolving power can be critical in regions such as the Central California Coast, where important habitat change often occurs on a scale of 10 meters. Our approach has been to compare vegetation and habitat classification results from two aircraft-based spectral scenes covering the same study area at different levels of resolution with a previously produced, ground-truthed land cover base map of the area. Both of the spectral images used for this project were of significantly higher resolution than the satellite-based LandSat scenes used in the C-CAP program. The lower reaches of the Elkhorn Slough watershed were chosen as an ideal study site because they encompass a suite of important vegetation types and habitat loss processes characteristic of the central coast region. Dramatic habitat alterations have occurred and are occurring within the Elkhorn Slough drainage area, including erosion and sedimentation, land use conversion, wetland loss, and incremental loss due to development and encroachment by agriculture. Additionally, much attention has already been focused on the Elkhorn Slough due to its status as a National Marine Education and Research

  14. Restructuring an EHR system and the Medical Markup Language (MML) standard to improve interoperability by archetype technology.

    Science.gov (United States)

    Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki

    2015-01-01

    In 2001, we developed an EHR system for regional healthcare information inter-exchange and to provide individual patient data to patients. This system was adopted in three regions in Japan. We also developed a Medical Markup Language (MML) standard for inter- and intra-hospital communications. The system was built on a legacy platform, however, and had not been appropriately maintained or updated to meet clinical requirements. To improve future maintenance costs, we reconstructed the EHR system using archetype technology on the Ruby on Rails platform, and generated MML equivalent forms from archetypes. The system was deployed as a cloud-based system for preliminary use as a regional EHR. The system now has the capability to catch up with new requirements, maintaining semantic interoperability with archetype technology. It is also more flexible than the legacy EHR system.

  15. Fan-beam and cone-beam image reconstruction via filtering the backprojection image of differentiated projection data

    International Nuclear Information System (INIS)

    Zhuang Tingliang; Leng Shuai; Nett, Brian E; Chen Guanghong

    2004-01-01

    In this paper, a new image reconstruction scheme is presented based on Tuy's cone-beam inversion scheme and its fan-beam counterpart. It is demonstrated that Tuy's inversion scheme may be used to derive a new framework for fan-beam and cone-beam image reconstruction. In this new framework, images are reconstructed via filtering the backprojection image of differentiated projection data. The new framework is mathematically exact and is applicable to a general source trajectory provided the Tuy data sufficiency condition is satisfied. By choosing a piece-wise constant function for one of the components in the factorized weighting function, the filtering kernel is one dimensional, viz. the filtering process is along a straight line. Thus, the derived image reconstruction algorithm is mathematically exact and efficient. In the cone-beam case, the derived reconstruction algorithm is applicable to a large class of source trajectories where the pi-lines or the generalized pi-lines exist. In addition, the new reconstruction scheme survives the super-short scan mode in both the fan-beam and cone-beam cases provided the data are not transversely truncated. Numerical simulations were conducted to validate the new reconstruction scheme for the fan-beam case

  16. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms have been used, the filtered back-projection (FBP) algorithm remains the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering the original projection data is a key step for suppressing artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, in this paper an improved wavelet denoising method combined with a parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the improved FBP based on the db2 wavelet and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
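As a minimal sketch of the filtering step this abstract discusses (not the authors' wavelet method), the snippet below contrasts the Ram-Lak ramp filter with a Hanning-windowed version in numpy; the window suppresses the high-frequency band where projection noise dominates:

```python
import numpy as np

def ramp_filter(n, window=None):
    """Frequency response of the FBP ramp filter for n detector samples.

    window=None   -> plain Ram-Lak filter, |f|
    window='hann' -> Hanning-apodized ramp, damping high-frequency noise
    """
    freqs = np.fft.fftfreq(n)                 # cycles/sample in [-0.5, 0.5)
    response = np.abs(freqs)                  # Ram-Lak: |f|
    if window == "hann":
        response *= 0.5 * (1.0 + np.cos(2.0 * np.pi * freqs))
    return response

n = 256
ram_lak = ramp_filter(n)
hann = ramp_filter(n, window="hann")

# Low frequencies are nearly untouched; the band near Nyquist (index
# n//2), where noise dominates, is strongly suppressed.
print(ram_lak[n // 2], hann[n // 2])   # 0.5 0.0
```

In an FBP implementation, each projection row would be multiplied by this response in the frequency domain before backprojection.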

  17. Image restoration by the method of convex projections: part 1 theory.

    Science.gov (United States)

    Youla, D C; Webb, H

    1982-01-01

    A projection operator onto a closed convex set in Hilbert space is one of the few examples of a nonlinear map that can be defined in simple abstract terms. Moreover, it minimizes distance and is nonexpansive, and therefore shares two of the more important properties of ordinary linear orthogonal projections onto closed linear manifolds. In this paper, we exploit the properties of these operators to develop several iterative algorithms for image restoration from partial data which permit any number of nonlinear constraints of a certain type to be subsumed automatically. Their common conceptual basis is as follows. Every known property of an original image f is envisaged as restricting it to lie in a well-defined closed convex set. Thus, m such properties place f in the intersection E(0) = E(1) ∩ E(2) ∩ ... ∩ E(m) of the corresponding closed convex sets E(1), E(2), ..., E(m). Given only the projection operators P(i) onto the individual E(i)'s, i = 1, ..., m, we restore f by recursive means. Clearly, in this approach, the realization of the P(i)'s in a Hilbert space setting is one of the major synthesis problems. Section I describes the geometrical significance of the three main theorems in considerable detail, and most of the underlying ideas are illustrated with the aid of simple diagrams. Section II presents rules for the numerical implementation of 11 specific projection operators which are found to occur frequently in many signal-processing applications, and the Appendix contains proofs of all the major results.
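The recursive restoration idea above can be sketched in a few lines of numpy: each known property is a convex set with a simple projection operator, and cycling through the projections converges to a point in the intersection. The two constraints used here (nonnegativity and a fixed total intensity) are illustrative choices, not the eleven operators of the paper:

```python
import numpy as np

def project_nonneg(x):
    """Projection onto the closed convex set {x : x >= 0}."""
    return np.maximum(x, 0.0)

def project_sum(x, s):
    """Projection onto the affine set {x : sum(x) = s}."""
    return x + (s - x.sum()) / x.size

def restore(x0, s, n_iter=2000):
    """Alternating projections onto convex sets (POCS).

    Each known property of the image restricts it to a convex set;
    cycling through the projection operators converges to a point
    in the intersection of the sets.
    """
    x = x0.copy()
    for _ in range(n_iter):
        x = project_nonneg(project_sum(x, s))
    return x

rng = np.random.default_rng(0)
x = restore(rng.normal(size=8), s=1.0)
print(x.min() >= 0.0, abs(x.sum() - 1.0) < 1e-6)   # True True
```

The restored vector satisfies both constraints simultaneously, which is the essence of the method regardless of how many convex sets are cycled through.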

  18. Exact fan-beam image reconstruction algorithm for truncated projection data acquired from an asymmetric half-size detector

    International Nuclear Information System (INIS)

    Leng Shuai; Zhuang Tingliang; Nett, Brian E; Chen Guanghong

    2005-01-01

    In this paper, we present a new algorithm designed for a specific data truncation problem in fan-beam CT. We consider a scanning configuration in which the fan-beam projection data are acquired from an asymmetrically positioned half-sized detector. Namely, the asymmetric detector only covers one half of the scanning field of view. Thus, the acquired fan-beam projection data are truncated at every view angle. If an explicit data rebinning process is not invoked, this data acquisition configuration will wreak havoc on many known fan-beam image reconstruction schemes, including the standard filtered backprojection (FBP) algorithm and the super-short-scan FBP reconstruction algorithms. However, we demonstrate that a recently developed fan-beam image reconstruction algorithm, which reconstructs an image via filtering a backprojection image of differentiated projection data (FBPD), survives the above fan-beam data truncation problem. Namely, we may exactly reconstruct the whole image object using the truncated data acquired in a full scan mode (2π angular range). We may also exactly reconstruct a small region of interest (ROI) using the truncated projection data acquired in a short-scan mode (less than 2π angular range). The most important characteristic of the proposed reconstruction scheme is that an explicit data rebinning process is not introduced. Numerical simulations were conducted to validate the new reconstruction algorithm.

  19. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    Energy Technology Data Exchange (ETDEWEB)

    Häggström, Ida, E-mail: haeggsti@mskcc.org [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 and Department of Radiation Sciences, Umeå University, Umeå 90187 (Sweden); Beattie, Bradley J.; Schmidtlein, C. Ross [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)

    2016-06-15

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for
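A toy sketch of the frame-wise simulation idea described above (illustrative only; dPETSTEP's actual models and parameters differ): generate a noise-free time activity curve for a voxel from a one-tissue compartment model, then apply counting noise whose expected counts scale with activity times frame duration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dynamic framing scheme: frame edges in minutes.
edges = np.array([0, 1, 2, 4, 8, 16, 32, 60], dtype=float)
dur = np.diff(edges)
t_mid = edges[:-1] + dur / 2

# Noise-free voxel TAC from a one-tissue compartment model with a
# mono-exponential plasma input Cp(t) = A*exp(-lam*t); the closed form is
#   C(t) = K1*A/(lam - k2) * (exp(-k2*t) - exp(-lam*t))
# Parameters are illustrative, not values from the paper.
A, lam, K1, k2 = 100.0, 0.3, 0.5, 0.1
tac = K1 * A / (lam - k2) * (np.exp(-k2 * t_mid) - np.exp(-lam * t_mid))

# Frame-wise counting noise: expected counts scale with activity x duration.
cps = 50.0                              # counts per activity-unit-minute (assumed)
counts = rng.poisson(tac * dur * cps)
noisy_tac = counts / (dur * cps)        # noisy TAC back in activity units

print(noisy_tac.round(1))
```

Repeating this per voxel of a parametric input image, then reconstructing each frame, is the general shape of the pipeline the abstract describes.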

  20. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    International Nuclear Information System (INIS)

    Häggström, Ida; Beattie, Bradley J.; Schmidtlein, C. Ross

    2016-01-01

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for

  1. Simulations of Aperture Synthesis Imaging Radar for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, C.; Belyey, V.

    2012-12-01

    EISCAT_3D is a project to build the next generation of incoherent scatter radars endowed with multiple 3-dimensional capabilities that will replace the current EISCAT radars in Northern Scandinavia. Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. To demonstrate the feasibility of the antenna configurations and the imaging inversion algorithms a simulation of synthetic incoherent scattering data has been performed. The simulation algorithm incorporates the ability to control the background plasma parameters with non-homogeneous, non-stationary components over an extended 3-dimensional space. Control over the positions of a number of separated receiving antennas, their signal-to-noise-ratios and arriving phases allows realistic simulation of a multi-baseline interferometric imaging radar system. The resulting simulated data is fed into various inversion algorithms. This simulation package is a powerful tool to evaluate various antenna configurations and inversion algorithms. Results applied to realistic design alternatives of EISCAT_3D will be described.

  2. THE USE OF PUBLIC RELATIONS IN PROJECTING AN ORGANIZATION'S POSITIVE IMAGE

    Directory of Open Access Journals (Sweden)

    Ioana Olariu

    2017-07-01

    This article is a theoretical approach to the importance of using public relations to help an organization project a positive image. The study of the impact information has on the image of organisations is an interesting research topic. Practice has proved that the image of institutions has a patrimonial value and is sometimes essential in raising their credibility. An image can be defined as the representation of certain attitudes, opinions or prejudices concerning a person, a group of persons or the public opinion regarding an institution. All specialists agree that a negative image affects, sometimes to an incredible extent, the success of an institution. In the contemporary age, we cannot speak about public opinion without taking into consideration the mass media as a main agent in transmitting information to the public, with unlimited possibilities of influencing or forming it. The plan for the PR department starts with its own declaration of principles, which describes its roles and contribution to the organisation.

  3. IMPROVED COMPRESSION OF XML FILES FOR FAST IMAGE TRANSMISSION

    Directory of Open Access Journals (Sweden)

    S. Manimurugan

    2011-02-01

    The eXtensible Markup Language (XML) is a format that is widely used as a tool for data exchange and storage. It is increasingly used in the secure transmission of image data over wireless networks and the World Wide Web. Verbose in nature, XML files can be tens of megabytes long. Thus, to reduce their size and allow faster transmission, compression becomes vital. Several general-purpose compression tools have been proposed without satisfactory results. This paper proposes a novel technique using a modified BWT (Burrows-Wheeler Transform) for compressing XML files in a lossless fashion. The experimental results show that the proposed technique outperforms both general-purpose and XML-specific compressors.
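For illustration of the underlying principle (the classic Burrows-Wheeler Transform, not the paper's modified variant), the following sketch shows that the transform is lossless and tends to group repeated XML tag characters into runs that later compression stages can exploit:

```python
def bwt(s):
    """Classic Burrows-Wheeler Transform with an explicit sentinel."""
    s += "\0"                                   # unique, lexicographically smallest
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def ibwt(last):
    """Invert the transform by repeatedly sorting prepended columns."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    original = next(row for row in table if row.endswith("\0"))
    return original[:-1]

xml = "<items><item>alpha</item><item>beta</item></items>"
transformed = bwt(xml)
assert ibwt(transformed) == xml                 # lossless round trip
print(transformed.replace("\0", "$"))
```

This naive O(n² log n) version is only for exposition; practical XML compressors use suffix-array-based transforms plus move-to-front and entropy coding.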

  4. Remote Imaging Projects In Research And Astrophotography With Starpals

    Science.gov (United States)

    Fischer, Audrey; Kingan, J.

    2008-05-01

    StarPals is a nascent non-profit organization with the goal of providing opportunities for international collaboration between students of all ages within space science research. We believe that by encouraging an interest in the cosmos, the one thing that is truly Universal, from a young age, students will not only further their knowledge of and interest in science but will learn valuable teamwork and life skills. The goal is to foster respect, understanding and appreciation of cultural diversity among all StarPals participants, whether students, teachers, or mentors. StarPals aims to inspire students by providing opportunities in which, more than simply visualizing themselves as research scientists, they can actually become one. The technologies of robotic telescopes, videoconferencing, and online classrooms are expanding the possibilities like never before. In honor of IYA2009, StarPals would like to encourage 400 schools to participate on a global scale in astronomy/cosmology research on various concurrent projects. We will offer in-person or online workshops and training sessions to teach the teachers. We will be seeking publication in scientific journals for some student research. For our current project, the Double Stars Challenge, students use the robotic telescopes to take a series of four images of one of 30 double stars from a list furnished by the US Naval Observatory and then use MPO Canopus software to take distance and position angle measurements. StarPals provides students with hands-on training, telescope time, and software to complete the imaging and measuring. A paper will be drafted from our research data and submitted to the Journal of Double Star Observations. The kids who participate in this project may potentially be the youngest contributors to an article in a vetted scientific journal. Kids rapidly adapt and improve their computer skills operating these telescopes and discover for themselves that science is COOL!

  5. Diagnosing and mapping pulmonary emphysema on X-ray projection images: incremental value of grating-based X-ray dark-field imaging.

    Science.gov (United States)

    Meinel, Felix G; Schwab, Felix; Schleede, Simone; Bech, Martin; Herzen, Julia; Achterhold, Klaus; Auweter, Sigrid; Bamberg, Fabian; Yildirim, Ali Ö; Bohla, Alexander; Eickelberg, Oliver; Loewen, Rod; Gifford, Martin; Ruth, Ronald; Reiser, Maximilian F; Pfeiffer, Franz; Nikolaou, Konstantin

    2013-01-01

    To assess whether grating-based X-ray dark-field imaging can increase the sensitivity of X-ray projection images in the diagnosis of pulmonary emphysema and allow for a more accurate assessment of emphysema distribution. Lungs from three mice with pulmonary emphysema and three healthy mice were imaged ex vivo using a laser-driven compact synchrotron X-ray source. Median signal intensities of transmission (T), dark-field (V) and a combined parameter (normalized scatter) were compared between the emphysema and control groups. To determine the diagnostic value of each parameter in differentiating between healthy and emphysematous lung tissue, a receiver-operating-characteristic (ROC) curve analysis was performed both on a per-pixel and a per-individual basis. Parametric maps of emphysema distribution were generated using the transmission, dark-field and normalized scatter signals and correlated with histopathology. Transmission values relative to water were higher for emphysematous lungs than for control lungs (1.11 vs. 1.06, p […]); […] emphysema provides color-coded parametric maps, which show the best correlation with histopathology. In a murine model, the complementary information provided by X-ray transmission and dark-field images adds incremental diagnostic value in detecting pulmonary emphysema and visualizing its regional distribution as compared to conventional X-ray projections.

  6. Improvement of image quality of holographic projection on tilted plane using iterative algorithm

    Science.gov (United States)

    Pang, Hui; Cao, Axiu; Wang, Jiazhou; Zhang, Man; Deng, Qiling

    2017-12-01

    Holographic image projection on a tilted plane has important application prospects. In this paper, we propose a method to compute a phase-only hologram that can reconstruct a clear image on a tilted plane. By adding a constant phase to the target image of the inclined plane, the corresponding light field distribution on the plane parallel to the hologram plane is derived through a tilted diffraction calculation. The phase distribution of the hologram is then obtained by an iterative algorithm with amplitude and phase constraints. Simulations and an optical experiment were performed to show the effectiveness of the proposed method.
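A minimal sketch of an iterative amplitude/phase-constrained algorithm of this kind, using the classic Gerchberg-Saxton scheme with plain FFT propagation between parallel planes (the tilted-plane diffraction step of the paper is omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)

def gerchberg_saxton(target, n_iter=50):
    """Phase retrieval for a phase-only hologram (Gerchberg-Saxton).

    Alternates between the hologram plane (unit-amplitude, phase-only
    constraint) and the image plane (target-amplitude constraint),
    with an FFT modeling propagation between the two planes.
    """
    field_img = target * np.exp(1j * 2 * np.pi * rng.random(target.shape))
    for _ in range(n_iter):
        holo = np.fft.ifft2(field_img)
        holo = np.exp(1j * np.angle(holo))                     # phase-only
        field_img = np.fft.fft2(holo)
        field_img = target * np.exp(1j * np.angle(field_img))  # keep target amplitude
    return holo

target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0          # simple square test image

holo = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(holo))
corr = np.corrcoef(recon.ravel(), target.ravel())[0, 1]
print(round(corr, 3))               # correlation with the target; higher is better
```

The hologram is strictly phase-only (unit amplitude everywhere), which is what a phase-only spatial light modulator can display.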

  7. GPU acceleration of 3D forward and backward projection using separable footprints for X-ray CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Meng; Fessler, Jeffrey A. [Michigan Univ., Ann Arbor, MI (United States). Dept. of Electrical Engineering and Computer Science

    2011-07-01

    Iterative 3D image reconstruction methods can improve image quality over conventional filtered back-projection (FBP) in X-ray computed tomography. However, high computational costs deter the routine clinical use of iterative reconstruction. The separable footprint method for forward and back-projection simplifies the integrals over a detector cell in a way that is quite accurate and also has a relatively efficient CPU implementation. In this project, we implemented the separable footprint method for both forward and back-projection on a graphics processing unit (GPU) with NVIDIA's parallel computing architecture (CUDA). This paper describes our GPU kernels for the separable footprint method and simulation results. (orig.)
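Whatever projector pair is implemented, iterative reconstruction expects backprojection to be the adjoint of forward projection. A tiny numpy check of the adjoint identity <Ax, y> = <x, A'y> for a toy two-angle parallel-beam projector (not the separable footprint method itself):

```python
import numpy as np

def forward(img):
    """Toy parallel-beam forward projection at 0 and 90 degrees:
    the sums of the image along its columns and rows."""
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def backward(sino, shape):
    """Adjoint of `forward`: smear each projection value back
    along the ray it was integrated over."""
    n0, n1 = shape
    return np.tile(sino[:n1], (n0, 1)) + np.tile(sino[n1:][:, None], (1, n1))

rng = np.random.default_rng(4)
x = rng.normal(size=(5, 7))
y = rng.normal(size=7 + 5)

# Adjoint (inner-product) test: <A x, y> must equal <x, A' y>.
lhs = forward(x) @ y
rhs = (x * backward(y, x.shape)).sum()
print(np.isclose(lhs, rhs))   # True
```

The same inner-product test is a standard sanity check when porting matched projector pairs to GPU kernels, where subtle interpolation mismatches between forward and backward passes are easy to introduce.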

  8. Sally Ride EarthKAM - Automated Image Geo-Referencing Using Google Earth Web Plug-In

    Science.gov (United States)

    Andres, Paul M.; Lazar, Dennis K.; Thames, Robert Q.

    2013-01-01

    Sally Ride EarthKAM is an educational program funded by NASA that aims to provide the public the ability to picture Earth from the perspective of the International Space Station (ISS). A computer-controlled camera is mounted on the ISS in a nadir-pointing window; however, timing limitations in the system cause inaccurate positional metadata. Manually correcting images within an orbit allows the positional metadata to be improved using mathematical regressions. The manual correction process is time-consuming and thus, unfeasible for a large number of images. The standard Google Earth program allows for the importing of KML (keyhole markup language) files that previously were created. These KML file-based overlays could then be manually manipulated as image overlays, saved, and then uploaded to the project server where they are parsed and the metadata in the database is updated. The new interface eliminates the need to save, download, open, re-save, and upload the KML files. Everything is processed on the Web, and all manipulations go directly into the database. Administrators also have the control to discard any single correction that was made and validate a correction. This program streamlines a process that previously required several critical steps and was probably too complex for the average user to complete successfully. The new process is theoretically simple enough for members of the public to make use of and contribute to the success of the Sally Ride EarthKAM project. Using the Google Earth Web plug-in, EarthKAM images, and associated metadata, this software allows users to interactively manipulate an EarthKAM image overlay, and update and improve the associated metadata. The Web interface uses the Google Earth JavaScript API along with PHP-PostgreSQL to present the user the same interface capabilities without leaving the Web. The simpler graphical user interface will allow the public to participate directly and meaningfully with EarthKAM. The use of

  9. PLÉIADES PROJECT: ASSESSMENT OF GEOREFERENCING ACCURACY, IMAGE QUALITY, PANSHARPENING PERFORMANCE AND DSM/DTM QUALITY

    Directory of Open Access Journals (Sweden)

    H. Topan

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs a MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance, GSD) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39-0.46 for the triplet panchromatic images, indicating satisfying image quality. SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by most common

  10. Respiratory compensation in projection imaging using a magnification and displacement model

    International Nuclear Information System (INIS)

    Crawford, C.R.; King, K.F.; Ritchie, C.J.; Godwin, J.D.

    1996-01-01

    Respiratory motion during the collection of computed tomography (CT) projections generates structured artifacts and a loss of resolution that can render the scans unusable. This motion is problematic in scans of patients who cannot suspend respiration, such as the very young or intubated patients. In this paper, the authors present an algorithm that can be used to reduce motion artifacts in CT scans caused by respiration. An approximate model for the effect of respiration is that the object cross section under interrogation experiences time-varying magnification and displacement along two axes. Using this model, an exact filtered backprojection algorithm is derived for the case of parallel projections. The result is extended to generate an approximate reconstruction formula for fan-beam projections. Computer simulations and scans of phantoms on a commercial CT scanner validate the new reconstruction algorithms for parallel and fan-beam projections. Significant reduction in respiratory artifacts is demonstrated clinically when the motion model is satisfied. The method can be applied to projection data used in CT, single photon emission computed tomography (SPECT), positron emission tomography (PET), and magnetic resonance imaging (MRI).

  11. Use of the geometric mean of opposing planar projections in pre-reconstruction restoration of SPECT images

    International Nuclear Information System (INIS)

    Boulfelfel, D.; Rangayyan, R.M.; Hahn, L.J.; Kloiber, R.

    1992-01-01

    This paper presents a restoration scheme for single photon emission computed tomography (SPECT) images that performs restoration before reconstruction (pre-reconstruction restoration) from planar (projection) images. In this scheme, the pixel-by-pixel geometric mean of each pair of opposing (conjugate) planar projections is computed prior to the reconstruction process. The averaging process is shown to help in making the degradation phenomenon less dependent on the distance of each point of the object from the camera. The restoration filters investigated are the Wiener and power spectrum equalization filters. (author)
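The depth-independence that motivates the geometric mean can be verified in a few lines: with exponential attenuation through a body of thickness L, a source at depth d contributes exp(-mu*d) to one projection and exp(-mu*(L-d)) to the conjugate one, so their geometric mean is exp(-mu*L/2) regardless of d. A numpy check with illustrative numbers:

```python
import numpy as np

mu, L = 0.15, 20.0                   # attenuation coefficient (1/cm), thickness (cm)
depths = np.linspace(1.0, 19.0, 7)   # source depths from the anterior surface

anterior = np.exp(-mu * depths)          # attenuation seen by the anterior view
posterior = np.exp(-mu * (L - depths))   # attenuation seen by the conjugate view
geo_mean = np.sqrt(anterior * posterior)

# Every entry equals exp(-mu * L / 2): the depth dependence cancels.
print(np.allclose(geo_mean, np.exp(-mu * L / 2)))   # True
```

This is why averaging conjugate planar views makes the degradation less dependent on the object-to-camera distance, which in turn lets a single restoration filter be applied before reconstruction.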

  12. Semantically supporting data discovery, markup and aggregation in the European Marine Observation and Data Network (EMODnet)

    Science.gov (United States)

    Lowry, Roy; Leadbetter, Adam

    2014-05-01

    The semantic content of the NERC Vocabulary Server (NVS) has been developed over thirty years. It has been used to mark up metadata and data in a wide range of international projects, including the European Commission (EC) Framework Programme 7 projects SeaDataNet and The Open Service Network for Marine Environmental Data (NETMAR). Within the United States, the National Science Foundation projects Rolling Deck to Repository and Biological & Chemical Data Management Office (BCO-DMO) use concepts from NVS for markup. Further, typed relationships link NVS concepts to terms served by the Marine Metadata Interoperability Ontology Registry and Repository. The largest collection publicly served from NVS (35% of ~82,000 concepts) forms the British Oceanographic Data Centre (BODC) Parameter Usage Vocabulary (PUV). The PUV is instantiated on the NVS as a SKOS concept collection. These terms are used to describe the individual channels in data and metadata served by, for example, BODC, SeaDataNet and BCO-DMO. The PUV terms are designed to be very precise and may contain a high level of detail. Some users have reported that the PUV is difficult to navigate due to its size and complexity (a problem CSIRO have begun to address by deploying a SISSVoc interface to the NVS), and it has been difficult to aggregate data because multiple PUV terms can, with full validity, be used to describe the same data channels. Better approaches to data aggregation are required as a use case for the PUV from the EC European Marine Observation and Data Network (EMODnet) Chemistry project. One solution, proposed and demonstrated during the course of the NETMAR project, is to build new SKOS concept collections which formalise the desired aggregations for given applications, and use typed relationships to state which PUV concepts contribute to a specific aggregation. Development of these new collections requires input from a group of experts in the application domain who can decide which PUV

  13. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in three dimensions, including sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Naturally Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as those employed in radio astronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  14. Digital breast tomosynthesis: computer-aided detection of clustered microcalcifications on planar projection images

    International Nuclear Information System (INIS)

    Samala, Ravi K; Chan, Heang-Ping; Lu, Yao; Hadjiiski, Lubomir M; Wei, Jun; Helvie, Mark A

    2014-01-01

    This paper describes a new approach to detecting microcalcification clusters (MCs) in digital breast tomosynthesis (DBT) via its planar projection (PPJ) image. With IRB approval, two-view (cranio-caudal and mediolateral oblique) DBTs of human subject breasts were obtained with a GE GEN2 prototype DBT system that acquires 21 projection angles spanning 60° in 3° increments. A data set of 307 volumes (154 human subjects) was divided by case into independent training (127 with MCs) and test sets (104 with MCs and 76 free of MCs). A simultaneous algebraic reconstruction technique with multiscale bilateral filtering (MSBF) regularization was used to enhance microcalcifications and suppress noise. During the MSBF regularized reconstruction, the DBT volume was separated into high frequency (HF) and low frequency components representing microcalcifications and larger structures, respectively. At the final iteration, maximum intensity projection was applied to the regularized HF volume to generate a PPJ image that contained MCs with increased contrast-to-noise ratio (CNR) and a reduced search space. High CNR objects in the PPJ image were extracted and labeled as microcalcification candidates. A convolutional neural network trained to recognize the image pattern of microcalcifications was used to separate the candidates into true calcifications versus tissue structures and artifacts. The remaining microcalcification candidates were grouped into MCs by dynamic conditional clustering based on an adaptive CNR threshold and radial distance criteria. False positive (FP) clusters were further reduced using the number of candidates in a cluster and the CNR and size of the microcalcification candidates. At 85% sensitivity, FP rates of 0.71 and 0.54 were achieved for view- and case-based sensitivity, respectively, compared to 2.16 and 0.85 achieved in DBT. The improvement was significant (p-value = 0.003) by JAFROC analysis. (paper)
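
    The planar projection step is, at its core, a maximum intensity projection of the regularized high-frequency volume along the depth axis. A toy NumPy sketch with a synthetic volume (not DBT data; the CNR measure here is a simplistic stand-in for the paper's):

```python
import numpy as np

# Synthetic stand-in for a regularized high-frequency DBT volume: (depth, y, x).
rng = np.random.default_rng(0)
volume = rng.normal(0.0, 1.0, size=(40, 64, 64))

# Plant a bright "microcalcification" at a single depth.
volume[17, 30, 45] = 50.0

# Planar projection (PPJ): maximum intensity projection along the depth axis,
# collapsing the 3-D search space to a single 2-D image.
ppj = volume.max(axis=0)          # shape (64, 64)

# A crude CNR estimate for the candidate against the rest of the PPJ image.
background = np.delete(ppj.ravel(), 30 * 64 + 45)
cnr = (ppj[30, 45] - background.mean()) / background.std()
print(ppj.shape, cnr > 10)
```

    The bright voxel survives the projection with high contrast, which is why candidate detection on the PPJ image is cheaper than searching the full volume.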

  15. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    International Nuclear Information System (INIS)

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  16. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    Energy Technology Data Exchange (ETDEWEB)

    Shieh, Chun-Chien [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006, Australia and Institute of Medical Physics, School of Physics, University of Sydney, NSW 2006 (Australia); Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Kuncic, Zdenka [Institute of Medical Physics, School of Physics, University of Sydney, NSW 2006 (Australia)

    2014-04-15

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR
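
    The difference between phase and displacement binning can be seen on a toy respiratory trace (the signal, period, and bin count below are synthetic, not from the study):

```python
import numpy as np

# Toy respiratory signal sampled at each projection (arbitrary units).
t = np.linspace(0.0, 20.0, 600)            # projection acquisition times (s)
signal = np.sin(2 * np.pi * t / 4.0)       # ~4 s breathing period

n_bins = 8

# Phase binning: divide each breathing cycle into equal time (phase) bins.
phase = (t % 4.0) / 4.0                    # phase in [0, 1)
phase_bin = np.minimum((phase * n_bins).astype(int), n_bins - 1)

# Displacement binning: divide the signal amplitude range into equal bins.
lo, hi = signal.min(), signal.max()
disp = (signal - lo) / (hi - lo)
disp_bin = np.minimum((disp * n_bins).astype(int), n_bins - 1)

# Displacement bins are typically unevenly populated (more projections near
# the end-exhale/end-inhale plateaus) - the interbin image quality variation
# and angular gaps the abstract refers to.
print(np.bincount(phase_bin, minlength=n_bins))
print(np.bincount(disp_bin, minlength=n_bins))
```

    Phase bins come out nearly uniform, while the extreme displacement bins collect far more projections than the middle ones.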

  17. Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm

    KAUST Repository

    Desmal, Abdulla

    2017-04-03

    An efficient electromagnetic inversion scheme for imaging sparse 3-D domains is proposed. The scheme achieves its efficiency and accuracy by integrating two concepts. First, the nonlinear optimization problem is constrained using L₀ or L₁-norm of the solution as the penalty term to alleviate the ill-posedness of the inverse problem. The resulting Tikhonov minimization problem is solved using nonlinear Landweber iterations (NLW). Second, the efficiency of the NLW is significantly increased using a steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without sacrificing the convergence of the algorithm. Numerical results demonstrate the efficiency and accuracy of the proposed imaging scheme in reconstructing sparse 3-D dielectric profiles.
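
    For a linear toy problem, the projected-thresholding idea in this abstract reduces to iterative hard thresholding: a gradient step followed by a projection that keeps only the k largest-magnitude entries. This is a simplified stand-in for the paper's nonlinear scheme; the problem sizes, step rule, and sparsity level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear inverse problem y = A x with a k-sparse x (the paper's EM
# problem is nonlinear; it linearizes around the current iterate).
m, n, k = 60, 120, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0
y = A @ x_true

def hard_threshold(v, k):
    """Projection enforcing sparsity: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# Projected (thresholded) steepest-descent iterations.
step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative hand-picked step size
x = np.zeros(n)
for _ in range(300):
    x = hard_threshold(x + step * A.T @ (y - A @ x), k)

print(float(np.linalg.norm(x - x_true)))
```

    As in the abstract, the threshold level and step size must be chosen carefully: too aggressive a step diverges, too small a one converges slowly.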

  18. Maximum intensity projection MR angiography using shifted image data

    International Nuclear Information System (INIS)

    Machida, Yoshio; Ichinose, Nobuyasu; Hatanaka, Masahiko; Goro, Takehiko; Kitake, Shinichi; Hatta, Junicchi.

    1992-01-01

    The quality of MR angiograms has been significantly improved in the past several years. Spatial resolution, however, is not sufficient for clinical use. On the other hand, MR image data can be interpolated at arbitrary positions using the Fourier shift theorem, and the quality of multi-planar reformatted images has been reported to improve remarkably when such 'shifted data' are used. In this paper, we clarify the effectiveness of 'shifted data' for maximum intensity projection MR angiography. Our experimental studies and theoretical considerations showed that the quality of MR angiograms is significantly improved using 'shifted data', as follows: 1) remarkable reduction of mosaic artifact, 2) improved spatial continuity of the blood vessels, and 3) reduced variance of the signal intensity along the blood vessels. In other words, the angiograms look much 'finer' than conventional ones, although the spatial resolution is not improved theoretically. Furthermore, we found that the quality of MR angiograms does not improve significantly with 'shifted data' more than twice as dense as the original data. (author)
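
    The 'shifted data' rest on the Fourier shift theorem: multiplying the spectrum by a linear phase ramp resamples a band-limited periodic signal between the original grid points, without any new acquisition. A 1-D NumPy sketch:

```python
import numpy as np

def fourier_shift(x, d):
    """Shift a periodic 1-D signal by d samples (d may be fractional)
    using the Fourier shift theorem: y[n] = x[n - d]."""
    k = np.fft.fftfreq(len(x))                 # frequencies in cycles/sample
    return np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * k * d)).real

x = np.cos(2 * np.pi * np.arange(64) / 64)

# An integer shift reproduces np.roll (up to floating-point error)...
assert np.allclose(fourier_shift(x, 3), np.roll(x, 3))

# ...and a half-sample shift lands on the analytic value of the cosine:
# this is how interleaved "shifted data" between grid points are obtained.
half = fourier_shift(x, 0.5)
expected = np.cos(2 * np.pi * (np.arange(64) - 0.5) / 64)
assert np.allclose(half, expected)
print("ok")
```

    Densifying the data this way adds no true resolution, which matches the paper's observation that gains saturate beyond twice the original sampling density.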

  19. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.

    2013-05-24

    Background Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper the generation of semantically rich data from the NWChem computational chemistry software is discussed within the Chemical Markup Language (CML) framework. Results The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files used by the computational chemistry software. Conclusions The production of CML compliant XML files for the computational chemistry software NWChem can be relatively easily accomplished using the FoX library. A unified computational chemistry or CompChem convention and dictionary needs to be developed through a community-based effort. The long-term goal is to enable a researcher to do Google-style chemistry and physics searches.
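
    For a flavor of what CML-style output looks like, the stdlib sketch below emits a minimal molecule fragment. Only the namespace is the real CML schema namespace; the element layout is simplified and the atoms and coordinates are invented, so this is not the actual NWChem/FoX output.

```python
import xml.etree.ElementTree as ET

# Illustrative CML-like fragment. CML_NS is the real CML schema namespace;
# the ids and coordinates below are invented.
CML_NS = "http://www.xml-cml.org/schema"
ET.register_namespace("cml", CML_NS)

molecule = ET.Element(f"{{{CML_NS}}}molecule", id="water")
atoms = ET.SubElement(molecule, f"{{{CML_NS}}}atomArray")
for aid, elem, x in [("a1", "O", 0.0), ("a2", "H", 0.757), ("a3", "H", -0.757)]:
    ET.SubElement(atoms, f"{{{CML_NS}}}atom",
                  id=aid, elementType=elem, x3=str(x))

xml_text = ET.tostring(molecule, encoding="unicode")
print(xml_text)
```

    The real convention additionally attaches dictionary references so that downstream tools such as Avogadro can interpret each quantity semantically.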

  20. The Biological Connection Markup Language: a SBGN-compliant format for visualization, filtering and analysis of biological pathways.

    Science.gov (United States)

    Beltrame, Luca; Calura, Enrica; Popovici, Razvan R; Rizzetto, Lisa; Guedez, Damariz Rivero; Donato, Michele; Romualdi, Chiara; Draghici, Sorin; Cavalieri, Duccio

    2011-08-01

    Many models and analyses of signaling pathways have been proposed. However, none of them takes into account that a biological pathway is not a fixed system but instead depends on the organism, tissue and cell type, as well as on physiological, pathological and experimental conditions. The Biological Connection Markup Language (BCML) is a format to describe, annotate and visualize pathways. BCML can store multiple types of information, permitting a selective view of the pathway as it exists and/or behaves in specific organisms, tissues and cells. Furthermore, BCML can be automatically converted into data formats suitable for analysis and into a fully SBGN-compliant graphical representation, making it an important tool that can be used by both computational biologists and 'wet lab' scientists. The XML schema and the BCML software suite are freely available under the LGPL for download at http://bcml.dc-atlas.net. They are implemented in Java and supported on MS Windows, Linux and OS X.

  1. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrates that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, a cost-effective hardware core was developed using Verilog HDL. The prototype chip has been verified on one low-cost programmable device. Combined with a few extra components, the real-time camera system achieves 1270 × 792 resolution and demonstrates each DSP function.
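
    The abstract does not specify which white-balance algorithm the DSP uses; the classic gray-world method is a plausible low-complexity choice, and it makes a compact sketch: scale each channel so the channel means are equal.

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale R, G, B so each channel's mean
    matches the overall mean. A common low-complexity method; the paper's
    exact algorithm is not specified in the abstract."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)    # per-channel means
    gains = means.mean() / means               # per-channel gains
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# A flat image with a blue-ish cast: R=80, G=100, B=140.
cast = np.full((4, 4, 3), (80, 100, 140), dtype=np.uint8)
balanced = gray_world(cast)
print(balanced[0, 0])   # channels pulled to a common neutral gray
```

    Gray-world needs only per-channel sums and three multipliers, which is why it maps well onto low-cost hardware like the Verilog core described here.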

  2. How many Enrons? Mark-ups in the stated capital cost of independent power producers' (IPPs') power projects in developing countries

    International Nuclear Information System (INIS)

    Phadke, Amol

    2009-01-01

    I analyze the determinants of the stated capital cost of IPPs' power projects, which significantly influences their price of power. I show that IPPs face a strong incentive to overstate their capital cost and argue that effective competition or regulatory scrutiny will limit the extent of such overstatement. I analyze the stated capital costs of combined cycle gas turbine (CCGT) IPP projects in eight developing countries which became operational during 1990-2006 and find that the stated capital cost of projects selected without competitive bidding is 44-56% higher than that of projects selected with competitive bidding, even after controlling for the effect of cost differences among projects. The extent to which the stated capital costs of projects selected without competitive bidding are higher than those of projects selected with competitive bidding is a lower bound on the extent to which they are overstated. My results indicate the drawbacks associated with a policy of promoting private sector participation without an adequate focus on improving competition or regulation. (author)

  3. A 3D imaging system integrating photoacoustic and fluorescence orthogonal projections for anatomical, functional and molecular assessment of rodent models

    Science.gov (United States)

    Brecht, Hans P.; Ivanov, Vassili; Dumani, Diego S.; Emelianov, Stanislav Y.; Anastasio, Mark A.; Ermilov, Sergey A.

    2018-03-01

    We have developed a preclinical 3D imaging instrument integrating photoacoustic tomography and fluorescence (PAFT) addressing known deficiencies in sensitivity and spatial resolution of the individual imaging components. PAFT is designed for simultaneous acquisition of photoacoustic and fluorescence orthogonal projections at each rotational position of a biological object, enabling direct registration of the two imaging modalities. Orthogonal photoacoustic projections are utilized to reconstruct large (21 cm³) volumes showing vascularized anatomical structures and regions of induced optical contrast with spatial resolution exceeding 100 µm. The major advantage of orthogonal fluorescence projections is significant reduction of background noise associated with transmitted or backscattered photons. The fluorescence imaging component of PAFT is used to boost detection sensitivity by providing low-resolution spatial constraint for the fluorescent biomarkers. PAFT performance characteristics were assessed by imaging optical and fluorescent contrast agents in tissue mimicking phantoms and in vivo. The proposed PAFT technology will enable functional and molecular volumetric imaging using fluorescent biomarkers, nanoparticles, and other photosensitive constructs mapped with high fidelity over robust anatomical structures, such as skin, central and peripheral vasculature, and internal organs.

  4. Breast EIT using a new projected image reconstruction method with multi-frequency measurements.

    Science.gov (United States)

    Lee, Eunjung; Ts, Munkh-Erdene; Seo, Jin Keun; Woo, Eung Je

    2012-05-01

    We propose a new method to produce admittivity images of the breast for the diagnosis of breast cancer using electrical impedance tomography (EIT). Considering the anatomical structure of the breast, we designed an electrode configuration where current-injection and voltage-sensing electrodes are separated in such a way that internal current pathways are approximately along the tangential direction of an array of voltage-sensing electrodes. Unlike conventional EIT imaging methods where the number of injected currents is maximized to increase the total amount of measured data, current is injected only twice between two pairs of current-injection electrodes attached along the circumferential side of the breast. For each current injection, the induced voltages are measured from the front surface of the breast using as many voltage-sensing electrodes as possible. Although this electrode configuration allows us to measure induced voltages only on the front surface of the breast, they are more sensitive to an anomaly inside the breast since such an injected current tends to produce a more uniform internal current density distribution. Furthermore, the sensitivity of a measured boundary voltage between two equipotential lines on the front surface of the breast is improved since those equipotential lines are perpendicular to the primary direction of internal current streamlines. One should note that this novel data collection method is different from those of other frontal plane techniques such as the x-ray projection and T-scan imaging methods because we do not get any data on the plane that is perpendicular to the current flow. To reconstruct admittivity images using two measured voltage data sets, a new projected image reconstruction algorithm is developed. Numerical simulations demonstrate the frequency-difference EIT imaging of the breast. The results show that the new method is promising to accurately detect and localize small anomalies inside the breast.

  5. Breast EIT using a new projected image reconstruction method with multi-frequency measurements

    International Nuclear Information System (INIS)

    Lee, Eunjung; Ts, Munkh-Erdene; Seo, Jin Keun; Woo, Eung Je

    2012-01-01

    We propose a new method to produce admittivity images of the breast for the diagnosis of breast cancer using electrical impedance tomography (EIT). Considering the anatomical structure of the breast, we designed an electrode configuration where current-injection and voltage-sensing electrodes are separated in such a way that internal current pathways are approximately along the tangential direction of an array of voltage-sensing electrodes. Unlike conventional EIT imaging methods where the number of injected currents is maximized to increase the total amount of measured data, current is injected only twice between two pairs of current-injection electrodes attached along the circumferential side of the breast. For each current injection, the induced voltages are measured from the front surface of the breast using as many voltage-sensing electrodes as possible. Although this electrode configuration allows us to measure induced voltages only on the front surface of the breast, they are more sensitive to an anomaly inside the breast since such an injected current tends to produce a more uniform internal current density distribution. Furthermore, the sensitivity of a measured boundary voltage between two equipotential lines on the front surface of the breast is improved since those equipotential lines are perpendicular to the primary direction of internal current streamlines. One should note that this novel data collection method is different from those of other frontal plane techniques such as the x-ray projection and T-scan imaging methods because we do not get any data on the plane that is perpendicular to the current flow. To reconstruct admittivity images using two measured voltage data sets, a new projected image reconstruction algorithm is developed. Numerical simulations demonstrate the frequency-difference EIT imaging of the breast. The results show that the new method is promising to accurately detect and localize small anomalies inside the breast. (paper)

  6. Image quality of microcalcifications in digital breast tomosynthesis: Effects of projection-view distributions

    OpenAIRE

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Goodsitt, Mitch; Carson, Paul L.; Hadjiiski, Lubomir; Schmitz, Andrea; Eberhard, Jeffrey W.; Claus, Bernhard E. H.

    2011-01-01

    Purpose: To analyze the effects of projection-view (PV) distribution on the contrast and spatial blurring of microcalcifications on the tomosynthesized slices (X-Y plane) and along the depth (Z) direction for the same radiation dose in digital breast tomosynthesis (DBT).Methods: A GE GEN2 prototype DBT system was used for acquisition of DBT scans. The system acquires PV images from 21 angles in 3° increments over a ±30° range. From these acquired PV images, the authors selected six subsets of...

  7. Future perspectives - proposal for Oxford Physiome Project.

    Science.gov (United States)

    Oku, Yoshitaka

    2010-01-01

    The Physiome Project is an effort to understand living creatures using an "analysis by synthesis" strategy, i.e., by reproducing their behaviors. To achieve this goal, it is critical that developed models can be shared between different computer languages and application programs and incorporated into integrated models. To date, several XML-based markup languages have been developed for this purpose. However, source code written in XML-based languages is very difficult to read and edit using text editors. An alternative is to use an object-oriented meta-language, which can be translated to different computer languages and transplanted to different application programs. Object-oriented languages are suitable for describing structural organization with hierarchical classes and for taking advantage of statistical properties to reduce the number of parameters while keeping the complexity of behaviors. Using object-oriented languages to describe each element and posting it to a public domain should be the next step in building up integrated models of the respiratory control system.
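
    The object-oriented idea can be sketched as a small class hierarchy, where a subclass states only how it differs from its parent. The class names and parameters here are invented illustrations, not part of any Physiome standard.

```python
# Illustrative sketch of describing model elements with hierarchical classes;
# names and parameters are invented examples.

class ModelElement:
    """Base class: anything with named parameters that can be simulated."""
    def __init__(self, **params):
        self.params = params

    def describe(self):
        return f"{type(self).__name__}({self.params})"

class Neuron(ModelElement):
    """A generic neuron; defaults capture shared structure."""
    def __init__(self, capacitance=1.0, **params):
        super().__init__(capacitance=capacitance, **params)

class RespiratoryNeuron(Neuron):
    """Specialization inherits everything; only the differences are stated."""
    def __init__(self, burst_threshold=-50.0, **params):
        super().__init__(burst_threshold=burst_threshold, **params)

cell = RespiratoryNeuron()
print(cell.describe())
```

    Such a hierarchy is far easier to read and edit by hand than the equivalent XML, which is the argument the proposal makes.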

  8. Histoimmunogenetics Markup Language 1.0: Reporting next generation sequencing-based HLA and KIR genotyping.

    Science.gov (United States)

    Milius, Robert P; Heuer, Michael; Valiga, Daniel; Doroschak, Kathryn J; Kennedy, Caleb J; Bolon, Yung-Tsi; Schneider, Joel; Pollack, Jane; Kim, Hwa Ran; Cereb, Nezih; Hollenbach, Jill A; Mack, Steven J; Maiers, Martin

    2015-12-01

    We present an electronic format for exchanging data for HLA and KIR genotyping with extensions for next-generation sequencing (NGS). This format addresses NGS data exchange by refining the Histoimmunogenetics Markup Language (HML) to conform to the proposed Minimum Information for Reporting Immunogenomic NGS Genotyping (MIRING) reporting guidelines (miring.immunogenomics.org). Our refinements of HML include two major additions. First, NGS is supported by new XML structures to capture additional NGS data and metadata required to produce a genotyping result, including analysis-dependent (dynamic) and method-dependent (static) components. A full genotype, consensus sequence, and the surrounding metadata are included directly, while the raw sequence reads and platform documentation are externally referenced. Second, genotype ambiguity is fully represented by integrating Genotype List Strings, which use a hierarchical set of delimiters to represent allele and genotype ambiguity in a complete and accurate fashion. HML also continues to enable the transmission of legacy methods (e.g. sequence-specific oligonucleotide, sequence-specific priming, and Sequence Based Typing (SBT)), adding features such as allowing multiple group-specific sequencing primers, and fully leveraging techniques that combine multiple methods to obtain a single result, such as SBT integrated with NGS.
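
    Genotype List Strings encode ambiguity with a hierarchy of delimiters: '/' between ambiguous alleles, '+' between the two alleles of a genotype, and '|' between alternative genotypes (the full grammar also has '~' for haplotypes and '^' between loci). A minimal parser for that three-level subset, using example alleles:

```python
def parse_gl(gl):
    """Parse a GL String subset: genotypes separated by '|', alleles within a
    genotype by '+', and ambiguous alleles by '/'. Returns a list of
    genotypes, each a list of allele-ambiguity lists."""
    return [[chrom.split("/") for chrom in genotype.split("+")]
            for genotype in gl.split("|")]

# Two alternative genotypes; the first has an ambiguous first allele.
gl = "HLA-A*01:01/HLA-A*01:02+HLA-A*24:02|HLA-A*01:03+HLA-A*24:02"
parsed = parse_gl(gl)
print(len(parsed), parsed[0][0])
```

    Because the delimiters are strictly hierarchical, the whole ambiguity structure survives a round trip through plain string splitting and joining.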

  9. Nonintrusive iris image acquisition system based on a pan-tilt-zoom camera and light stripe projection

    Science.gov (United States)

    Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie

    2009-03-01

    Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications. This is mainly due to user inconvenience during the image acquisition phase. Specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom to move in a standing posture, and capture of good-quality iris images in an acceptable time. The proposed system makes the following three contributions compared with previous works: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection, (2) the iris location in the large capture volume is found quickly through 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast using the 3-D position of the face estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user.

  10. The development of MML (Medical Markup Language) version 3.0 as a medical document exchange format for HL7 messages.

    Science.gov (United States)

    Guo, Jinqiu; Takada, Akira; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki

    2004-12-01

    Medical Markup Language (MML), as a set of standards, has been developed over the last 8 years to allow the exchange of medical data between different medical information providers. MML Version 2.21 used XML as a metalanguage and was announced in 1999. In 2001, MML was updated to Version 2.3, which contained 12 modules. The latest version, Version 3.0, is based on the HL7 Clinical Document Architecture (CDA). During the development of this new version, the structure of MML Version 2.3 was analyzed, subdivided into several categories, and redefined so that the information defined in MML could be described in HL7 CDA Level One. As a result of this development, it has become possible to exchange MML Version 3.0 medical documents via HL7 messages.

  11. Pricing of brand extensions based on perceptions of brand equity

    Directory of Open Access Journals (Sweden)

    Panagiotis Arsenos

    2018-04-01

    The paper explores the role of brand equity in pricing hypothetical brand extensions. Companies tend to use different pricing techniques for their products, and their pricing decisions are based on many factors, including the image and category fit of the product with the company's existing image and products. Brand extensions are usually investigated from a consumer perspective, focusing on the attitude toward the extension; however, it is essential to understand the corporate decision-making process regarding pricing. Exploring this matter using quantitative research methods, the study provides empirical evidence that companies that have invested heavily in marketing actions in the past and have built strong brand equity over time show flexibility in the mark-up during the cost decision-making process for hypothetical brand extensions. Variations in mark-up percentages are also observed when there is a difference in the image and category fit of the extension to the original brand. Moreover, companies characterized by greater brand equity exhibited greater flexibility in the mark-up percentages, even for low-fit extensions.

  12. Development and comparison of projection and image space 3D nodule insertion techniques

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan

    2016-04-01

    This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. 24 physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques: projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to those of the real nodules (R2 values were all ≥0.97 for both insertion techniques). These data imply that these techniques can confidently be used as a means of inserting virtual nodules in CT datasets. These techniques can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
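
    Image-space insertion amounts to blurring a lesion model with the system response and adding it into the reconstructed image. A toy NumPy sketch with a Gaussian stand-in for the measured MTF (image sizes and intensities are invented):

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered 2-D Gaussian, a toy stand-in for the system PSF/MTF."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    g = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def insert_nodule(slice_img, nodule, center, sigma=1.2):
    """Blur the nodule model by the PSF (FFT convolution) and add it
    into the slice at `center`: a minimal image-space insertion."""
    psf = gaussian_psf(nodule.shape, sigma)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(nodule) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    out = slice_img.astype(np.float64).copy()
    r, c = center
    h, w = nodule.shape
    out[r:r + h, c:c + w] += blurred
    return out

slice_img = np.zeros((64, 64))
nodule = np.zeros((15, 15))
nodule[5:10, 5:10] = 200.0          # crude 5x5 "nodule" in HU-like units
out = insert_nodule(slice_img, nodule, center=(20, 20))
print(float(out.sum()))             # total signal is preserved by the blur
```

    Projection-based insertion instead forward-projects the nodule model into each raw projection before reconstruction, so the nodule passes through the same reconstruction pipeline as the anatomy.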

  13. A general XML schema and SPM toolbox for storage of neuro-imaging results and anatomical labels.

    Science.gov (United States)

    Keator, David Bryant; Gadde, Syam; Grethe, Jeffrey S; Taylor, Derek V; Potkin, Steven G

    2006-01-01

    With the increased frequency of multisite, large-scale collaborative neuro-imaging studies, the need for a general, self-documenting framework for the storage and retrieval of activation maps and anatomical labels becomes evident. To address this need, we have developed an extensible markup language (XML) schema and associated tools for the storage of neuro-imaging activation maps and anatomical labels. This schema, as part of the XML-based Clinical Experiment Data Exchange (XCEDE) schema, provides storage capabilities for analysis annotations, activation threshold parameters, and cluster- and voxel-level statistics. Activation parameters contain information describing the threshold, degrees of freedom, FWHM smoothness, search volumes, voxel sizes, expected voxels per cluster, and expected number of clusters in the statistical map. Cluster and voxel statistics can be stored along with the coordinates, threshold, and anatomical label information. Multiple threshold types can be documented for a given cluster or voxel along with the uncorrected and corrected probability values. Multiple atlases can be used to generate anatomical labels, which are stored for each significant voxel or cluster. Additionally, a toolbox for Statistical Parametric Mapping software (http://www.fil.ion.ucl.ac.uk/spm/) was created to capture the results from activation maps using the XML schema, supporting both the SPM99 and SPM2 versions (http://nbirn.net/Resources/Users/Applications/xcede/SPM_XMLTools.htm). Support for anatomical labeling is available via the Talairach Daemon (http://ric.uthscsa.edu/projects/talairachdaemon.html) and Automated Anatomical Labeling (http://www.cyceron.fr/freeware/).

  14. Scanned Image Projection System Employing Intermediate Image Plane

    Science.gov (United States)

    DeJong, Christian Dean (Inventor); Hudman, Joshua M. (Inventor)

    2014-01-01

    In an imaging system, a spatial light modulator is configured to produce images by scanning a plurality of light beams. A first optical element is configured to cause the plurality of light beams to converge along an optical path defined between the first optical element and the spatial light modulator. A second optical element is disposed between the spatial light modulator and a waveguide. The first optical element and the spatial light modulator are arranged such that an image plane is created between the spatial light modulator and the second optical element. The second optical element is configured to collect the diverging light from the image plane and collimate it. The second optical element then delivers the collimated light to a pupil at an input of the waveguide.

  15. Methods of X-ray CT image reconstruction from few projections; Methodes de reconstruction d'images a partir d'un faible nombre de projections en tomographie par rayons X

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H.

    2011-10-24

    To improve the safety (low dose) and the productivity (fast acquisition) of an X-ray CT system, we want to reconstruct a high quality image from a small number of projections. The classical reconstruction algorithms generally fail since the reconstruction procedure is unstable and suffers from artifacts. A new approach based on the recently developed 'Compressed Sensing' (CS) theory assumes that the unknown image is in some sense 'sparse' or 'compressible', and the reconstruction is formulated through a nonlinear optimization problem (TV/l1 minimization) by enhancing the sparsity. Using the pixel (or voxel in 3D) as the basis, to apply the CS framework in CT one usually needs a 'sparsifying' transform, and combines it with the 'X-ray projector' which acts on the pixel image. In this thesis, we have adapted a 'CT-friendly' radial basis of Gaussian family called 'blob' to the CS-CT framework. The blob has better space-frequency localization properties than the pixel, and many operations, such as the X-ray transform, can be evaluated analytically and are highly parallelizable (on GPU platform). Compared to the classical Kaiser-Bessel blob, the new basis has a multi-scale structure: an image is the sum of dilated and translated radial Mexican hat functions. The typical medical objects are compressible under this basis, so the sparse representation system used in the ordinary CS algorithms is no longer needed. 2D simulations show that the existing TV and l1 algorithms are more efficient and the reconstructions have better visual quality than the equivalent approach based on the pixel or wavelet basis. The new approach has also been validated on 2D experimental data, where we have observed that in general the number of projections can be reduced to about 50%, without compromising the image quality. (author)
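
The multi-scale basis can be sketched directly: an image is a sum of dilated and translated radial Mexican hat functions. A minimal numpy version with arbitrary blob placements (a toy illustration of the representation, not the thesis's analytic X-ray transform machinery):

```python
import numpy as np

def mexican_hat(r, scale=1.0):
    """Radial Mexican hat (negative normalized second derivative of a
    Gaussian), dilated by `scale`; equals 1 at r = 0 and crosses zero
    at r = scale."""
    u = (r / scale) ** 2
    return (1.0 - u) * np.exp(-u / 2.0)

def blob_image(shape, blobs):
    """Synthesize an image as a sum of translated/dilated radial blobs.
    `blobs` is a list of (cy, cx, scale, amplitude) tuples."""
    yy, xx = np.indices(shape, dtype=float)
    img = np.zeros(shape)
    for cy, cx, scale, amp in blobs:
        r = np.hypot(yy - cy, xx - cx)
        img += amp * mexican_hat(r, scale)
    return img
```

In the CS setting, the reconstruction would solve for the blob coefficients (the amplitudes) under an l1 penalty instead of synthesizing them directly.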

  16. Minimizing image noise in on-board CT reconstruction using both kilovoltage and megavoltage beam projections

    International Nuclear Information System (INIS)

    Zhang Junan; Yin Fangfang

    2007-01-01

    We studied a recently proposed aggregated CT reconstruction technique which combines the complementary advantages of kilovoltage (kV) and megavoltage (MV) x-ray imaging. Various phantoms were imaged to study the effects of beam orientations and the geometry of the imaging object on the image quality of the reconstructed CT. It was shown that the quality of the aggregated CT was correlated with both kV and MV beam orientations, and the degree of this correlation depended upon the geometry of the imaging object. The results indicated that the optimal orientations were those where kV beams pass through the thinner portion and MV beams pass through the thicker portion of the imaging object. A special preprocessing procedure was also developed to perform contrast conversions between kV and MV information prior to image reconstruction. The performance of two reconstruction methods, one filtered backprojection method and one iterative method, was compared. The effects of projection number, beam truncation, and contrast conversion on the CT image quality were investigated.

  17. Improved detection of chronic myocardial infarction with Fourier amplitude and phase imaging in two projections

    International Nuclear Information System (INIS)

    Akins, E.W.; Scott, E.A.; Williams, C.M.

    1987-01-01

    Twenty-seven patients with 33 chronic myocardial infarctions underwent MR imaging and radionuclide ventriculography at rest. The radionuclide ventriculographs, in left anterior oblique (LAO) and left posterior oblique (LPO) projections, were analyzed by two independent observers by visual inspection and combined Fourier-transformed amplitude and phase imaging. Only 15 (45%) of the 33 infarctions were detected by visual inspection, but 21 (64%) were detected on the LAO Fourier-transformed images alone. Thirty (91%) were detected by using both LAO and LPO Fourier-transformed images. On MR imaging, 28 (85%) of the myocardial infarctions appeared as areas of focal wall thinning. Combined Fourier-transformed amplitude and phase imaging in both LAO and LPO views discloses more myocardial infarctions than visual inspection or LAO Fourier-transformed images alone because inferior infarctions, which are frequently missed in the LAO view, are easily seen in the LPO view.
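
Fourier amplitude and phase images of this kind are computed pixel by pixel from the gated cine frames: the magnitude and angle of the first temporal harmonic over the cardiac cycle. A minimal sketch on synthetic data (not clinical processing):

```python
import numpy as np

def fourier_amp_phase(frames):
    """First-harmonic amplitude and phase images from a gated cine sequence.

    `frames` has shape (T, H, W): one activity sample per pixel for each of
    the T frames spanning one cardiac cycle. Akinetic regions show up as low
    amplitude; late-contracting regions show up as shifted phase.
    """
    spectrum = np.fft.fft(frames, axis=0)
    first = spectrum[1]                  # first temporal harmonic, per pixel
    return np.abs(first), np.angle(first)
```

For a pure cosine time course cos(2*pi*t/T + phi), the first harmonic has magnitude T/2 and phase phi, which the test below checks.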

  18. Fast backprojection-based reconstruction of spectral-spatial EPR images from projections with the constant sweep of a magnetic field.

    Science.gov (United States)

    Komarov, Denis A; Hirata, Hiroshi

    2017-08-01

    In this paper, we introduce a procedure for the reconstruction of spectral-spatial EPR images using projections acquired with the constant sweep of a magnetic field. The application of a constant field-sweep and a predetermined data sampling rate simplifies the requirements for EPR imaging instrumentation and facilitates the backprojection-based reconstruction of spectral-spatial images. The proposed approach was applied to the reconstruction of a four-dimensional numerical phantom and to actual spectral-spatial EPR measurements. Image reconstruction using projections with a constant field-sweep was three times faster than the conventional approach with the application of a pseudo-angle and a scan range that depends on the applied field gradient. Spectral-spatial EPR imaging with a constant field-sweep for data acquisition only slightly reduces the signal-to-noise ratio or functional resolution of the resultant images and can be applied together with any common backprojection-based reconstruction algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Visualization Development of the Ballistic Threat Geospatial Optimization

    Science.gov (United States)

    2015-07-01

    topographic globes, Keyhole Markup Language (KML), and Collada files. World Wind gives the user the ability to import 3-D models and navigate...present. After the first-person view window is closed, the images stored in memory are then converted to a QuickTime movie (.MOV). The video will be...processing unit; HPC: high-performance computing; JOGL: Java implementation of OpenGL; KML: Keyhole Markup Language; NASA: National Aeronautics and Space
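
For reference, a Keyhole Markup Language document is plain XML. A minimal placemark (the name and coordinates here are hypothetical) can be generated as follows:

```python
def placemark_kml(name, lon, lat, alt=0.0):
    """Build a minimal KML document with one Placemark.

    Note the KML coordinate order: longitude,latitude,altitude.
    """
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        '  <Placemark>\n'
        f'    <name>{name}</name>\n'
        f'    <Point><coordinates>{lon},{lat},{alt}</coordinates></Point>\n'
        '  </Placemark>\n'
        '</kml>\n'
    )
```

Viewers such as NASA World Wind or Google Earth can load a file with this structure directly.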

  20. Effect of the number of projections on inverse Radon transform based image reconstruction by using filtered back-projection for parallel beam transmission tomography

    International Nuclear Information System (INIS)

    Qureshi, S.A.; Mirza, S.M.; Arif, M.

    2007-01-01

    This paper presents the effect of the number of projections on inverse Radon transform (IRT) estimation using the filtered back-projection (FBP) technique for parallel beam transmission tomography. The head phantom and the lung phantom have been used in this work. The filters used in this study include the Ram-Lak, Shepp-Logan, Cosine, Hamming and Hanning filters. The slices have been reconstructed with an increasing number of projections through parallel beam transmission tomography, keeping the projections uniformly distributed. The Euclidean and mean squared errors and the peak signal-to-noise ratio (PSNR) have been analyzed for their sensitivity as functions of the number of projections. It has been found that image quality improves with the number of projections, but at the cost of computation time. The error is minimized, giving the best approximation of the inverse Radon transform (IRT), as the number of projections is increased. The value of PSNR has been found to increase from 8.20 to 24.53 dB as the number of projections is raised from 5 to 180 for the head phantom. (author)
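
The PSNR figure of merit used here is straightforward to compute. A small helper, assuming images are compared against the phantom's peak value:

```python
import numpy as np

def psnr(reference, reconstruction, peak=None):
    """Peak signal-to-noise ratio, in dB, of a reconstruction against a
    reference phantom: 10*log10(peak^2 / MSE)."""
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstruction, dtype=float)
    mse = np.mean((ref - rec) ** 2)
    if peak is None:
        peak = ref.max()                 # default: the phantom's peak value
    return 10.0 * np.log10(peak ** 2 / mse)
```

Tracking this value while sweeping the projection count reproduces the kind of quality-versus-views curve the paper reports.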

  1. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    Science.gov (United States)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more flexible sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified that the proposed reconstruction method offers promising capabilities compared with conventional regularization.
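
L1-norm minimization in such programs is typically handled through a shrinkage (soft-thresholding) step, the proximal operator of the L1 norm. A generic sketch of that building block (not the authors' exact solver):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrink each coefficient toward zero
    by t, setting small coefficients exactly to zero (the sparsity step)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

In a proximal-point scheme, this step alternates with a data-consistency update and a positivity projection on the image.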

  2. Public financing of research projects in Poland – its image and consequences?

    Directory of Open Access Journals (Sweden)

    Feldy Marzena

    2016-12-01

    Full Text Available Both the size of appropriations and their distribution have had a profound impact on the shape and activities of the science sector. The creation of a fair system for distributing public resources to research, one that also facilitates the effective implementation of the pursued science policy goals, represents a major challenge. The issue of determining the right proportions of the individual distribution channels remains critical. Although this task is the responsibility of the State, cooperation with the scientific community in this respect is desirable. The implementation of solutions that raise the concerns of scientists leads to system instability and reduced effectiveness, manifested, among other things, in lower indicators of scientific excellence and innovation in the country. These observations are pertinent to Poland, where the manner in which scientific institutes operate was changed under the 2009–2011 reform. A neoliberal operating model based on competitiveness and the rewarding of top-rated scientific establishments and scientists was implemented. In light of these facts, research that provides information on how the implemented changes are perceived by the scientific community seems appropriate. The aim of this article is, in particular, to present how the project model of financing laid down under the reform is perceived and what kind of image it has acquired among Polish scientists. In order to gain a comprehensive picture of the situation, both the rational and the emotional image were subject to analysis. The conclusions regarding the perception of the project model were drawn on the basis of empirical material collected in a qualitative study, the specifics of which are presented in the chapter on methodology. Prior to that, the author discusses the basic models for the distribution of state support for science and characterises the most salient features of the

  3. A study on projection angles for an optimal image of PNS water's view on children

    International Nuclear Information System (INIS)

    Son, Sang Hyuk; Song, Young Geun; Kim, Sung Kyu; Hong, Sang Woo; Kim, Je Bong

    2007-01-01

    This study calculates the proper angle for an optimal image of the PNS Water's view in children, comparing and analyzing the PNS Water's projection angles of children and adults at each age. The study randomly selected 50 patients who visited the Medical Center from January to May 2005 and examined the incidence path of the central ray, taking PNS Water's and skull trans-lateral views in the Water's filming position while attaching a lead ball marker on the orbit, EAM, and acanthion of each patient's skull. We then calculated the incidence angle (angle A) of the line connecting the OML and the petrous ridge to the inferior margin of the maxilla on the patients' skull images, following the incidence path of the central ray. Finally, we analyzed the two graphs by age, obtaining the patients' ideal images in the PNS Water's filming position taken with a digital camera and calculating the angle (angle B) between the OML and the IP (image plate). The angle between the OML and the IP is about 43° in 4-year-old children, which is higher than 37°; as age increases the angle decreases, reaching 37° around 30 years of age. This is similar to the maxillary growth pattern. A better-quality Water's image can therefore be obtained for children if the projection angle is adjusted, taking maxillary growth at each age into account.

  4. Computerized tomographic simulation compared with clinical mark-up in palliative radiotherapy: A prospective study

    International Nuclear Information System (INIS)

    Haddad, Peiman; Cheung, Fred; Pond, Gregory; Easton, Debbie; Cops, Frederick; Bezjak, Andrea; McLean, Michael; Levin, Wilfred; Billingsley, Susan; Williams, Diane; Wong, Rebecca

    2006-01-01

    Purpose To evaluate the impact of computed tomographic (CT) planning in comparison to clinical mark-up (CM) for palliative radiation of chest wall metastases. Methods and Materials In patients treated with CM for chest wall bone metastases (without conventional simulation/fluoroscopy), two consecutive planning CT scans were acquired, with and without an external marker delineating the CM treatment field. The two sets of scans were fused for evaluation of clinical tumor volume (CTV) coverage by the CM technique. Under-coverage was defined as the proportion of the CTV not covered by the CM 80% isodose. Results Twenty-one treatments (ribs 17, sternum 2, and scapula 2) formed the basis of our study. For technical reasons, comparable data between CM and CT plans were available for only 19 treatments. CM resulted in a mean CTV under-coverage of 36%. Eleven sites (58%) had an under-coverage of >20%. The mean volume of normal tissue receiving ≥80% of the dose was 5.4% in CM and 9.3% in CT plans (p = 0.017). Based on dose-volume histogram comparisons, CT planning resulted in a change of treatment technique from direct apposition to a tangential pair in 7 of 19 cases. Conclusions CT planning demonstrated a 36% under-coverage of the CTV with CM of rib and chest wall metastases.
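
The under-coverage metric (proportion of the CTV below the 80% isodose) reduces to simple mask arithmetic once the plans are fused. A sketch with hypothetical normalized doses:

```python
import numpy as np

def under_coverage(ctv_mask, dose, threshold=0.8):
    """Fraction of the clinical target volume receiving less than `threshold`
    of the prescription dose (dose array normalized so prescription = 1.0)."""
    ctv = np.asarray(ctv_mask, dtype=bool)
    low = np.asarray(dose, dtype=float) < threshold
    return np.count_nonzero(ctv & low) / np.count_nonzero(ctv)
```

Applied voxel-wise to the fused CT, this yields the per-treatment percentages reported above.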

  5. Reconstruction of computed tomographic image from a few x-ray projections by means of accelerative gradient method

    International Nuclear Information System (INIS)

    Kobayashi, Fujio; Yamaguchi, Shoichiro

    1982-01-01

    A method for the reconstruction of computed tomographic images was proposed to reduce the X-ray exposure dose. The method reconstructs images from a small number of X-ray projections by means of an accelerative gradient method. The computational procedures are described. The algorithm is simple, the convergence of the computation is fast, and the required memory capacity is small. Numerical simulation was carried out to confirm the validity of this method. A sample of simple shape was considered, projection data were given, and the images were reconstructed from 6 views. Good results were obtained, and the method is considered to be useful. (Kato, T.)
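
A plain (non-accelerated) gradient method on the same least-squares objective illustrates the idea; the accelerative variant in the paper differs in how it chooses its steps. Here A is a small dense stand-in for the projection operator:

```python
import numpy as np

def gradient_reconstruct(A, b, n_iter=200, step=None):
    """Solve min ||Ax - b||^2 by plain gradient descent.

    A simplified stand-in for the paper's accelerative gradient scheme:
    the default step 1/||A||_2^2 guarantees convergence for least squares.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # inverse Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= step * (A.T @ (A @ x - b))          # gradient of 0.5||Ax - b||^2
    return x
```

With few views, A is underdetermined in practice, which is why acceleration and good stopping rules matter.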

  6. Grid Databases for Shared Image Analysis in the MammoGrid Project

    CERN Document Server

    Amendolia, S R; Hauer, T; Manset, D; McClatchey, R; Odeh, M; Reading, T; Rogulin, D; Schottlander, D; Solomonides, T

    2004-01-01

    The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation and b) the ability to service real-world clinician queries across a distributed and federated database. The MammoGrid project will prove the viability of the Grid by harnessing its power to enable radiologists from geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK

  7. Imaging Seismic Source Variations Using Back-Projection Methods at El Tatio Geyser Field, Northern Chile

    Science.gov (United States)

    Kelly, C. L.; Lawrence, J. F.

    2014-12-01

    During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50x50m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers back to their sources, assuming linear source to receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapsed images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and

  8. Semi-automatic Citation Correction with Lemon8-XML

    Directory of Open Access Journals (Sweden)

    MJ Suhonos

    2009-03-01

    Full Text Available The Lemon8-XML software application, developed by the Public Knowledge Project (PKP, provides an open-source, computer-assisted interface for reliable citation structuring and validation. Lemon8-XML combines citation parsing algorithms with freely-available online indexes such as PubMed, WorldCat, and OAIster. Fully-automated markup of entire bibliographies may be a genuine possibility using this approach. Automated markup of citations would increase bibliographic accuracy while reducing copyediting demands.

  9. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.

  10. SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, C; Qi, H; Chen, Z; Wu, S; Xu, Y; Zhou, L [Southern Medical University, Guangzhou, Guangdong (China)

    2016-06-15

    Purpose: In a computed tomography (CT) system, CT images with ring artifacts are reconstructed when some adjacent detector bins do not work. The ring artifacts severely degrade CT image quality. We present a CT ring artifact reduction method based on projection data correction, aiming to estimate the missing projection data accurately and thus remove the ring artifacts from the CT images. Methods: The method consists of ten steps: 1) Identification of the abnormal pixel line in the projection sinogram; 2) Linear interpolation within the pixel line of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) Filtering of the FBP image using a mean filter; 5) Forward projection of the filtered FBP image; 6) Subtraction of the forward projection from the original projection; 7) Linear interpolation of the abnormal pixel line area in the subtraction projection; 8) Addition of the interpolated subtraction projection to the forward projection; 9) FBP reconstruction using the corrected projection data; 10) Return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We have studied the impact of the number of dead detector bins on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256 by 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the fraction of dead bins is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the fraction of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
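
The linear-interpolation step of such a correction loop (filling dead detector bins across each projection view) can be sketched as follows; the sinogram layout (views in rows, detector bins in columns) is an assumption for illustration:

```python
import numpy as np

def repair_dead_bins(sinogram, dead_bins):
    """Linearly interpolate dead detector bins in every projection view.

    `sinogram` has shape (n_views, n_bins); `dead_bins` lists the column
    indices of the non-working bins. Each row is repaired independently
    using np.interp over the surviving bins.
    """
    sino = np.array(sinogram, dtype=float)
    dead = np.asarray(dead_bins)
    good = np.setdiff1d(np.arange(sino.shape[1]), dead)
    for view in sino:                       # rows are views of np.array copy
        view[dead] = np.interp(dead, good, view[good])
    return sino
```

In the full method, this interpolation is applied first to the raw sinogram and then, in each iteration, to the residual sinogram before re-reconstruction.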

  12. The Systems Biology Markup Language (SBML) Level 3 Package: Layout, Version 1 Core.

    Science.gov (United States)

    Gauges, Ralph; Rost, Ursula; Sahle, Sven; Wengler, Katja; Bergmann, Frank Thomas

    2015-09-04

    Many software tools provide facilities for depicting reaction network diagrams in a visual form. Two aspects of such a visual diagram can be distinguished: the layout (i.e.: the positioning and connections) of the elements in the diagram, and the graphical form of the elements (for example, the glyphs used for symbols, the properties of the lines connecting them, and so on). For software tools that also read and write models in SBML (Systems Biology Markup Language) format, a common need is to store the network diagram together with the SBML representation of the model. This in turn raises the question of how to encode the layout and the rendering of these diagrams. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding diagrams, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The Layout package for SBML Level 3 adds the necessary features to SBML so that diagram layouts can be encoded in SBML files, and a companion package called SBML Rendering specifies how the graphical rendering of elements can be encoded. The SBML Layout package is based on the principle that reaction network diagrams should be described as representations of entities such as species and reactions (with direct links to the underlying SBML elements), and not as arbitrary drawings or graphs; for this reason, existing languages for the description of vector drawings (such as SVG) or general graphs (such as GraphML) cannot be used.

  13. Improving image quality in Electrical Impedance Tomography (EIT) using the Projection Error Propagation-based Regularization (PEPR) technique: A simulation study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-03-01

    Full Text Available A Projection Error Propagation-based Regularization (PEPR) method is proposed to improve reconstructed image quality in Electrical Impedance Tomography (EIT). A projection error is produced by the misfit between the calculated and measured data in the reconstruction process. The variation of the projection error is integrated with the response matrix in each iteration, and the reconstruction is carried out in EIDORS. The PEPR method is studied with simulated boundary data for different inhomogeneity geometries. Simulated results demonstrate that the PEPR technique improves image reconstruction precision in EIDORS and hence can be successfully implemented to increase reconstruction accuracy in EIT. doi:10.5617/jeb.158. J Electr Bioimp, vol. 2, pp. 2-12, 2011

  14. Diaphragm motion quantification in megavoltage cone-beam CT projection images.

    Science.gov (United States)

    Chen, Mingqing; Siochi, R Alfredo

    2010-05-01

    To quantify diaphragm motion in megavoltage (MV) cone-beam computed tomography (CBCT) projections. User-identified ipsilateral hemidiaphragm apex (IHDA) positions in two full exhale and inhale frames were used to create bounding rectangles in all other frames of a CBCT scan. The bounding rectangle was enlarged to create a region of interest (ROI). ROI pixels were associated with a cost function: the product of image gradients and a gradient direction matching function for an ideal hemidiaphragm determined from 40 training sets. A dynamic Hough transform (DHT) models a hemidiaphragm as a contour made of two parabola segments with a common vertex (the IHDA). The images within the ROIs are transformed into Hough space, where a contour's Hough value is the sum of the cost function over all contour pixels. Dynamic programming finds the optimal trajectory of the common vertex in Hough space subject to motion constraints between frames, and an active contour model further refines the result. Interpolated ray tracing converts the positions to room coordinates. Root-mean-square (RMS) distances between these positions and those resulting from an expert's identification of the IHDA were determined for 21 Siemens MV CBCT scans. Computation time on a 2.66 GHz CPU was 30 s. The average craniocaudal RMS error was 1.38 +/- 0.67 mm. While much larger errors occurred in a few near-sagittal frames on one patient's scans, adjustments to algorithm constraints corrected them. The DHT-based algorithm can compute IHDA trajectories immediately prior to radiation therapy on a daily basis using localization MV CBCT projection data. This has potential for calibrating external motion surrogates against diaphragm motion.

  15. Imaging Local Polarization in Ferroelectric Thin Films by Coherent X-Ray Bragg Projection Ptychography

    Science.gov (United States)

    Hruszkewycz, S. O.; Highland, M. J.; Holt, M. V.; Kim, Dongjin; Folkman, C. M.; Thompson, Carol; Tripathi, A.; Stephenson, G. B.; Hong, Seungbum; Fuoss, P. H.

    2013-04-01

    We used x-ray Bragg projection ptychography (BPP) to map spatial variations of ferroelectric polarization in thin film PbTiO3, which exhibited a striped nanoscale domain pattern on a high-miscut (001) SrTiO3 substrate. By converting the reconstructed BPP phase image to picometer-scale ionic displacements in the polar unit cell, a quantitative polarization map was made that was consistent with other characterization. The spatial resolution of 5.7 nm demonstrated here establishes BPP as an important tool for nanoscale ferroelectric domain imaging, especially in complex environments accessible with hard x rays.

  16. IMAGE CONSTRUCTION TO AUTOMATION OF PROJECTIVE TECHNIQUES FOR PSYCHOPHYSIOLOGICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Natalia Pavlova

    2018-04-01

    Full Text Available This article presents an approach to automating the assessment, in psychological analysis, of drawings that a person composes from an available set of templates. Automation would make it possible to reveal disturbances of a person's mentality more effectively. In particular, such a solution can be used in work with children, who have well-developed figurative thinking but are not yet capable of clearly articulating their thoughts and experiences. To automate testing with a projective method, we construct an interactive environment for visualisation of compositions of several images and then analyse

  17. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction.

    Science.gov (United States)

    Nikazad, T; Davidi, R; Herman, G T

    2012-03-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a speed-up of more than an order of magnitude, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are shown to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data.
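As a minimal illustration of the (unaccelerated) row-action projection idea underlying such methods, here is a classical Kaczmarz sweep on a small consistent system; this is a sketch of the general principle, not the accelerated block-iterative algorithm of the paper:

```python
import numpy as np

def kaczmarz(A, b, sweeps=200, relax=1.0):
    """Row-action projection: cyclically project the iterate onto the
    hyperplane of each equation a_i . x = b_i (relax=1 gives a full
    orthogonal projection onto each hyperplane)."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# small consistent system with solution [2, 1]
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
x = kaczmarz(A, b)
print(np.round(x, 6))  # -> [2. 1.]
```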

  18. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    Science.gov (United States)

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
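The document model's key idea, a structured component whose elements are linked back to portions of the original text, can be sketched with Python's standard library; the element names and offsets below are illustrative inventions, not the DTD defined in the paper:

```python
import xml.etree.ElementTree as ET

report_text = "Heart size is enlarged. Lungs are clear."

# structured component: findings linked to character offsets in the
# original report text (element and attribute names are hypothetical)
doc = ET.Element("report")
ET.SubElement(doc, "text").text = report_text
structured = ET.SubElement(doc, "structured")
ET.SubElement(structured, "finding",
              {"concept": "cardiomegaly", "start": "0", "end": "23"})

# a query over the structured component retrieves the linked span,
# so the salient text can be highlighted for manual review
for finding in doc.iter("finding"):
    s, e = int(finding.get("start")), int(finding.get("end"))
    print(finding.get("concept"), "->", report_text[s:e])
```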

  19. The Yosemite Extreme Panoramic Imaging Project: Monitoring Rockfall in Yosemite Valley with High-Resolution, Three-Dimensional Imagery

    Science.gov (United States)

    Stock, G. M.; Hansen, E.; Downing, G.

    2008-12-01

    Yosemite Valley experiences numerous rockfalls each year, with over 600 rockfall events documented since 1850. However, monitoring rockfall activity has proved challenging without high-resolution "basemap" imagery of the Valley walls. The Yosemite Extreme Panoramic Imaging Project, a partnership between the National Park Service and xRez Studio, has created an unprecedented image of Yosemite Valley's walls by utilizing gigapixel panoramic photography, LiDAR-based digital terrain modeling, and three-dimensional computer rendering. Photographic capture was accomplished by 20 separate teams shooting from key overlapping locations throughout Yosemite Valley. The shots were taken simultaneously in order to ensure uniform lighting, with each team taking over 500 overlapping shots from each vantage point. Each team's shots were then assembled into 20 gigapixel panoramas. In addition, all 20 gigapixel panoramas were projected onto a 1 meter resolution digital terrain model in three-dimensional rendering software, unifying Yosemite Valley's walls into a vertical orthographic view. The resulting image reveals the geologic complexity of Yosemite Valley in high resolution and represents one of the world's largest photographic captures of a single area. Several rockfalls have already occurred since image capture, and repeat photography of these areas clearly delineates rockfall source areas and failure dynamics. Thus, the imagery has already proven to be a valuable tool for monitoring and understanding rockfall in Yosemite Valley. It also sets a new benchmark for the quality of information that a photographic image, enabled by powerful new imaging technology, can provide for the earth sciences.

  20. Compositional images from the Diffraction Enhanced Imaging technique

    International Nuclear Information System (INIS)

    Hasnah, M.O.; Zhong, Z.; Parham, C.; Zhang, H.; Chapman, D.

    2007-01-01

    Diffraction Enhanced Imaging (DEI) derives X-ray contrast from absorption, refraction, and extinction. While the refraction angle image of DEI represents the gradient of the projected mass density of the object, the absorption image measures the projected attenuation (μt) of an object. Using a simple integral method it has been shown that a mass density image (ρt) can be obtained from the refraction angle image. It is then a simple matter to develop a compositional image by dividing these two images to create a μ/ρ image. The μ/ρ is a fundamental property of a material and is therefore useful for identifying the composition of an object. In projection X-ray imaging the μ/ρ image identifies the integrated composition of the elements along the beam path. When applied to DEI computed tomography (CT), the image identifies the composition in each voxel. This method presents a new type of spectroscopy based in radiography. We present the method of obtaining the compositional image, the results of experiments in which we verify the method with known standards, and an application of the method to breast cancer imaging.
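The compositional step, dividing the projected attenuation by the projected mass density pixel by pixel, can be sketched as follows; this is a minimal illustration with invented toy values, masking empty pixels to avoid division by zero:

```python
import numpy as np

def mu_over_rho(attenuation, density, eps=1e-6):
    """Pixel-wise mu/rho compositional image: the ratio of the projected
    attenuation (mu*t) to the projected mass density (rho*t).
    Pixels with near-zero density are masked out as NaN."""
    out = np.full_like(attenuation, np.nan, dtype=float)
    mask = np.abs(density) > eps
    out[mask] = attenuation[mask] / density[mask]
    return out

# toy 2x2 example: uniform material except one empty (air) pixel
mu_t  = np.array([[0.4, 0.4], [0.4, 0.0]])
rho_t = np.array([[2.0, 2.0], [2.0, 0.0]])
print(mu_over_rho(mu_t, rho_t))  # 0.2 where material is present, NaN in air
```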

  1. The price of surgery: markup of operative procedures in the United States.

    Science.gov (United States)

    Gani, Faiz; Makary, Martin A; Pawlik, Timothy M

    2017-02-01

    Despite cost containment efforts, the price for surgery is not subject to any regulations. We sought to characterize and compare variability in pricing for commonly performed major surgical procedures across the United States. Medicare claims corresponding to eight major surgical procedures (aortic aneurysm repair, aortic valvuloplasty, carotid endarterectomy, coronary artery bypass grafting, esophagectomy, pancreatectomy, liver resection, and colectomy) were identified using the Medicare Provider Utilization and Payment Data Physician and Other Supplier Public Use File for 2013. For each procedure, total charges, Medicare-allowable costs, and total payments were recorded. A procedure-specific markup ratio (MR; ratio of total charges to Medicare-allowable costs) was calculated and compared between procedures and across states. Variation in MR was compared using a coefficient of variation (CoV). Among all providers, the median MR was 3.5 (interquartile range: 3.1-4.0). MR was noted to vary by procedure; ranging from 3.0 following colectomy to 6.0 following carotid endarterectomy (P < 0.001). MR also varied for the same procedure; varying the least after liver resection (CoV = 0.24), while coronary artery bypass grafting pricing demonstrated the greatest variation in MR (CoV = 0.53). Compared with the national average, MR varied by 36% between states ranging from 1.8 to 13.0. Variation in MR was also noted within the same state varying by 15% within the state of Arkansas (CoV = 0.15) compared with 51% within the state of Wisconsin (CoV = 0.51). Significant variation was noted for the price of surgery by procedure as well as between and within different geographical regions. Greater scrutiny and transparency in the price of surgery is required to promote cost containment. Copyright © 2016 Elsevier Inc. All rights reserved.
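The two summary statistics used here, the markup ratio (total charges over Medicare-allowable costs) and the coefficient of variation, are straightforward to compute; the sketch below uses invented provider-level figures purely for illustration:

```python
import numpy as np

def markup_ratio(charges, allowable):
    """Markup ratio (MR): total charges divided by Medicare-allowable costs."""
    return np.asarray(charges, float) / np.asarray(allowable, float)

def coefficient_of_variation(x):
    """CoV = standard deviation / mean (a unitless measure of relative spread)."""
    x = np.asarray(x, float)
    return x.std() / x.mean()

# invented provider-level figures for a single procedure
charges   = [35000.0, 42000.0, 31000.0, 60000.0]
allowable = [10000.0, 12000.0, 10000.0, 12000.0]
mr = markup_ratio(charges, allowable)
print(np.median(mr))                            # -> 3.5
print(round(float(coefficient_of_variation(mr)), 3))
```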

  2. Effects of defect pixel correction algorithms for x-ray detectors on image quality in planar projection and volumetric CT data sets

    International Nuclear Information System (INIS)

    Kuttig, Jan; Steiding, Christian; Hupfer, Martin; Karolczak, Marek; Kolditz, Daniel

    2015-01-01

    In this study we compared various defect pixel correction methods for reducing artifact appearance within projection images used for computed tomography (CT) reconstructions. Defect pixel correction algorithms were examined with respect to their artifact behaviour within planar projection images as well as in volumetric CT reconstructions. We investigated four algorithms: nearest neighbour, linear and adaptive linear interpolation, and a frequency-selective spectral-domain approach. To characterise the quality of each algorithm in planar image data, we inserted line defects of varying widths and orientations into images. The structure preservation of each algorithm was analysed by corrupting and correcting the image of a slit phantom pattern and by evaluating its line spread function (LSF). The noise preservation was assessed by interpolating corrupted flat images and estimating the noise power spectrum (NPS) of the interpolated region. For the volumetric investigations, we examined the structure and noise preservation within a structured aluminium foam, a mid-contrast cone-beam phantom and a homogeneous polyurethane (PUR) cylinder. The frequency-selective algorithm showed the best structure and noise preservation for planar data of the correction methods tested. For volumetric data it still showed the best noise preservation, whereas the structure preservation was outperformed by the linear interpolation. The frequency-selective spectral-domain approach in the correction of line defects is recommended for planar image data, but its abilities within high-contrast volumes are restricted. In that case, the application of a simple linear interpolation might be the better choice to correct line defects within projection images used for CT. (paper)
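Of the four algorithms compared, the simplest, linear interpolation across a defective detector column, can be sketched as follows (a toy illustration, not the study's implementation):

```python
import numpy as np

def correct_line_defect(image, col):
    """Replace a defective detector column by linear interpolation
    between its left and right neighbours (the simplest of the
    correction schemes compared in the study)."""
    out = image.astype(float).copy()
    out[:, col] = 0.5 * (out[:, col - 1] + out[:, col + 1])
    return out

img = np.array([[10, 0, 14],
                [20, 0, 28]], dtype=float)  # column 1 is dead
print(correct_line_defect(img, 1))
# column 1 becomes the mean of its neighbours: 12 and 24
```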

  3. Multidetector CT evaluation of central airways stenoses: Comparison of virtual bronchoscopy, minimal-intensity projection, and multiplanar reformatted images

    Directory of Open Access Journals (Sweden)

    Dinesh K Sundarakumar

    2011-01-01

    Full Text Available Aims: To evaluate the diagnostic utility of virtual bronchoscopy, multiplanar reformatted images, and minimal-intensity projection in assessing airway stenoses. Settings and Design: It was a prospective study involving 150 patients with symptoms of major airway disease. Materials and Methods: Fifty-six patients were selected for analysis based on the detection of major airway lesions on fiber-optic bronchoscopy (FB) or routine axial images. Comparisons were made between axial images, virtual bronchoscopy (VB), minimal-intensity projection (minIP), and multiplanar reformatted (MPR) images using FB as the gold standard. Lesions were evaluated in terms of degree of airway narrowing, distance from carina, length of the narrowed segment and visualization of airway distal to the lesion. Results: MPR images had the highest degree of agreement with FB (Κ = 0.76) in the depiction of degree of narrowing. minIP had the least degree of agreement with FB (Κ = 0.51) in this regard. The distal visualization was best on MPR images (84.2%), followed by axial images (80.7%), whereas FB could visualize the lesions only in 45.4% of the cases. VB had the best agreement with FB in assessing the segment length (Κ = 0.62). Overall there were no statistically significant differences in the measurement of the distance from the carina in the axial, minIP, and MPR images. MPR images had the highest overall degree of confidence, namely, 70.17% (n = 40). Conclusion: Three-dimensional reconstruction techniques were found to improve lesion evaluation compared with axial images alone. The technique of MPR images was the most useful for lesion evaluation and provided additional information useful for surgical and airway interventions in tracheobronchial stenosis. minIP was useful in the overall depiction of airway anatomy.

  4. Automatic tracking of implanted fiducial markers in cone beam CT projection images

    International Nuclear Information System (INIS)

    Marchant, T. E.; Skalski, A.; Matuszewski, B. J.

    2012-01-01

    Purpose: This paper describes a novel method for simultaneous intrafraction tracking of multiple fiducial markers. Although the proposed method is generic and can be adopted for a number of applications including fluoroscopy based patient position monitoring and gated radiotherapy, the tracking results presented in this paper are specific to tracking fiducial markers in a sequence of cone beam CT projection images. Methods: The proposed method is accurate and robust thanks to utilizing the mean shift and random sampling principles, respectively. The performance of the proposed method was evaluated with qualitative and quantitative methods, using data from two pancreatic and one prostate cancer patients and a moving phantom. The ground truth, for quantitative evaluation, was calculated based on manual tracking performed by three observers. Results: The average dispersion of marker position error calculated from the tracking results for pancreas data (six markers tracked over 640 frames, 3840 marker identifications) was 0.25 mm (at isocenter), compared with an average dispersion for the manual ground truth estimated at 0.22 mm. For prostate data (three markers tracked over 366 frames, 1098 marker identifications), the average error was 0.34 mm. The estimated tracking error in the pancreas data was < 1 mm (2 pixels) in 97.6% of cases where nearby image clutter was detected and in 100.0% of cases with no nearby image clutter. Conclusions: The proposed method has accuracy comparable to that of manual tracking and, in combination with the proposed batch postprocessing, superior robustness. Marker tracking in cone beam CT (CBCT) projections is useful for a variety of purposes, such as providing data for assessment of intrafraction motion, target tracking during rotational treatment delivery, motion correction of CBCT, and phase sorting for 4D CBCT.
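The mean-shift principle used for marker localization, iteratively moving a search window to the weighted centroid of the pixels inside it, can be sketched on a toy 2D weight map; this illustrates the principle only, not the authors' tracker:

```python
import numpy as np

def mean_shift_2d(weights, start, radius=2, iters=20):
    """Mean-shift mode seeking on a 2D weight map: repeatedly move the
    window centre to the weighted centroid of pixels within a square
    window of half-width `radius`."""
    ys, xs = np.mgrid[0:weights.shape[0], 0:weights.shape[1]]
    cy, cx = float(start[0]), float(start[1])
    for _ in range(iters):
        win = (np.abs(ys - cy) <= radius) & (np.abs(xs - cx) <= radius)
        w = weights * win
        total = w.sum()
        if total == 0:
            break  # empty window: nothing to shift towards
        ny, nx = (w * ys).sum() / total, (w * xs).sum() / total
        if abs(ny - cy) < 1e-9 and abs(nx - cx) < 1e-9:
            break  # converged to a mode
        cy, cx = ny, nx
    return round(cy), round(cx)

# a single bright blob centred at (6, 7); start the search nearby
img = np.zeros((12, 12))
img[5:8, 6:9] = 1.0
print(mean_shift_2d(img, start=(4, 5)))  # -> (6, 7)
```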

  5. Mammography with and without radiolucent positioning sheets: Comparison of projected breast area, pain experience, radiation dose and technical image quality

    NARCIS (Netherlands)

    Timmers, Janine; ten Voorde, Marloes; van Engen, Ruben E.; van Landsveld-Verhoeven, Cary; Pijnappel, Ruud; Droogh-de Greve, Kitty; den Heeten, Gerard J.; Broeders, Mireille J. M.

    2015-01-01

    To compare projected breast area, image quality, pain experience and radiation dose between mammography performed with and without radiolucent positioning sheets. 184 women screened in the Dutch breast screening programme (May-June 2012) provided written informed consent to have one additional image

  6. Intelligent and interactive computer image of a nuclear power plant: The ImagIn project

    International Nuclear Information System (INIS)

    Haubensack, D.; Malvache, P.; Valleix, P.

    1998-01-01

    The ImagIn project consists of a method and a set of computer tools intended to bring perceptible and assessable improvements in the operational safety of a nuclear plant. Its aim is to design an information system that would maintain a highly detailed computerized representation of a nuclear plant in its initial state and throughout its in-service life. It is not a tool to drive or help drive the nuclear plant, but a tool that manages concurrent operations that modify the plant configuration in a very general way (maintenance, for example). The configuration of the plant, together with rules and constraints on it, is described in an object-oriented knowledge database, which is built using a generic ImagIn meta-model based on semantic network theory. An inference engine works on this database and is connected to reality through interfaces to operators and to sensors on the installation; it constantly verifies the consistency of the database in real time according to its inner rules, and reports any problems to the operators concerned. A special effort is made on interfaces to provide natural and intuitive tools (using virtual reality, natural language, and voice recognition and synthesis). A laboratory application on a fictive but realistic installation already exists and is used to simulate various tests and scenarios. A real application is being constructed on Siloe, an experimental reactor of the CEA. (author)

  7. A general approach to flaw simulation in castings by superimposing projections of 3D models onto real X-ray images

    International Nuclear Information System (INIS)

    Hahn, D.; Mery, D.

    2003-01-01

    In order to evaluate the sensitivity of defect inspection systems, it is convenient to examine simulated data. This makes it possible to tune the parameters of the inspection method and to test the performance of the system in critical cases. In this paper, a practical method for the simulation of defects in radioscopic images of aluminium castings is presented. The approach simulates only the flaws and not the whole radioscopic image of the object under test. A 3D mesh is used to model a flaw with complex geometry, which is projected and superimposed onto real radioscopic images of a homogeneous object according to the exponential attenuation law for X-rays. The new grey value of a pixel, where the 3D flaw is projected, depends only on four parameters: (a) the grey value of the original X-ray image without the flaw; (b) the linear absorption coefficient of the examined material; (c) the maximal thickness observable in the radioscopic image; and (d) the length of the intersection of the 3D flaw with the modelled X-ray beam that is projected into the pixel. A simulation of a complex flaw modelled as a 3D mesh can be performed at any position in the casting by using the algorithm described in this paper. This allows the evaluation of the performance of defect inspection systems in cases where the detection is known to be difficult. In this paper, we show experimental results on real X-ray images of aluminium wheels, in which 3D flaws like blowholes, cracks and inclusions are simulated.
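The grey-value update implied by the exponential attenuation law can be sketched as follows, assuming (as an illustration, not the paper's exact formulation) a detector response linear in transmitted intensity, so that removing material of thickness d along a ray scales the grey value by exp(mu*d):

```python
import numpy as np

def superimpose_flaw(grey, mu, d, grey_max=255.0):
    """Superimpose a simulated void onto a radioscopic image under the
    Beer-Lambert law, assuming grey values linear in transmitted
    intensity: removing material of thickness d along a ray multiplies
    the transmitted intensity, hence the grey value, by exp(mu * d).
    grey: background grey values; mu: linear absorption coefficient
    (1/mm); d: per-pixel chord length of the 3D flaw (mm)."""
    return np.minimum(grey * np.exp(mu * np.asarray(d, float)), grey_max)

bg = np.full((3, 3), 100.0)            # homogeneous background image
d = np.zeros((3, 3)); d[1, 1] = 2.0    # flaw intersects only the centre ray
print(superimpose_flaw(bg, mu=0.1, d=d))
# centre pixel brightens to 100 * exp(0.2) ≈ 122.14; others unchanged
```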

  8. CT image reconstruction of steel pipe section from few projections using the method of rotating polar-coordinate

    International Nuclear Information System (INIS)

    Peng Shuaijun; Wu Zhifang

    2008-01-01

    Fast online inspection in steel pipe production is a major challenge. Radiographic CT imaging, a high-performance non-destructive testing method, is well suited for inspection and quality control of steel pipes. A rotating polar-coordinate method is used to reconstruct the steel pipe cross-section from a few projections for online inspection. It reduces the number of projections needed and the data-collection time, and markedly accelerates the reconstruction algorithm, shortening overall inspection time. The results of simulation and physical experiments indicate that the image quality and reconstruction time of the rotating polar-coordinate method basically meet the requirements of online inspection of the steel pipe cross-section. The study is of theoretical significance and the method is expected to find wide practical use. (authors)

  9. New K-edge-balanced contrast phantom for image quality assurance in projection radiography

    Science.gov (United States)

    Cresens, Marc; Schaetzing, Ralph

    2003-06-01

    X-ray-absorber step-wedge phantoms serve in projection radiography to assess a detection system's overall exposure-related signal-to-noise ratio performance and contrast response. Data derived from a phantom image, created by exposing a step-wedge onto the image receptor, are compared with predefined acceptance criteria during periodic image quality assurance (QA). For contrast-related measurements, in particular, the x-ray tube potential requires accurate setting and low ripple, since small deviations from the specified kVp, causing energy spectrum changes, lead to significant image signal variation at high contrast ratios. A K-edge-balanced, rare-earth-metal contrast phantom can generate signals that are significantly more robust to the spectral variability and instability of exposure equipment in the field. The image signals from a hafnium wedge, for example, are up to eight times less sensitive to spectral fluctuations than those of today's copper phantoms for a 200:1 signal ratio. At 120 kVp (RQA 9), the hafnium phantom still preserves 70% of the subject contrast present at 75 kVp (RQA 5). A copper wedge preserves only 7% of its contrast over the same spectral range. Spectral simulations and measurements on prototype systems, as well as potential uses of this new class of phantoms (e.g., QA, single-shot exposure response characterization), are described.

  10. A new EU-funded project for enhanced real-time imaging for radiotherapy

    CERN Multimedia

    KTT Life Sciences Unit

    2011-01-01

    ENTERVISION (European training network for digital imaging for radiotherapy) is a new Marie Curie Initial Training Network coordinated by CERN, which brings together multidisciplinary researchers to carry out R&D in physics techniques for application in the clinical environment.   ENTERVISION was established in response to a critical need to reinforce research in online 3D digital imaging and to train professionals in order to deliver some of the key elements for early detection and more precise treatment of tumours. The main goal of the project is to train researchers who will help contribute to technical developments in an exciting multidisciplinary field, where expertise from physics, medicine, electronics, informatics, radiobiology and engineering merges and catalyses the advancement of cancer treatment. With this aim in mind, ENTERVISION brings together ten academic institutes and research centres, as well as the two leading European companies in particle therapy, IBA and Siemens.

  11. Concave omnidirectional imaging device for cylindrical object based on catadioptric panoramic imaging

    Science.gov (United States)

    Wu, Xiaojun; Wu, Yumei; Wen, Peizhi

    2018-03-01

    To obtain information about the outer surface of a cylindrical object, we propose a catadioptric panoramic imaging system based on the principle of uniform spatial resolution for vertical scenes. First, the influence of the projection-equation coefficients on the spatial resolution and the astigmatism of the panoramic system is discussed. Through parameter optimization, we obtain appropriate coefficients for the projection equation, so that the imaging quality of the entire system can reach an optimum. Finally, the system projection equation is calibrated, and an undistorted rectangular panoramic image is obtained using the cylindrical-surface projection expansion method. The proposed 360-deg panoramic-imaging device overcomes the shortcomings of existing surface panoramic-imaging methods and has the advantages of low cost, simple structure, high imaging quality, and small distortion. The experimental results show the effectiveness of the proposed method.
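The cylindrical-surface expansion used to produce the rectangular panorama can be sketched as a simple polar-to-rectangular resampling; the nearest-neighbour sampling and the synthetic test ring below are illustrative assumptions, not the calibrated projection equation of the paper:

```python
import numpy as np

def unwrap_panorama(annular, r_in, r_out, out_w, out_h):
    """Unwrap an annular catadioptric image into a rectangular panorama
    by sampling along rays from the image centre (nearest-neighbour).
    Rows of the output sweep radius r_in..r_out; columns sweep angle."""
    cy, cx = (annular.shape[0] - 1) / 2.0, (annular.shape[1] - 1) / 2.0
    out = np.zeros((out_h, out_w), dtype=annular.dtype)
    for v in range(out_h):
        r = r_in + (r_out - r_in) * v / (out_h - 1)
        for u in range(out_w):
            theta = 2 * np.pi * u / out_w
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < annular.shape[0] and 0 <= x < annular.shape[1]:
                out[v, u] = annular[y, x]
    return out

# synthetic annular image whose pixel value equals its radius
yy, xx = np.mgrid[0:101, 0:101]
annular = np.sqrt((yy - 50.0) ** 2 + (xx - 50.0) ** 2)
pano = unwrap_panorama(annular, r_in=20, r_out=40, out_w=90, out_h=21)
print(pano.shape)  # -> (21, 90): rows sweep radius, columns sweep angle
```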

  12. A new instrument to assess physician skill at thoracic ultrasound, including pleural effusion markup.

    Science.gov (United States)

    Salamonsen, Matthew; McGrath, David; Steiler, Geoff; Ware, Robert; Colt, Henri; Fielding, David

    2013-09-01

    To reduce complications and increase success, thoracic ultrasound is recommended to guide all chest drainage procedures. Despite this, no tools currently exist to assess proceduralist training or competence. This study aims to validate an instrument to assess physician skill at performing thoracic ultrasound, including effusion markup, and examine its validity. We developed an 11-domain, 100-point assessment sheet in line with British Thoracic Society guidelines: the Ultrasound-Guided Thoracentesis Skills and Tasks Assessment Test (UGSTAT). The test was used to assess 22 participants (eight novices, seven intermediates, seven advanced) on two occasions while performing thoracic ultrasound on a pleural effusion phantom. Each test was scored by two blinded expert examiners. Validity was examined by assessing the ability of the test to stratify participants according to expected skill level (analysis of variance) and demonstrating test-retest and intertester reproducibility by comparison of repeated scores (mean difference [95% CI] and paired t test) and the intraclass correlation coefficient. Mean scores for the novice, intermediate, and advanced groups were 49.3, 73.0, and 91.5 respectively, which were all significantly different (P < .0001). There were no significant differences between repeated scores. Procedural training on mannequins prior to unsupervised performance on patients is rapidly becoming the standard in medical education. This study has validated the UGSTAT, which can now be used to determine the adequacy of thoracic ultrasound training prior to clinical practice. It is likely that its role could be extended to live patients, providing a way to document ongoing procedural competence.

  13. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, G., E-mail: giuliana.rizzo@pi.infn.it [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Batignani, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Benkechkache, M.A. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); University Constantine 1, Department of Electronics in the Science and Technology Faculty, I-25017, Constantine (Algeria); Bettarini, S.; Casarosa, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Comotti, D. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Dalla Betta, G.-F. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); TIFPA INFN, I-38123 Trento (Italy); Fabris, L. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); Forti, F. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Grassi, M.; Lodola, L.; Malcovati, P. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Manghisoni, M. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); and others

    2016-07-11

    The INFN PixFEL project is developing the fundamental building blocks for a large area X-ray imaging camera to be deployed at next generation free electron laser (FEL) facilities with unprecedented intensity. Improvement in performance beyond the state of the art in imaging instrumentation will be explored adopting advanced technologies like active edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine pitch active edge thick sensor is being optimized to cope with a very high intensity photon flux, up to 10^4 photons per pixel, in the range from 1 to 10 keV. A low noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with low power 10 bit analog-to-digital conversion up to 5 MHz, has been realized at a 110 μm pitch in a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high density memories. In the long run the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation either in burst mode, like at the European X-FEL, or in continuous mode with the high frame rates anticipated for future FEL facilities.

  14. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    International Nuclear Information System (INIS)

    Rizzo, G.; Batignani, G.; Benkechkache, M.A.; Bettarini, S.; Casarosa, G.; Comotti, D.; Dalla Betta, G.-F.; Fabris, L.; Forti, F.; Grassi, M.; Lodola, L.; Malcovati, P.; Manghisoni, M.

    2016-01-01

    The INFN PixFEL project is developing the fundamental building blocks for a large area X-ray imaging camera to be deployed at next generation free electron laser (FEL) facilities with unprecedented intensity. Improvement in performance beyond the state of the art in imaging instrumentation will be explored adopting advanced technologies like active edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine pitch active edge thick sensor is being optimized to cope with a very high intensity photon flux, up to 10^4 photons per pixel, in the range from 1 to 10 keV. A low noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with low power 10 bit analog-to-digital conversion up to 5 MHz, has been realized at a 110 μm pitch in a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high density memories. In the long run the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation either in burst mode, like at the European X-FEL, or in continuous mode with the high frame rates anticipated for future FEL facilities.

  15. STS Case Study Development Support

    Science.gov (United States)

    Rosa de Jesus, Dan A.; Johnson, Grace K.

    2013-01-01

    The Shuttle Case Study Collection (SCSC) has been developed using lessons learned documented by NASA engineers, analysts, and contractors. The SCSC provides educators with a new tool to teach real-world engineering processes, with the goal of providing unique educational materials that enhance critical thinking, decision-making and problem-solving skills. During this third phase of the project, responsibilities included: the revision of the Hyper Text Markup Language (HTML) source code to ensure all pages follow World Wide Web Consortium (W3C) standards, and the addition and editing of website content, including text, documents, and images. Basic HTML knowledge was required, as was basic knowledge of photo editing software, and training to learn how to use NASA's Content Management System for website design. The outcome of this project was its release to the public.

  16. Engaging stakeholder communities as body image intervention partners: The Body Project as a case example.

    Science.gov (United States)

    Becker, Carolyn Black; Perez, Marisol; Kilpela, Lisa Smith; Diedrichs, Phillippa C; Trujillo, Eva; Stice, Eric

    2017-04-01

    Despite recent advances in developing evidence-based psychological interventions, substantial changes are needed in the current system of intervention delivery to impact mental health on a global scale (Kazdin & Blase, 2011). Prevention offers one avenue for reaching large populations because prevention interventions often are amenable to scaling-up strategies, such as task-shifting to lay providers, which further facilitate community stakeholder partnerships. This paper discusses the dissemination and implementation of the Body Project, an evidence-based body image prevention program, across 6 diverse stakeholder partnerships that span academic, non-profit and business sectors at national and international levels. The paper details key elements of the Body Project that facilitated partnership development, dissemination and implementation, including use of community-based participatory research methods and a blended train-the-trainer and task-shifting approach. We observed consistent themes across partnerships, including: sharing decision making with community partners, engaging community leaders as gatekeepers, emphasizing strengths of community partners, working within the community's structure, optimizing non-traditional and/or private financial resources, placing value on cost-effectiveness and sustainability, marketing the program, and supporting flexibility and creativity in developing strategies for evolution within the community and in research. Ideally, lessons learned with the Body Project can be generalized to implementation of other body image and eating disorder prevention programs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. ABrIL - Advanced Brain Imaging Lab : a cloud based computation environment for cooperative neuroimaging projects.

    Science.gov (United States)

    Neves Tafula, Sérgio M; Moreira da Silva, Nádia; Rozanski, Verena E; Silva Cunha, João Paulo

    2014-01-01

    Neuroscience is an increasingly multidisciplinary and highly cooperative field in which neuroimaging plays an important role. The rapid evolution of neuroimaging demands a growing number of computing resources and skills that must be put in place at every lab. Typically, each group tries to set up its own servers and workstations to support its neuroimaging needs, having to learn everything from operating system management to the details of specific neuroscience software tools before any results can be obtained from each setup. This setup and learning process is replicated in every lab, even when a strong collaboration among several groups is under way. In this paper we present a new cloud service model - Brain Imaging Application as a Service (BiAaaS) - and one of its implementations - the Advanced Brain Imaging Lab (ABrIL) - in the form of a ubiquitous virtual desktop remote infrastructure that offers a set of neuroimaging computational services in an interactive, neuroscientist-friendly graphical user interface (GUI). This remote desktop has been used for several multi-institution cooperative projects with different neuroscience objectives that have already achieved important results, such as a contribution to a high-impact paper published in the January issue of the journal NeuroImage. The ABrIL system has shown its applicability in several neuroscience projects at a relatively low cost, promoting truly collaborative actions and speeding up project results and their clinical applicability.

  18. Image and diagnosis quality of X-ray image transmission via cell phone camera: a project study evaluating quality and reliability.

    Directory of Open Access Journals (Sweden)

    Hans Goost

    Full Text Available INTRODUCTION: Developments in telemedicine have not produced any relevant benefits for orthopedics and trauma surgery to date. For the present project study, several parameters were examined during assessment of x-ray images which had been photographed and transmitted via cell phone. MATERIALS AND METHODS: A total of 100 x-ray images of various body regions were photographed with a Nokia cell phone and transmitted via email or MMS. Next, the transmitted photographs were reviewed on a laptop computer by five medical specialists and assessed regarding quality and diagnosis. RESULTS: Due to their poor quality, the transmitted MMS images could not be evaluated and this path of transmission was therefore excluded. Mean size of the transmitted x-ray email images was 394 kB (range: 265-590 kB, SD ± 59); average transmission time was 3.29 min ± 8 (CI 95%: 1.7-4.9). Applying a score from 1-10 (very poor to excellent), mean image quality was 5.8. In 83.2 ± 4% (mean value ± SD) of cases (median 82; 80-89%), there was agreement between the final diagnosis and the assessment by the five medical experts who had received the images. However, there was a markedly low concurrence ratio in the thoracic area and in pediatric injuries. DISCUSSION: While the rate of accurate diagnosis and indication for surgery was high, with a concurrence ratio of 83%, considerable differences existed between the assessed regions, with the lowest values for thoracic images. Teleradiology is a cost-effective, rapid method which can be applied wherever wireless cell phone reception is available. In our opinion, this method is in principle suitable for clinical use, enabling the physician on duty to agree on appropriate measures with colleagues located elsewhere via x-ray image transmission on a cell phone.

  19. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-09-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. Twenty-four synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule's location and orientation. Virtual nodules were voxelized, partial-volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A and image-based Technique B. A third technique (Technique C), based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image, was also tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (Mean_RHD, STD_RHD and CV_RHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and
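The regional Hausdorff statistics above build on the classic Hausdorff distance between two surfaces: the worst-case distance from any point of one surface to the other. A minimal sketch (the point clouds below are hypothetical surface samples, not data from the study):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (N, 3) and b (M, 3)."""
    # pairwise Euclidean distances via broadcasting
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    # worst-case nearest-neighbour distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# hypothetical surface samples of an idealized and a CT-derived nodule (voxel units)
ideal = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
derived = np.array([[0.1, 0.0, 0.0], [1.0, 0.1, 0.0], [0.0, 1.0, 0.2]])
print(hausdorff(ideal, derived))  # worst-case surface deviation, 0.2 here
```

Regional variants, as used in the study, evaluate this statistic over surface patches and then summarize the per-patch values (mean, SD, CV).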

  20. BIRD: Bio-Image Referral Database. Design and implementation of a new web based and patient multimedia data focused system for effective medical diagnosis and therapy.

    Science.gov (United States)

    Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario

    2004-01-01

    This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients over an Intranet network and transformed via the eXtensible Stylesheet Language (XSL) to be visualized in a uniform way on standard browsers. The core server software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and useful for crafting a dynamic Web environment.
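The XML-plus-XSL pattern described above can be illustrated with a toy referral record and stylesheet. The element and attribute names here are hypothetical (the actual BIRD schema is not given in the abstract); the point is that one stored XML record is rendered uniformly by any XSLT-capable browser:

```xml
<!-- referral.xml: a hypothetical patient referral record -->
<referral id="r001">
  <patient>Jane Doe</patient>
  <study modality="CT" date="2004-02-11">
    <image href="images/ct-001.jpg"/>
  </study>
</referral>

<!-- referral.xsl: renders the record as HTML -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/referral">
    <html>
      <body>
        <h1><xsl:value-of select="patient"/></h1>
        <p>Modality: <xsl:value-of select="study/@modality"/></p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

Separating content (XML) from presentation (XSL) is what lets the server distribute one record to heterogeneous clients.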

  1. Implementation of a high-resolution workstation for primary diagnosis of projection radiography images

    Science.gov (United States)

    Good, Walter F.; Herron, John M.; Maitz, Glenn S.; Gur, David; Miller, Stephen L.; Straub, William H.; Fuhrman, Carl R.

    1990-08-01

    We designed and implemented a high-resolution video workstation as the central hardware component in a comprehensive multi-project program comparing the use of digital and film modalities. The workstation utilizes a 1.8 GByte real-time disk (RCI) capable of storing 400 full-resolution images and two Tektronix (GMA251) display controllers with 19" monitors (GMA202). The display is configured in a portrait format with a resolution of 1536 x 2048 x 8 bits and operates at 75 Hz in a noninterlaced mode. Transmission of data through a 12-to-8-bit lookup table into the display controllers occurs at 20 MBytes/second (0.35 seconds per image). The workstation allows brightness (level) and contrast (window) to be easily manipulated with a trackball, and various processing options can be selected using push buttons. Display of any of the 400 images is also performed at 20 MBytes/sec (0.35 sec/image). A separate text display provides for the automatic display of patient history data and for a scoring form through which readers can interact with the system by means of a computer mouse. In addition, the workstation provides for the randomization of cases and for the immediate entry of diagnostic responses into a master database. Over the past year this workstation has been used for over 10,000 readings in diagnostic studies related to 1) image resolution; 2) film vs. soft display; 3) incorporation of patient history data into the reading process; and 4) usefulness of image processing.
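The 12-to-8-bit window/level lookup table mentioned above is easy to sketch. This is a generic illustration of the technique, not the workstation's actual firmware; parameter names are illustrative:

```python
import numpy as np

def window_level_lut(window, level, bits_in=12):
    """Build a 2**bits_in-entry LUT mapping raw pixel values to 8-bit display
    values: values below level - window/2 map to 0, above level + window/2
    to 255, with a linear ramp in between."""
    x = np.arange(2 ** bits_in, dtype=np.float64)
    lo = level - window / 2.0
    y = np.clip((x - lo) / float(window) * 255.0, 0.0, 255.0)
    return y.astype(np.uint8)

lut = window_level_lut(window=1024, level=2048)
# displaying a 12-bit image is then one vectorized table lookup:
# display = lut[raw_image]
```

Because re-display after a trackball move only requires rebuilding a 4096-entry table and re-indexing, interactive window/level at full frame rate is feasible even on modest hardware.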

  2. IMAGE ACQUISITION CONSTRAINTS FOR PANORAMIC FRAME CAMERA IMAGING

    Directory of Open Access Journals (Sweden)

    H. Kauhanen

    2012-07-01

    Full Text Available The paper describes an approach to quantify the amount of projective error produced by an offset of projection centres in a panoramic imaging workflow. We have limited this research to panoramic workflows in which several sub-images taken with a planar image sensor are stitched together into a large panoramic image mosaic. The aim is to simulate how large the offset can be before it introduces significant error into the dataset. The method uses geometrical analysis to calculate the error in various cases. Constraints on shooting distance, focal length and the depth of the area of interest are taken into account. Considering these constraints, it is possible to safely use even a poorly calibrated panoramic camera rig with a noticeable offset in projection centre locations. The aim is to create datasets suited for photogrammetric reconstruction. Similar constraints can also be used for finding recommended areas on the image planes for automatic feature matching and thus improve the stitching of sub-images into full panoramic mosaics. The results are mainly designed to be used with long focal length cameras, where the offset of the projection centres of sub-images can seem significant but, on the other hand, the shooting distance is also long. We show that in such situations the offset of the projection centres introduces only negligible error when stitching a metric panorama. Even if the main use of the results is with cameras of long focal length, they are feasible for all focal lengths.
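The geometrical analysis referred to above reduces to parallax: a projection-centre offset o shifts an object at distance D by roughly f·o/D on the image plane, so the stitching mismatch across a scene spanning depths [D_near, D_far] is about f·o·(1/D_near - 1/D_far). A minimal small-angle sketch (numbers and units are illustrative, not the paper's test cases):

```python
def parallax_error_px(offset_m, f_mm, d_near_m, d_far_m, pixel_um):
    """Approximate worst-case stitching error, in pixels, caused by a
    projection-centre offset, for scene depth spanning [d_near, d_far].
    Small-angle approximation: image shift of a point at distance D is
    f * offset / D."""
    shift_mm = f_mm * offset_m * (1.0 / d_near_m - 1.0 / d_far_m)
    return shift_mm / (pixel_um / 1000.0)

# e.g. a 5 mm offset, 300 mm lens, scene 50-100 m away, 5 um pixels
err = parallax_error_px(0.005, 300.0, 50.0, 100.0, 5.0)
print(err)  # roughly 3 pixels for this geometry
```

The same expression shows why long focal lengths tolerate sloppy rigs: the long shooting distance in the denominators shrinks the error faster than f grows.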

  3. Factors affecting the effectiveness of a projection dephaser in 2D gradient-echo imaging

    International Nuclear Information System (INIS)

    Bakker, Chris J G; Peters, Nicky H G M; Vincken, Koen L; Bom, Martijn van der; Seppenwoolde, Jan-Henry

    2007-01-01

    Projection dephasers are often used for background suppression and dynamic range improvement in thick-slab 2D imaging in order to promote the visibility of subslice structures, e.g., blood vessels and interventional devices. In this study, we explored the factors that govern the effectiveness of a projection dephaser by simulations and phantom experiments. This was done for the ideal case of a single subslice hyper- or hypointensity against a uniform background in the absence of susceptibility effects. Simulations and experiments revealed a pronounced influence of the slice profile; the nominal flip angle and the TE and TR of the acquisition; the size, intraslice position and MR properties of the subslice structure; and the T1 of the background. The complexity of the ideal case points to the necessity of additional explorations when considering the use of projection dephasers under less ideal conditions, e.g., in the presence of tissue heterogeneities and susceptibility gradients

  4. Femtosecond few- to single-electron point-projection microscopy for nanoscale dynamic imaging

    Directory of Open Access Journals (Sweden)

    A. R. Bainbridge

    2016-03-01

    Full Text Available Femtosecond electron microscopy produces real-space images of matter in a series of ultrafast snapshots. Pulses of electrons self-disperse under space-charge broadening, so without compression, the ideal operation mode is a single electron per pulse. Here, we demonstrate femtosecond single-electron point projection microscopy (fs-ePPM) in a laser-pump fs-e-probe configuration. The electrons have an energy of only 150 eV and take tens of picoseconds to propagate to the object under study. Nonetheless, we achieve a temporal resolution with a standard deviation of 114 fs (equivalent to a full-width at half-maximum of 269 ± 40 fs) combined with a spatial resolution of 100 nm, applied to a localized region of charge at the apex of a nanoscale metal tip induced by 30 fs 800 nm laser pulses at 50 kHz. These observations demonstrate that real-space imaging of reversible processes, such as tracking charge distributions, is feasible whilst maintaining femtosecond resolution. Our findings could find application as a characterization method which, depending on geometry, could resolve tens of femtoseconds and tens of nanometres. Dynamically imaging electric and magnetic fields and charge distributions on sub-micron length scales opens new avenues of ultrafast dynamics. Furthermore, through the use of active compression, such pulses are an ideal seed for few-femtosecond to attosecond imaging applications which will access sub-optical-cycle processes in nanoplasmonics.

  5. Virtual autopsy using imaging: bridging radiologic and forensic sciences. A review of the Virtopsy and similar projects

    International Nuclear Information System (INIS)

    Bolliger, Stephan A.; Thali, Michael J.; Ross, Steffen; Buck, Ursula; Naether, Silvio; Vock, Peter

    2008-01-01

    The transdisciplinary research project Virtopsy is dedicated to implementing modern imaging techniques into forensic medicine and pathology in order to augment current examination techniques or even to offer alternative methods. Our project relies on three pillars: three-dimensional (3D) surface scanning for the documentation of body surfaces, and both multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) to visualise the internal body. Three-dimensional surface scanning has delivered remarkable results in the past in the 3D documentation of patterned injuries and of objects of forensic interest as well as whole crime scenes. Imaging of the interior of corpses is performed using MSCT and/or MRI. MRI, in addition, is also well suited to the examination of surviving victims of assault, especially choking, and helps visualise internal injuries not seen at external examination of the victim. Apart from the accuracy and three-dimensionality that conventional documentations lack, these techniques allow for the re-examination of the corpse and the crime scene even decades later, after burial of the corpse and liberation of the crime scene. We believe that this virtual, non-invasive or minimally invasive approach will improve forensic medicine in the near future. (orig.)

  6. The Mammographic Head Demonstrator Developed in the Framework of the “IMI” Project:. First Imaging Tests Results

    Science.gov (United States)

    Bisogni, Maria Giuseppina

    2006-04-01

    In this paper we report on the performance and the first imaging test results of a digital mammographic demonstrator based on GaAs pixel detectors. The heart of this prototype is the X-ray detection unit, a GaAs pixel sensor read out by the PCC/MEDIPIX1 circuit. Since the active area of the sensor is 1 cm2, 18 detectors have been organized in two staggered rows of nine chips each. To cover the typical mammographic format (18 × 24 cm2), a linear scan is performed by means of a stepper motor. The system is integrated in mammographic equipment comprising the X-ray tube, the bias and data acquisition systems and the PC-based control system. The prototype has been developed in the framework of the Integrated Mammographic Imaging (IMI) project, an industrial research activity aiming to develop innovative instrumentation for morphologic and functional imaging. The project has been supported by the Italian Ministry of Education, University and Research (MIUR) and by five Italian high-tech companies in collaboration with the universities of Ferrara, Roma “La Sapienza”, Pisa and the INFN.

  7. Improved superficial brain hemorrhage visualization in susceptibility weighted images by constrained minimum intensity projection

    Science.gov (United States)

    Castro, Marcelo A.; Pham, Dzung L.; Butman, John

    2016-03-01

    Minimum intensity projection is a technique commonly used to display magnetic resonance susceptibility weighted images, allowing the observer to better visualize hemorrhages and vasculature. The technique displays the minimum intensity along a given projection within a thick slab, allowing different connectivity patterns to be easily revealed. Unfortunately, the low signal intensity of the skull within the thick slab can mask superficial tissues near the skull base and other regions. Because superficial microhemorrhages are a common feature of traumatic brain injury, this effect limits the ability to properly diagnose and follow up patients. In order to overcome this limitation, we developed a method that allows minimum intensity projection to properly display superficial tissues adjacent to the skull. Our approach is based on two brain masks, the larger of which includes extracerebral voxels. Analysis of the rind within both masks containing the actual brain boundary allows reclassification of those voxels initially missed in the smaller mask. Morphological operations are applied to guarantee accuracy and topological correctness, and the mean intensity within the mask is assigned to all outer voxels. This prevents bone from dominating superficial regions in the projection, enabling superior visualization of cortical hemorrhages and vessels.
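The mask-and-fill strategy described above can be sketched in a few lines of NumPy. This is a toy illustration of the core idea (fill out-of-mask voxels with the in-mask mean before taking the minimum), not the authors' full pipeline with mask refinement and morphology:

```python
import numpy as np

def constrained_minip(vol, brain_mask, axis=0):
    """Minimum intensity projection restricted to a brain mask: voxels
    outside the mask are replaced by the mean in-mask intensity, so that
    dark skull/air cannot dominate the projection."""
    fill = vol[brain_mask].mean()
    return np.where(brain_mask, vol, fill).min(axis=axis)

# toy volume: bright parenchyma (100), dark "skull" plane (0), one hypointensity (10)
vol = np.full((4, 5, 5), 100.0)
vol[:, 0, :] = 0.0          # skull, excluded from the mask
vol[2, 3, 3] = 10.0         # subslice hemorrhage-like voxel
mask = np.ones_like(vol, dtype=bool)
mask[:, 0, :] = False
minip = constrained_minip(vol, mask, axis=0)
```

A naive MinIP of this volume would show 0 along the skull plane and hide everything behind it; the constrained version keeps the hemorrhage visible while the masked region takes on a neutral background value.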

  8. A System for Acquisition, Processing and Visualization of Image Time Series from Multiple Camera Networks

    Directory of Open Access Journals (Sweden)

    Cemal Melih Tanis

    2018-06-01

    Full Text Available A system for multiple camera networks is proposed for continuous monitoring of ecosystems by processing image time series. The system is built around the Finnish Meteorological Image PROcessing Toolbox (FMIPROT), which includes data acquisition, processing and visualization from multiple camera networks. The toolbox has a user-friendly graphical user interface (GUI) that requires only minimal computer knowledge and skills. Images from camera networks are acquired and handled automatically according to common communication protocols, e.g., File Transfer Protocol (FTP). Processing features include GUI-based selection of the region of interest (ROI), an automatic analysis chain, extraction of ROI-based indices such as the green fraction index (GF), red fraction index (RF), blue fraction index (BF), green-red vegetation index (GRVI), and green excess index (GEI), as well as a custom index defined by a user-provided mathematical formula. Analysis results are visualized on interactive plots both in the GUI and in hypertext markup language (HTML) reports. Users can implement their own algorithms to extract information from digital image series for any purpose. The toolbox can also be run in non-GUI mode, which allows series of analyses to run unattended and scheduled on servers. The system is demonstrated using an environmental camera network in Finland.
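The ROI-based indices listed above are simple chromatic statistics over the pixels of a region. A hedged sketch using common definitions from the phenology-camera literature (the abstract does not give FMIPROT's exact formulas, so treat these as illustrative):

```python
import numpy as np

def greenness_indices(roi):
    """Chromatic indices from an RGB region of interest (H, W, 3).
    Assumes the ROI is not entirely black (nonzero channel sum)."""
    r, g, b = (roi[..., i].astype(np.float64).mean() for i in range(3))
    total = r + g + b
    return {
        "GF": g / total,            # green fraction
        "RF": r / total,            # red fraction
        "BF": b / total,            # blue fraction
        "GRVI": (g - r) / (g + r),  # green-red vegetation index
        "GEI": 2 * g - (r + b),     # green excess index
    }

roi = np.zeros((2, 2, 3), dtype=np.uint8)
roi[..., 1] = 120  # a pure-green patch
idx = greenness_indices(roi)
```

Tracking GF or GRVI over an image time series is what turns a fixed camera into a phenology sensor: green-up and senescence appear as seasonal rises and falls of these curves.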

  9. Tomographic Image Reconstruction Using Training Images with Matrix and Tensor Formulations

    DEFF Research Database (Denmark)

    Soltani, Sara

    Reducing X-ray exposure while maintaining the image quality is a major challenge in computed tomography (CT), since the imperfect data produced from few-view and/or low-intensity projections result in low-quality images suffering from severe artifacts when using conventional... ... the image resolution compared to a classical reconstruction method such as Filtered Back Projection (FBP). Some priors for the tomographic reconstruction take the form of cross-section images of similar objects, providing a set of the so-called training images, which hold the key to the structural... information about the solution. The training images must be reliable and application-specific. This PhD project aims at providing a mathematical and computational framework for the use of training sets as non-parametric priors for the solution in tomographic image reconstruction. Through an unsupervised...

  10. SU-C-209-05: Monte Carlo Model of a Prototype Backscatter X-Ray (BSX) Imager for Projective and Selective Object-Plane Imaging

    International Nuclear Information System (INIS)

    Rolison, L; Samant, S; Baciak, J; Jordan, K

    2016-01-01

    Purpose: To develop a Monte Carlo N-Particle (MCNP) model for the validation of a prototype backscatter x-ray (BSX) imager, and optimization of BSX technology for medical applications, including selective object-plane imaging. Methods: BSX is an emerging technology that represents an alternative to conventional computed tomography (CT) and projective digital radiography (DR). It employs detectors located on the same side as the incident x-ray source, making use of backscatter and avoiding ring geometry to enclose the imaging object. Current BSX imagers suffer from low spatial resolution. An MCNP model was designed to replicate a BSX prototype used for flaw detection in industrial materials. This prototype consisted of a 1.5mm diameter 60kVp pencil beam surrounded by a ring of four 5.0cm diameter NaI scintillation detectors. The imaging phantom consisted of a 2.9cm thick aluminum plate with five 0.6cm diameter holes drilled halfway. The experimental image was created using a raster scanning motion (in 1.5mm increments). Results: A qualitative comparison between the physical and simulated images showed very good agreement, with 1.5mm spatial resolution in the plane perpendicular to the incident x-ray beam. The MCNP model developed the concept of radiography by selective plane detection (RSPD) for BSX, whereby specific object planes can be imaged by varying kVp. 10keV increments in mean x-ray energy yielded 4mm thick slice resolution in the phantom. Image resolution in the MCNP model can be further increased by increasing the number of detectors and decreasing the raster step size. Conclusion: MCNP modelling was used to validate a prototype BSX imager and introduce the RSPD concept, allowing for selective object-plane imaging. There was very good visual agreement between the experimental and MCNP imaging. Beyond optimizing system parameters for the existing prototype, new geometries can be investigated for volumetric image acquisition in medical applications. This material is

  11. SU-C-209-05: Monte Carlo Model of a Prototype Backscatter X-Ray (BSX) Imager for Projective and Selective Object-Plane Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Rolison, L; Samant, S; Baciak, J; Jordan, K [University of Florida, Gainesville, FL (United States)

    2016-06-15

    Purpose: To develop a Monte Carlo N-Particle (MCNP) model for the validation of a prototype backscatter x-ray (BSX) imager, and optimization of BSX technology for medical applications, including selective object-plane imaging. Methods: BSX is an emerging technology that represents an alternative to conventional computed tomography (CT) and projective digital radiography (DR). It employs detectors located on the same side as the incident x-ray source, making use of backscatter and avoiding ring geometry to enclose the imaging object. Current BSX imagers suffer from low spatial resolution. An MCNP model was designed to replicate a BSX prototype used for flaw detection in industrial materials. This prototype consisted of a 1.5mm diameter 60kVp pencil beam surrounded by a ring of four 5.0cm diameter NaI scintillation detectors. The imaging phantom consisted of a 2.9cm thick aluminum plate with five 0.6cm diameter holes drilled halfway. The experimental image was created using a raster scanning motion (in 1.5mm increments). Results: A qualitative comparison between the physical and simulated images showed very good agreement, with 1.5mm spatial resolution in the plane perpendicular to the incident x-ray beam. The MCNP model developed the concept of radiography by selective plane detection (RSPD) for BSX, whereby specific object planes can be imaged by varying kVp. 10keV increments in mean x-ray energy yielded 4mm thick slice resolution in the phantom. Image resolution in the MCNP model can be further increased by increasing the number of detectors and decreasing the raster step size. Conclusion: MCNP modelling was used to validate a prototype BSX imager and introduce the RSPD concept, allowing for selective object-plane imaging. There was very good visual agreement between the experimental and MCNP imaging. Beyond optimizing system parameters for the existing prototype, new geometries can be investigated for volumetric image acquisition in medical applications. This material is

  12. The body project 4 all: A pilot randomized controlled trial of a mixed-gender dissonance-based body image program.

    Science.gov (United States)

    Kilpela, Lisa Smith; Blomquist, Kerstin; Verzijl, Christina; Wilfred, Salomé; Beyl, Robbie; Becker, Carolyn Black

    2016-06-01

    The Body Project is a cognitive dissonance-based body image improvement program with ample research support among female samples. More recently, researchers have highlighted the extent of male body dissatisfaction and disordered eating behaviors; however, boys/men have not been included in the majority of body image improvement programs. This study aims to explore the efficacy of a mixed-gender Body Project compared with the historically female-only body image intervention program. Participants included male and female college students (N = 185) across two sites. We randomly assigned women to a mixed-gender modification of the two-session, peer-led Body Project (MG), the two-session, peer-led, female-only (FO) Body Project, or a waitlist control (WL), and men to either MG or WL. Participants completed self-report measures assessing negative affect, appearance-ideal internalization, body satisfaction, and eating disorder pathology at baseline, post-test, and at 2- and 6-month follow-up. Linear mixed effects models were used to estimate the change from baseline over time for each dependent variable across conditions. For women, results were mixed regarding post-intervention improvement compared with WL, and were largely non-significant compared with WL at 6-month follow-up. Alternatively, results indicated that men in MG consistently improved compared with WL through 6-month follow-up on all measures except negative affect and appearance-ideal internalization. Results differed markedly between the female and male samples, and were more promising for men than for women. Various explanations are provided, and further research is warranted prior to drawing firm conclusions regarding mixed-gender programming of the Body Project. (Int J Eat Disord 2016; 49:591-602). © 2016 Wiley Periodicals, Inc.

  13. Strain Imaging of Nanoscale Semiconductor Heterostructures with X-Ray Bragg Projection Ptychography

    Science.gov (United States)

    Holt, Martin V.; Hruszkewycz, Stephan O.; Murray, Conal E.; Holt, Judson R.; Paskiewicz, Deborah M.; Fuoss, Paul H.

    2014-04-01

    We report the imaging of nanoscale distributions of lattice strain and rotation in complementary components of lithographically engineered epitaxial thin film semiconductor heterostructures using synchrotron x-ray Bragg projection ptychography (BPP). We introduce a new analysis method that enables lattice rotation and out-of-plane strain to be determined independently from a single BPP phase reconstruction, and we apply it to two laterally adjacent, multiaxially stressed materials in a prototype channel device. These results quantitatively agree with mechanical modeling and demonstrate the ability of BPP to map out-of-plane lattice dilatation, a parameter critical to the performance of electronic materials.

  14. A BML Based Embodied Conversational Agent for a Personality Detection Program

    NARCIS (Netherlands)

    Solano Méndez, Guillermo; Reidsma, Dennis; Vilhjálmsson, Hannes Högni; Kopp, Stefan; Marsella, Stacy; Thórisson, Kristinn R.

    2011-01-01

    This paper presents a project in which an Embodied Conversational Agent (ECA) attempts to detect the personality of the user by asking him (or her) a series of questions. The project uses the Elckerlyc platform, a Behavior Markup Language (BML) compliant behavior realizer for ECAs [1]. BML allows
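For readers unfamiliar with BML: a realizer such as Elckerlyc consumes XML blocks that schedule speech, gaze and gesture against one another via sync points. A minimal, hypothetical example in the style of the BML 1.0 convention (this exact block is illustrative, not taken from the paper):

```xml
<bml xmlns="http://www.bml-initiative.org/bml/bml-1.0" id="bml1">
  <speech id="sp1">
    <text>Would you say you enjoy meeting new people?</text>
  </speech>
  <!-- look at the user for the duration of the question -->
  <gaze id="g1" target="user" start="sp1:start" end="sp1:end"/>
  <!-- a beat gesture synchronized to the start of the utterance -->
  <gesture id="ges1" lexeme="BEAT" start="sp1:start"/>
</bml>
```

The cross-references such as `start="sp1:start"` are what let the realizer keep multimodal behaviors aligned without the application hand-timing each one.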

  15. The evolution of the CUAHSI Water Markup Language (WaterML)

    Science.gov (United States)

    Zaslavsky, I.; Valentine, D.; Maidment, D.; Tarboton, D. G.; Whiteaker, T.; Hooper, R.; Kirschtel, D.; Rodriguez, M.

    2009-04-01

    The CUAHSI Hydrologic Information System (HIS, his.cuahsi.org) uses web services as the core data exchange mechanism, providing a programmatic connection between many heterogeneous sources of hydrologic data and a variety of online and desktop client applications. The service message schema follows the CUAHSI Water Markup Language (WaterML) 1.x specification (see OGC Discussion Paper 07-041r1). Data sources that can be queried via WaterML-compliant water data services include national and international repositories such as USGS NWIS (National Water Information System), USEPA STORET (Storage & Retrieval), USDA SNOTEL (Snowpack Telemetry), NCDC ISH and ISD (Integrated Surface Hourly and Daily Data), MODIS (Moderate Resolution Imaging Spectroradiometer), and DAYMET (Daily Surface Weather Data and Climatological Summaries). Besides government data sources, CUAHSI HIS provides access to a growing number of academic hydrologic observation networks. These networks are registered by researchers associated with 11 hydrologic observatory testbeds around the US, and by other research, government and commercial groups wishing to join the emerging CUAHSI Water Data Federation. The Hydrologic Information Server (HIS Server) software stack, deployed at NSF-supported hydrologic observatory sites and other universities around the country, supports a hydrologic data publication workflow which includes the following steps: (1) observational data are loaded from static files or streamed from sensors into a local instance of an Observations Data Model (ODM) database; (2) a generic web service template is configured for the new ODM instance to expose the data as a WaterML-compliant water data service; and (3) the new water data service is registered at the HISCentral registry (hiscentral.cuahsi.org), and its metadata are harvested and semantically tagged using concepts from a hydrologic ontology. As a result, the new service is indexed in the CUAHSI central metadata catalog, and becomes
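
    Client applications consume such services by parsing the returned time-series XML. The sketch below parses a drastically simplified stand-in for a WaterML 1.x timeSeriesResponse using only Python's standard library; the real schema is namespaced and carries much richer site, method, and unit metadata, so the element names and values here are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a WaterML 1.x timeSeriesResponse (the real
# document is namespaced and far richer; site and values are invented).
DOC = """
<timeSeriesResponse>
  <timeSeries>
    <sourceInfo><siteName>Example Creek</siteName></sourceInfo>
    <variable><variableName>Discharge</variableName></variable>
    <values>
      <value dateTime="2009-04-01T00:00:00">12.4</value>
      <value dateTime="2009-04-01T01:00:00">12.1</value>
    </values>
  </timeSeries>
</timeSeriesResponse>
"""

def parse_series(doc):
    """Return [(site, variable, [(timestamp, value), ...]), ...]."""
    root = ET.fromstring(doc)
    series = []
    for ts in root.iter("timeSeries"):
        site = ts.findtext("sourceInfo/siteName")
        var = ts.findtext("variable/variableName")
        vals = [(v.get("dateTime"), float(v.text)) for v in ts.iter("value")]
        series.append((site, var, vals))
    return series

print(parse_series(DOC))
```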

  16. Relational Data Modelling of Textual Corpora: The Skaldic Project and its Extensions

    DEFF Research Database (Denmark)

    Wills, Tarrin Jon

    2015-01-01

    Skaldic poetry is a highly complex textual phenomenon both in terms of the intricacy of the poetry and its contextual environment. Extensible Markup Language (XML) applications such as that of the Text Encoding Initiative provide a means of semantic representation of some of these complexities. XML...

  17. Ontology-based image navigation: exploring 3.0-T MR neurography of the brachial plexus using AIM and RadLex.

    Science.gov (United States)

    Wang, Kenneth C; Salunkhe, Aditya R; Morrison, James J; Lee, Pearlene P; Mejino, José L V; Detwiler, Landon T; Brinkley, James F; Siegel, Eliot L; Rubin, Daniel L; Carrino, John A

    2015-01-01

    Disorders of the peripheral nervous system have traditionally been evaluated using clinical history, physical examination, and electrodiagnostic testing. In selected cases, imaging modalities such as magnetic resonance (MR) neurography may help further localize or characterize abnormalities associated with peripheral neuropathies, and the clinical importance of such techniques is increasing. However, MR image interpretation with respect to peripheral nerve anatomy and disease often presents a diagnostic challenge because the relevant knowledge base remains relatively specialized. Using the radiology knowledge resource RadLex®, a series of RadLex queries, the Annotation and Image Markup standard for image annotation, and a Web services-based software architecture, the authors developed an application that allows ontology-assisted image navigation. The application provides an image browsing interface, allowing users to visually inspect the imaging appearance of anatomic structures. By interacting directly with the images, users can access additional structure-related information that is derived from RadLex (eg, muscle innervation, muscle attachment sites). These data also serve as conceptual links to navigate from one portion of the imaging atlas to another. With 3.0-T MR neurography of the brachial plexus as the initial area of interest, the resulting application provides support to radiologists in the image interpretation process by allowing efficient exploration of the MR imaging appearance of relevant nerve segments, muscles, bone structures, vascular landmarks, anatomic spaces, and entrapment sites, and the investigation of neuromuscular relationships. RSNA, 2015

  18. A semi-symmetric image encryption scheme based on the function projective synchronization of two hyperchaotic systems.

    Directory of Open Access Journals (Sweden)

    Xiaoqiang Di

    Both symmetric and asymmetric color image encryption have advantages and disadvantages. In order to combine their advantages and try to overcome their disadvantages, chaos synchronization is used to avoid key transmission in the proposed semi-symmetric image encryption scheme. Our scheme is a hybrid chaotic encryption algorithm consisting of a scrambling stage and a diffusion stage. The control law and the update rule of function projective synchronization between the 3-cell quantum cellular neural network (QCNN) response system and the 6th-order cellular neural network (CNN) drive system are formulated. Since function projective synchronization is used to synchronize the response and drive systems, Alice and Bob obtain the key from two different chaotic systems independently, avoiding key transmission over extra security links and thus preventing key leakage during transmission. Both numerical simulations and security analyses, such as information entropy analysis and differential attack analysis, are conducted to verify the feasibility, security, and efficiency of the proposed scheme.
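
    The scramble-then-diffuse structure can be illustrated with a much simpler keystream generator. This Python sketch uses a logistic map in place of the paper's QCNN/CNN pair (which each party would instead reproduce via function projective synchronization): one chaotic sequence orders a pixel permutation (scrambling) and a second is XORed with the pixel values (diffusion). The key value and map parameter are arbitrary illustrative choices.

```python
import numpy as np

def logistic_stream(x0, n, r=3.99):
    """Chaotic keystream from the logistic map (a stand-in for the
    synchronized QCNN/CNN systems in the paper)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=0.4321):
    flat = img.flatten()
    n = flat.size
    # Scrambling stage: permute pixel positions by sorting the stream.
    perm = np.argsort(logistic_stream(key, n))
    scrambled = flat[perm]
    # Diffusion stage: XOR with a quantized chaotic keystream.
    ks = (logistic_stream(key / 2, n) * 256).astype(np.uint8)
    return scrambled ^ ks, perm

def decrypt(cipher, perm, key=0.4321):
    n = cipher.size
    ks = (logistic_stream(key / 2, n) * 256).astype(np.uint8)
    scrambled = cipher ^ ks
    flat = np.empty_like(scrambled)
    flat[perm] = scrambled   # invert the scrambling permutation
    return flat

img = np.arange(64, dtype=np.uint8)
cipher, perm = encrypt(img)
print(np.array_equal(decrypt(cipher, perm), img))  # True
```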

  19. 3D palmprint and hand imaging system based on full-field composite color sinusoidal fringe projection technique.

    Science.gov (United States)

    Zhang, Zonghua; Huang, Shujun; Xu, Yongjia; Chen, Chao; Zhao, Yan; Gao, Nan; Xiao, Yanjun

    2013-09-01

    Palmprint and hand shape, as two kinds of important biometric characteristics, have been widely studied and applied to human identity recognition. The existing research is based mainly on 2D images, which discard the third dimension of information. The biological features extracted from 2D images are distorted by pressure and rolling, so the subsequent feature matching and recognition are inaccurate. This paper presents a method to acquire accurate 3D shapes of palmprint and hand by projecting full-field composite color sinusoidal fringe patterns and capturing the corresponding color texture information. A 3D imaging system is designed to capture and process the full-field composite color fringe patterns on the hand surface. Composite color fringe patterns having the optimum three fringe numbers are generated by software and projected onto the surface of the human hand by a digital light processing projector. From another viewpoint, a color CCD camera captures the deformed fringe patterns and saves them for postprocessing. After compensating for the cross talk and chromatic aberration between color channels, three fringe patterns are extracted from the three color channels of a captured composite color image. Wrapped phase information can be calculated from the sinusoidal fringe patterns with high precision. At the same time, the absolute phase of each pixel is determined by the optimum three-fringe selection method. After building up the relationship between the absolute phase map and 3D shape data, the 3D palmprint and hand are obtained. Color texture information can be directly captured or demodulated from the captured composite fringe pattern images. Experimental results show that the proposed method and system can yield accurate 3D shape and color texture information of the palmprint and hand shape.
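
    The wrapped-phase step rests on standard three-step phase-shifting algebra: for fringe intensities I_k = A + B cos(phi + 2*pi*k/3), k = 0, 1, 2, the phase follows from an arctangent of intensity differences. The numpy sketch below verifies this identity on a synthetic phase ramp; the paper's optimum three-fringe unwrapping and color-channel demodulation are not reproduced here.

```python
import numpy as np

def wrapped_phase(i0, i1, i2):
    """Three-step phase-shifting: recover the wrapped phase from
    I_k = A + B*cos(phi + 2*pi*k/3), k = 0, 1, 2."""
    return np.arctan2(np.sqrt(3.0) * (i2 - i1), 2.0 * i0 - i1 - i2)

# Synthetic check on a known phase ramp (A, B arbitrary).
phi = np.linspace(-np.pi + 0.01, np.pi - 0.01, 200)
A, B = 0.5, 0.4
shots = [A + B * np.cos(phi + 2 * np.pi * k / 3) for k in range(3)]
rec = wrapped_phase(*shots)
print(np.max(np.abs(rec - phi)))  # close to zero
```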

  20. Projection radiography. Chapter 6

    Energy Technology Data Exchange (ETDEWEB)

    Poletti, J. L. [UNITEC Institute of Technology, Auckland (New Zealand)

    2014-09-15

    In its simplest form, X ray imaging is the collection of attenuation shadows that are projected from an ideal X ray point source on to an image receptor. This simple form is true for all X ray imaging modalities, including complex ones that involve source and receptor movement, such as computed tomography (CT). This simplified view, however, is made vastly more complex by the non-ideal point source, by the consequences of projecting a 3-D object on to a 2-D detector and by the presence of scattered radiation, generated within the patient, which will degrade any image that is captured.
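
    The ideal-point-source picture reduces each detector pixel to a line integral of attenuation. A minimal numpy sketch of this Beer-Lambert projection follows, deliberately ignoring the focal-spot blur and scattered radiation the chapter goes on to discuss; all coefficient values are illustrative.

```python
import numpy as np

# Idealized projection: each detector pixel records the attenuated
# intensity I = I0 * exp(-sum(mu_i * t_i)) along its ray (Beer-Lambert).
mu = np.array([[0.2, 0.5],    # attenuation coefficients per voxel (1/cm)
               [0.2, 0.0]])
t = 1.0                        # path length through each voxel (cm)
I0 = 1000.0                    # unattenuated intensity per ray

# Project along axis 0: one ray per column of the object.
I = I0 * np.exp(-mu.sum(axis=0) * t)
print(I)  # column 0 traverses 0.4 of total mu*t, column 1 traverses 0.5
```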

  1. Detection of pulmonary nodules at paediatric CT: maximum intensity projections and axial source images are complementary

    International Nuclear Information System (INIS)

    Kilburn-Toppin, Fleur; Arthurs, Owen J.; Tasker, Angela D.; Set, Patricia A.K.

    2013-01-01

    Maximum intensity projection (MIP) images might be useful in helping to differentiate small pulmonary nodules from adjacent vessels on thoracic multidetector CT (MDCT). The aim was to evaluate the benefits of axial MIP images over axial source images for the paediatric chest in an interobserver variability study. We included 46 children with extra-pulmonary solid organ malignancy who had undergone thoracic MDCT. Three radiologists independently read 2-mm axial and 10-mm MIP image datasets, recording the number of nodules, size and location, overall time taken and confidence. There were 83 nodules (249 total reads among three readers) in 46 children (mean age 10.4 ± 4.98 years, range 0.3-15.9 years; 24 boys). Consensus read was used as the reference standard. Overall, three readers recorded significantly more nodules on MIP images (228 vs. 174; P < 0.05), improving sensitivity from 67% to 77.5% (P < 0.05) but with lower positive predictive value (96% vs. 85%, P < 0.005). MIP images took significantly less time to read (71.6 ± 43.7 s vs. 92.9 ± 48.7 s; P < 0.005) but did not improve confidence levels. Using 10-mm axial MIP images for nodule detection in the paediatric chest enhances diagnostic performance, improving sensitivity and reducing reading time when compared with conventional axial thin-slice images. Axial MIP and axial source images are complementary in thoracic nodule detection. (orig.)
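
    A slab MIP is simply a maximum taken over consecutive thin sections. The numpy sketch below collapses 2-mm axial sections into thicker MIPs; grouping five sections per 10-mm slab is an assumption about how such MIPs are typically composed, not a detail reported by the study.

```python
import numpy as np

def slab_mip(volume, slab_slices, axis=0):
    """Collapse each group of `slab_slices` consecutive slices into one
    maximum intensity projection along `axis`."""
    n = volume.shape[axis] // slab_slices * slab_slices
    vol = np.moveaxis(volume, axis, 0)[:n]
    slabs = vol.reshape(-1, slab_slices, *vol.shape[1:])
    return slabs.max(axis=1)

# 2-mm sections -> 10-mm MIPs: five sections per slab (assumed).
vol = np.random.rand(20, 64, 64)
mips = slab_mip(vol, 5)
print(mips.shape)  # (4, 64, 64)
```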

  2. Evaluation of radiological workstations and web-browser-based image distribution clients for a PACS project in hands-on workshops

    International Nuclear Information System (INIS)

    Boehm, Thomas; Handgraetinger, Oliver; Voellmy, Daniel R.; Marincek, Borut; Wildermuth, Simon; Link, Juergen; Ploner, Ricardo

    2004-01-01

    The methodology and outcome of a hands-on workshop for the evaluation of PACS (picture archiving and communication system) software for a multihospital PACS project are described. The following radiological workstations and web-browser-based image distribution software clients were evaluated as part of a multistep evaluation of PACS vendors in March 2001: Impax DS 3000 V 4.1/Impax Web1000 (Agfa-Gevaert, Mortsel, Belgium); PathSpeed V 8.0/PathSpeed Web (GE Medical Systems, Milwaukee, Wis., USA); ID Report/ID Web (Image Devices, Idstein, Germany); EasyVision DX/EasyWeb (Philips Medical Systems, Eindhoven, Netherlands); and MagicView 1000 VB33a/MagicWeb (Siemens Medical Systems, Erlangen, Germany). A set of anonymized DICOM test data was provided to enable direct image comparison. Radiologists (n=44) evaluated the radiological workstations and nonradiologists (n=53) evaluated the image distribution software clients using different questionnaires. One vendor was not able to import the provided DICOM data set. Another vendor had problems in displaying imported cross-sectional studies in the correct stack order. Three vendors (Agfa-Gevaert, GE, Philips) presented server-client solutions with web access. Two (Siemens, Image Devices) presented stand-alone solutions. The highest scores in the class of radiological workstations were achieved by ID Report from Image Devices (p<0.005). In the class of image distribution clients, the differences were statistically not significant. Questionnaire-based evaluation was shown to be useful for guaranteeing systematic assessment. The workshop was a great success in raising interest in the PACS project in a large group of future clinical users. The methodology used in the present study may be useful for other hospitals evaluating PACS. (orig.)

  3. IDP camp evolvement analysis in Darfur using VHSR optical satellite image time series and scientific visualization on virtual globes

    Science.gov (United States)

    Tiede, Dirk; Lang, Stefan

    2010-11-01

    In this paper we focus on the application of transferable, object-based image analysis algorithms for dwelling extraction in a camp for internally displaced people (IDP) in Darfur, Sudan, along with innovative means for scientific visualisation of the results. Three very high spatial resolution satellite images (QuickBird: 2002, 2004, 2008) were used for: (1) extracting different types of dwellings and (2) calculating and visualizing added-value products such as dwelling density and camp structure. The results were visualized on virtual globes (Google Earth and ArcGIS Explorer), with the analysis results (analytical 3D views) transformed into the third dimension (z-value). Data formats depend on the virtual globe software and include KML/KMZ (Keyhole Markup Language) and ESRI 3D shapefiles streamed as an ArcGIS Server-based globe service. In addition, means for improving the overall performance of automated dwelling extraction using grid computing techniques are discussed, using examples from a similar study.

  4. Some key techniques of SPOT-5 image processing in new national land and resources investigation project

    Science.gov (United States)

    Xue, Changsheng; Li, Qingquan; Li, Deren

    2004-02-01

    In 1988, detailed information on land resources was investigated in China. Fourteen years later, the situation had changed considerably, making a second detailed land resource investigation necessary. On this basis, the New National Land and Resources Investigation Project in China, which will last 12 years, was started in 1999. The project is directly under the administration of the Ministry of Land and Resources (MLR). It was organized and implemented by the China Geological Survey, the China Land Surveying and Planning Institute (CLSPI) and the Information Center of MLR. It is a grand, cross-century project supported by the Central Finance, based on state and public interests and of strategic character. Up to now, eight subprojects have produced preliminary achievements: "Land Use Dynamic Monitoring by Remote Sensing," "Arable Land Resource Investigation," "Rural Collective Land Property Right Investigation," "Establishment of Public Consulting Standardization of Cadastral Information," "Land Resource Fundamental Maps and Data Updating," "Urban Land Price Investigation and Intensive Utilization Potential Capacity Evaluation," "Farmland Classification, Gradation, and Evaluation," and "Land Use Database Construction at City or County Level." In this project, SPOT-1/2/4 and Landsat-7 TM data have been the main data resources for monitoring land use dynamic change. IRS, CBERS-2, and IKONOS data have also been tested in small areas. In 2002, SPOT-5 data, whose spatial resolution is 2.5 meters for the panchromatic image and 10 meters for the multispectral bands, were applied to update the land use base map at the 1:10000 scale in 26 Chinese cities. The purpose of this paper is to communicate the experience of SPOT-5 image processing with colleagues.

  5. Domain-specific markup languages and descriptive metadata: their functions in scientific resource discovery

    Directory of Open Access Journals (Sweden)

    Marcia Lei Zeng

    2010-01-01

    While metadata has been a strong focus within information professionals' publications, projects, and initiatives during the last two decades, a significant number of domain-specific markup languages have also been developing on a parallel path at the same rate as metadata standards; yet, they do not receive comparable attention. This essay discusses the functions of these two kinds of approaches in scientific resource discovery and points out their potential complementary roles through appropriate interoperability approaches.

  6. Sub-Angstrom microscopy through incoherent imaging and image reconstruction

    International Nuclear Information System (INIS)

    Pennycook, S.J.; Jesson, D.E.; Chisholm, M.F.; Ferridge, A.G.; Seddon, M.J.

    1992-03-01

    Z-contrast scanning transmission electron microscopy (STEM) with a high-angle annular detector breaks the coherence of the imaging process, and provides an incoherent image of a crystal projection. Even in the presence of strong dynamical diffraction, the image can be accurately described as a convolution between an object function, sharply peaked at the projected atomic sites, and the probe intensity profile. Such an image can be inverted intuitively without the need for model structures, and therefore provides the important capability to reveal unanticipated interfacial arrangements. It represents a direct image of the crystal projection, revealing the location of the atomic columns and their relative high-angle scattering power. Since no phase is associated with a peak in the object function or the contrast transfer function, extension to higher resolution is also straightforward. Image restoration techniques such as maximum entropy, in conjunction with the 1.3 Angstrom probe anticipated for a 300 kV STEM, appear to provide a simple and robust route to the achievement of sub-Angstrom resolution electron microscopy
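
    The incoherent-imaging statement has a compact numerical form: the recorded image is the object function (delta-like peaks at the projected atomic columns, weighted by their scattering power) convolved with the probe intensity profile. A 1-D numpy illustration with a 1.3 Angstrom FWHM probe follows; the Gaussian probe shape and the column positions and weights are illustrative assumptions.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)           # position (Angstrom)
dx = x[1] - x[0]

# Object function: sharp peaks at column sites, weighted by
# high-angle scattering power (positions/weights invented).
obj = np.zeros_like(x)
for col, weight in [(-2.0, 1.0), (0.0, 0.7), (2.0, 1.0)]:
    obj[np.argmin(np.abs(x - col))] = weight

# Probe intensity profile: Gaussian with 1.3 Angstrom FWHM,
# normalized to unit integrated intensity.
fwhm = 1.3
sigma = fwhm / 2.355
probe = np.exp(-x**2 / (2 * sigma**2))
probe /= probe.sum() * dx

# Incoherent image = object function convolved with probe intensity.
image = np.convolve(obj, probe, mode="same") * dx
print(x[np.argmax(image)])  # the brightest peak sits on an atomic column
```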

  7. MR-guided PET motion correction in LOR space using generic projection data for image reconstruction with PRESTO

    International Nuclear Information System (INIS)

    Scheins, J.; Ullisch, M.; Tellmann, L.; Weirich, C.; Rota Kops, E.; Herzog, H.; Shah, N.J.

    2013-01-01

    The BrainPET scanner from Siemens, designed as a hybrid MR/PET system for simultaneous acquisition of both modalities, provides high-resolution PET images with an optimum resolution of 3 mm. However, significant head motion often compromises the achievable image quality, e.g. in neuroreceptor studies of the human brain. This limitation can be overcome by tracking the head motion and accurately correcting the measured Lines-of-Response (LORs). For this purpose, we present a novel method which advantageously combines MR-guided motion tracking with the capabilities of the reconstruction software PRESTO (PET Reconstruction Software Toolkit) to convert motion-corrected LORs into highly accurate generic projection data. In this way, the high-resolution PET images achievable with PRESTO can also be obtained in the presence of severe head motion.

  8. Laser display system for multi-depth screen projection scenarios.

    Science.gov (United States)

    La Torre, J Pablo; Mayes, Nathan; Riza, Nabeel A

    2017-11-10

    Proposed is a laser projection display system that uses an electronically controlled variable focus lens (ECVFL) to achieve sharp and in-focus image projection over multi-distance three-dimensional (3D) conformal screens. The system also functions as an embedded distance sensor that enables 3D mapping of the multi-level screen platform before the desired laser scanned beam focused/defocused projected spot sizes are matched to the different localized screen distances on the 3D screen. Compared to conventional laser scanning and spatial light modulator (SLM) based projection systems, the proposed design offers in-focus, non-distorted projection over a multi-distance screen zone with varying depths. An experimental projection system for a screen depth variation of 65 cm is demonstrated using a 633 nm laser beam, 3 kHz scan speed galvo-scanning mirrors, and a liquid-based ECVFL. As a basic demonstration, an in-house developed MATLAB-based graphical user interface is deployed to work along with the laser projection display, enabling user inputs such as text strings or predefined image projection. The user can specify projection screen distance, scanned laser linewidth, projected text font size, projected image dimensions, and laser scanning rate. Projected images are shown highlighting the 3D control capabilities of the display, including the production of a non-distorted image onto two depths versus a distorted image via dominant prior-art projection methods.

  9. Renal artery origins: best angiographic projection angles.

    Science.gov (United States)

    Verschuyl, E J; Kaatee, R; Beek, F J; Patel, N H; Fontaine, A B; Daly, C P; Coldwell, D M; Bush, W H; Mali, W P

    1997-10-01

    To determine the best projection angles for imaging the renal artery origins in profile, a mathematical model of the anatomy at the renal artery origins in the transverse plane was used to analyze the amount of aortic lumen that projects over the renal artery origins at various projection angles. Computed tomographic (CT) angiographic data about the location of 400 renal artery origins in 200 patients were statistically analyzed. In patients with an abdominal aortic diameter no larger than 3.0 cm, approximately 0.5 mm of the proximal part of the renal artery and origin may be hidden from view if there is a projection error of +/-10 degrees from the ideal image. A combination of anteroposterior and 20 degrees and 40 degrees left anterior oblique projections resulted in a 92% yield of images that adequately profiled the renal artery origins. Right anterior oblique projections resulted in the least useful images. An error in projection angle of +/-10 degrees is acceptable for angiographic imaging of the renal artery origins. Patient sex, site of interest (left or right artery), and local diameter of the abdominal aorta are important factors to consider.

  10. A Methodology and Implementation for Annotating Digital Images for Context-appropriate Use in an Academic Health Care Environment

    Science.gov (United States)

    Goede, Patricia A.; Lauman, Jason R.; Cochella, Christopher; Katzman, Gregory L.; Morton, David A.; Albertine, Kurt H.

    2004-01-01

    Use of digital medical images has become common over the last several years, coincident with the release of inexpensive, mega-pixel quality digital cameras and the transition to digital radiology operation by hospitals. One problem that clinicians, medical educators, and basic scientists encounter when handling images is the difficulty of using business and graphic arts commercial-off-the-shelf (COTS) software in multicontext authoring and interactive teaching environments. The authors investigated and developed software-supported methodologies to help clinicians, medical educators, and basic scientists become more efficient and effective in their digital imaging environments. The software that the authors developed provides the ability to annotate images based on a multispecialty methodology for annotation and visual knowledge representation. This annotation methodology is designed by consensus, with contributions from the authors and physicians, medical educators, and basic scientists in the Departments of Radiology, Neurobiology and Anatomy, Dermatology, and Ophthalmology at the University of Utah. The annotation methodology functions as a foundation for creating, using, reusing, and extending dynamic annotations in a context-appropriate, interactive digital environment. The annotation methodology supports the authoring process as well as output and presentation mechanisms. The annotation methodology is the foundation for a Windows implementation that allows annotated elements to be represented as structured eXtensible Markup Language and stored separate from the image(s). PMID:14527971

  11. The Advanced Rapid Imaging and Analysis (ARIA) Project: Status of SAR products for Earthquakes, Floods, Volcanoes and Groundwater-related Subsidence

    Science.gov (United States)

    Owen, S. E.; Yun, S. H.; Hua, H.; Agram, P. S.; Liu, Z.; Sacco, G. F.; Manipon, G.; Linick, J. P.; Fielding, E. J.; Lundgren, P.; Farr, T. G.; Webb, F.; Rosen, P. A.; Simons, M.

    2017-12-01

    The Advanced Rapid Imaging and Analysis (ARIA) project for Natural Hazards is focused on rapidly generating high-level geodetic imaging products and placing them in the hands of the solid earth science and local, national, and international natural hazard communities by providing science product generation, exploration, and delivery capabilities at an operational level. Space-based geodetic measurement techniques including Interferometric Synthetic Aperture Radar (InSAR), differential Global Positioning System, and SAR-based change detection have become critical additions to our toolset for understanding and mapping the damage and deformation caused by earthquakes, volcanic eruptions, floods, landslides, and groundwater extraction. Up until recently, processing of these data sets has been handcrafted for each study or event and has not generated products rapidly and reliably enough for response to natural disasters or for timely analysis of large data sets. The ARIA project, a joint venture co-sponsored by the California Institute of Technology and by NASA through the Jet Propulsion Laboratory, has been capturing the knowledge applied to these responses and building it into an automated infrastructure to generate imaging products in near real-time that can improve situational awareness for disaster response. In addition to supporting the growing science and hazard response communities, the ARIA project has developed the capabilities to provide automated imaging and analysis capabilities necessary to keep up with the influx of raw SAR data from geodetic imaging missions such as ESA's Sentinel-1A/B, now operating with repeat intervals as short as 6 days, and the upcoming NASA NISAR mission. We will present the progress and results we have made on automating the analysis of Sentinel-1A/B SAR data for hazard monitoring and response, with emphasis on recent developments and end user engagement in flood extent mapping and deformation time series for both volcano

  12. Analytical simulation platform describing projections in computed tomography systems

    International Nuclear Information System (INIS)

    Youn, Hanbean; Kim, Ho Kyung

    2013-01-01

    To reduce the patient dose, several approaches, such as spectral imaging using photon counting detectors and statistical image reconstruction, are being considered. Although image-reconstruction algorithms may significantly enhance image quality in reconstructed images at low dose, true signal-to-noise properties are mainly determined by image quality in the projections. We are developing an analytical simulation platform describing projections, to investigate how the quantum-interaction physics in each component of CT systems affects image quality in projections. This simulator will be very useful for economical design and optimization of CT systems, as well as for the development of novel image-reconstruction algorithms. In this study, we present the progress of development of the simulation platform, with an emphasis on the theoretical framework describing the generation of projection data. We have prepared the analytical simulation platform describing projections in computed tomography systems. The remaining work before the meeting includes the following: each stage in the cascaded signal-transfer model for obtaining projections will be validated by Monte Carlo simulations; we will build up energy-dependent scatter and pixel-crosstalk kernels and show their effects on image quality in projections and reconstructed images; and we will investigate the effects of projections obtained under various imaging conditions and system (or detector) operation parameters on reconstructed images. It is challenging to include the interaction physics of photon-counting detectors in the simulation platform. Detailed descriptions of the simulator will be presented, with discussions on its performance and limitations as well as Monte Carlo validations. Computational cost will also be addressed in detail. The proposed method in this study is simple and can be used conveniently in a lab environment.
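
    At its core, the projection stage of such a platform line-integrates an object model onto a detector. Below is a deliberately minimal numpy stand-in that projects a pixelized phantom at 0 and 90 degrees by axis sums; a real cascaded model of the kind described above would add source spectra, detector response, scatter kernels, and noise stages.

```python
import numpy as np

# Minimal parallel-beam forward projector: line-integrate a pixelized
# object at 0 and 90 degrees (sums along the image axes).
phantom = np.zeros((64, 64))
phantom[20:44, 28:36] = 1.0     # a small attenuating block (invented)

proj_0 = phantom.sum(axis=0)    # view with rays along the rows
proj_90 = phantom.sum(axis=1)   # view with rays along the columns

# Consistency check: every view integrates the same total attenuation.
print(proj_0.sum(), proj_90.sum())
```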

  13. Comparison analysis between filtered back projection and algebraic reconstruction technique on microwave imaging

    Science.gov (United States)

    Ramadhan, Rifqi; Prabowo, Rian Gilang; Aprilliyani, Ria; Basari

    2018-02-01

    The number of victims of acute cancer and tumors grows each year, and cancer has become one of the leading causes of human deaths in the world. Cancer or tumor tissue cells are cells that grow abnormally and take over and damage the surrounding tissues. Cancers or tumors do not have definite symptoms in their early stages and can even attack tissues deep inside the body, where they are not identifiable by visual human observation. Therefore, an early detection system that is cheap, quick, simple, and portable is essentially required to anticipate the further development of cancer or tumors. Among the available modalities, microwave imaging is considered a cheap, simple, and portable method. There are at least two simple image reconstruction algorithms, i.e., Filtered Back Projection (FBP) and the Algebraic Reconstruction Technique (ART), which have been adopted in some common modalities. In this paper, both algorithms are compared by reconstructing the image of an artificial tissue model (i.e., phantom) that has two different dielectric distributions. We addressed two performance comparisons, namely qualitative and quantitative analysis. Qualitative analysis includes the smoothness of the image and the success in distinguishing dielectric differences by observing the image with the human eye. Quantitative analysis includes histogram, Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) calculations. As a result, the quantitative parameters of FBP show better values than those of ART. However, ART is more capable of distinguishing two different dielectric values than FBP, due to the higher contrast and wider grayscale distribution in ART.
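
    The two algorithms differ fundamentally: FBP filters the projections and smears them back analytically, while ART solves the projection equations iteratively. The sketch below implements ART in its simplest (Kaczmarz) form, cycling over ray equations a_i . x = b_i and projecting the estimate onto each hyperplane; the toy 2x2 image with row and column ray sums is illustrative, not the paper's microwave geometry.

```python
import numpy as np

def art(A, b, iters=500, relax=1.0):
    """Kaczmarz-style ART: for each ray equation a_i . x = b_i, project
    the current estimate onto the corresponding hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# 2x2 "image" flattened to [x0, x1, x2, x3]; two horizontal and two
# vertical rays give the measured projections b.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true
x_rec = art(A, b)
print(np.round(x_rec, 3))
```

    With a consistent system and a zero starting image, the iteration converges to the minimum-norm solution, which here coincides with the true image.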

  14. THE EUROSDR PROJECT "RADIOMETRIC ASPECTS OF DIGITAL PHOTOGRAMMETRIC IMAGES" – RESULTS OF THE EMPIRICAL PHASE

    Directory of Open Access Journals (Sweden)

    E. Honkavaara

    2012-09-01

    This article presents the empirical research carried out in the context of the multi-site EuroSDR project "Radiometric aspects of digital photogrammetric images" and provides highlights of the results. The investigations have considered the vicarious radiometric and spatial resolution validation and calibration of the sensor system, radiometric processing of the image blocks either by performing relative radiometric block equalization or into absolutely reflectance-calibrated products, and finally aspects of practical applications on NDVI layer generation and tree species classification. The data sets were provided by Leica Geosystems ADS40 and Intergraph DMC, and the participants represented stakeholders in National Mapping Authorities, software development and research. The investigations proved the stability and quality of the evaluated imaging systems with respect to radiometry and the optical system. The first new-generation methods for reflectance calibration and equalization of photogrammetric image block data provided promising accuracy and were also functional from the productivity and usability points of view. The reflectance calibration methods provided up to 5% accuracy without any ground reference. Application-oriented results indicated that automatic interpretation methods will benefit from the optimal use of radiometrically accurate multi-view photogrammetric imagery.
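
    The NDVI layer mentioned above is a simple band ratio, NDVI = (NIR - Red) / (NIR + Red), which is exactly why it depends on accurate reflectance calibration: errors in either band propagate directly into the ratio. A small numpy sketch with illustrative reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from reflectance-
    calibrated near-infrared and red bands (eps avoids division by
    zero over dark pixels)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

red = np.array([[0.05, 0.30], [0.08, 0.25]])   # invented reflectances
nir = np.array([[0.45, 0.32], [0.50, 0.27]])
print(np.round(ndvi(nir, red), 2))
```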

  15. Lesion insertion in the projection domain: Methods and initial results

    International Nuclear Information System (INIS)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia

    2015-01-01

    Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion.
Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically
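The projection-domain pipeline above (segment a lesion, forward-project it, add the lesion projections to the patient projections, then reconstruct) can be sketched for an idealized two-view parallel-beam geometry. This toy example does not reproduce the commercial scanner's axial/helical geometry, focal-spot motion, or beam-hardening energy modeled in the study:

```python
import numpy as np

def forward_project(img, angles_deg):
    """Idealized parallel-beam forward projection: line sums at each angle.
    Restricted to multiples of 90 degrees to keep the sketch dependency-free."""
    sino = []
    for a in angles_deg:
        k = (int(a) // 90) % 4
        sino.append(np.rot90(img, k).sum(axis=0))
    return np.array(sino)

# Uniform "patient" background and a segmented lesion (hypothetical values).
patient = np.full((32, 32), 1.0)
lesion = np.zeros((32, 32))
lesion[10:14, 20:24] = 0.5

angles = [0, 90]
sino_patient = forward_project(patient, angles)
sino_lesion = forward_project(lesion, angles)

# Hybrid projections: lesion projections inserted (added) into the patient
# projections; reconstructing these yields the hybrid image.
sino_hybrid = sino_patient + sino_lesion
```

Because attenuation line integrals are additive, inserting the lesion in the projection domain lets the subsequent reconstruction apply the same scan and reconstruction parameters to lesion and background alike.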

  16. Lesion insertion in the projection domain: Methods and initial results

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia, E-mail: mccollough.cynthia@mayo.edu [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2015-12-15

    Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion.
Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically

  17. Lesion insertion in the projection domain: Methods and initial results.

    Science.gov (United States)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia

    2015-12-01

    To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically acquired for the ACR

  18. On proton CT reconstruction using MVCT-converted virtual proton projections

    Energy Technology Data Exchange (ETDEWEB)

    Wang Dongxu; Mackie, T. Rockwell; Tome, Wolfgang A. [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 and Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, Iowa 52242 (United States); Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 and Morgridge Institute of Research, University of Wisconsin, Madison, Wisconsin 53715 (United States); Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 and Oncophysics Institute, Albert Einstein College of Medicine, Yeshiva University, Bronx, New York 10461 (United States)

    2012-06-15

    Purpose: To describe a novel methodology of converting megavoltage x-ray projections into virtual proton projections that are otherwise missing due to the proton range limit. These converted virtual proton projections can be used in the reconstruction of proton computed tomography (pCT). Methods: Relations exist between proton projections and multispectral megavoltage x-ray projections for human tissue. Based on these relations, these tissues can be categorized into: (a) adipose tissue; (b) nonadipose soft tissues; and (c) bone. These three tissue categories can be visibly identified on a regular megavoltage x-ray computed tomography (MVCT) image. With an MVCT image and its projection data available, the x-ray projections through heterogeneous anatomy can be converted to the corresponding proton projections using predetermined calibration curves for individual materials, aided by a coarse segmentation on the x-ray CT image. To show the feasibility of this approach, mathematical simulations were carried out. The converted proton projections, plotted on a proton sinogram, were compared to the simulated ground truth. Proton stopping power images were reconstructed using either the virtual proton projections only or a blend of physically available proton projections and virtual proton projections that make up for those missing due to the range limit. These images were compared to a reference image reconstructed from theoretically calculated proton projections. Results: The converted virtual projections had an uncertainty of ±0.8% compared to the calculated ground truth. Proton stopping power images reconstructed using a blend of converted virtual projections (48%) and physically available projections (52%) had an uncertainty of ±0.86% compared with that reconstructed from theoretically calculated projections.
Reconstruction solely from converted virtual proton projections had an uncertainty of ±1.1% compared with that reconstructed from theoretical projections.
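The conversion step can be illustrated with a toy per-tissue lookup: each segmented tissue category gets its own predetermined calibration curve mapping an MVCT x-ray path integral to a virtual proton projection value. The curves below are invented for illustration only; the real curves would be determined for adipose tissue, nonadipose soft tissues, and bone as described above:

```python
import numpy as np

# Hypothetical per-tissue calibration curves (x-ray path integral -> proton
# projection value). Real curves would be predetermined from tissue data.
CALIBRATION = {
    "adipose": (np.array([0.0, 10.0, 20.0]), np.array([0.0, 9.5, 19.0])),
    "soft":    (np.array([0.0, 10.0, 20.0]), np.array([0.0, 10.2, 20.4])),
    "bone":    (np.array([0.0, 10.0, 20.0]), np.array([0.0, 13.0, 26.0])),
}

def to_virtual_proton(xray_path, tissue):
    """Convert one MVCT projection value into a virtual proton projection
    value via piecewise-linear interpolation on the tissue's curve."""
    x, y = CALIBRATION[tissue]
    return float(np.interp(xray_path, x, y))

# A ray segmented as bone converts differently than one through soft tissue.
print(to_virtual_proton(10.0, "bone"), to_virtual_proton(10.0, "soft"))
```

In practice a single ray crosses several tissue categories, so the coarse MVCT segmentation is used to split each path integral before applying the per-material curves.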

  19. On proton CT reconstruction using MVCT-converted virtual proton projections

    International Nuclear Information System (INIS)

    Wang Dongxu; Mackie, T. Rockwell; Tomé, Wolfgang A.

    2012-01-01

    Purpose: To describe a novel methodology of converting megavoltage x-ray projections into virtual proton projections that are otherwise missing due to the proton range limit. These converted virtual proton projections can be used in the reconstruction of proton computed tomography (pCT). Methods: Relations exist between proton projections and multispectral megavoltage x-ray projections for human tissue. Based on these relations, these tissues can be categorized into: (a) adipose tissue; (b) nonadipose soft tissues; and (c) bone. These three tissue categories can be visibly identified on a regular megavoltage x-ray computed tomography (MVCT) image. With an MVCT image and its projection data available, the x-ray projections through heterogeneous anatomy can be converted to the corresponding proton projections using predetermined calibration curves for individual materials, aided by a coarse segmentation on the x-ray CT image. To show the feasibility of this approach, mathematical simulations were carried out. The converted proton projections, plotted on a proton sinogram, were compared to the simulated ground truth. Proton stopping power images were reconstructed using either the virtual proton projections only or a blend of physically available proton projections and virtual proton projections that make up for those missing due to the range limit. These images were compared to a reference image reconstructed from theoretically calculated proton projections. Results: The converted virtual projections had an uncertainty of ±0.8% compared to the calculated ground truth. Proton stopping power images reconstructed using a blend of converted virtual projections (48%) and physically available projections (52%) had an uncertainty of ±0.86% compared with that reconstructed from theoretically calculated projections. Reconstruction solely from converted virtual proton projections had an uncertainty of ±1.1% compared with that reconstructed from theoretical projections. If

  20. Fundamental remote science research program. Part 2: Status report of the mathematical pattern recognition and image analysis project

    Science.gov (United States)

    Heydorn, R. P.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for the purpose of making selected inferences about the Earth. This report summarizes the progress that has been made toward this program goal by each of the principal investigators in the MPRIA Program.

  1. ImageX: new and improved image explorer for astronomical images and beyond

    Science.gov (United States)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5-capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images, thereby taking up tens of TB of spinning disk space even though only a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA, and we found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full-resolution JPEG image for each raw and reduced ODI FITS image before producing a JPEG tile set, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example, on tabular image listings, or in views allowing quick perusal of a set of thumbnails or other image-sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client-side Model/View code (instead of depending on the backend PHP Model/View/Controller code previously used), uses OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images, thereby decreasing the time-to-first-byte latency by a few orders of magnitude.
We plan to extend ImageX for non-FITS images including electron microscopy and radiology scan

  2. Radiology Architecture Project Primer.

    Science.gov (United States)

    Sze, Raymond W; Hogan, Laurie; Teshima, Satoshi; Davidson, Scott

    2017-12-19

    The rapid pace of technologic advancement and increasing expectations for patient- and family-friendly environments make it common for radiology leaders to be involved in imaging remodel and construction projects. Most radiologists and business directors lack formal training in architectural and construction processes but are expected to play significant and often leading roles in all phases of an imaging construction project. Avoidable mistakes can result in significant increased costs and scheduling delays; knowledgeable participation and communication can result in a final product that enhances staff workflow and morale and improves patient care and experience. This article presents practical guidelines for preparing for and leading a new imaging architectural and construction project. We share principles derived from the radiology and nonradiology literature and our own experience over the past decade completely remodeling a large pediatric radiology department and building a full-service outpatient imaging center. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  3. THERMAL IMAGING OF Si, GaAs AND GaN -BASED DEVICES WITHIN THE MICROTHERM PROJECT

    OpenAIRE

    Pavageau , S.; Tessier , G.; Filloy , C.; Jerosolimski , G.; Fournier , D.; Polignano , M.-L.; Mica , I.; Cassette , S.; Aubry , R.; Durand , O.

    2005-01-01

    Submitted on behalf of EDA Publishing Association (http://irevues.inist.fr/handle/2042/5920); International audience; Within the European project Microtherm, we have developed a CCD-based thermoreflectance system which delivers thermal images of working integrated circuits with high spatial and thermal resolutions (down to 350 nm and 0.1 K, respectively). We illustrate the performance of this set-up on several classes of semiconductor devices including high power transistors and transistor ar...

  4. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications such as medical imaging, televideo conferencing, remote sensing, and document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, grayscale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image or image-stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless compression. The wavelet method used in this project is a lossless compression method, in which the exact original mammography image data can be recovered. In this project, mammography images were digitized using a Vider Sierra Plus digitizer and compressed using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software was used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  5. Correction for polychromatic X-ray image distortion in computer tomography images

    International Nuclear Information System (INIS)

    1979-01-01

    A method and apparatus are described which correct the polychromatic distortion of CT images that is produced by the non-linear interaction of body constituents with a polychromatic X-ray beam. A CT image is processed to estimate the proportion of the attenuation coefficients of the constituents in each pixel element. A multiplicity of projections for each constituent are generated from the original image and are combined utilizing a multidimensional polynomial which approximates the non-linear interaction involved. An error image is then generated from the combined projections and is subtracted from the original image to correct for the polychromatic distortion. (Auth.)
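A highly simplified sketch of the described correction, assuming two constituents, a single parallel-beam view, and an invented second-order polynomial for the non-linear interaction (real coefficients would be fitted to the scanner's spectrum and the constituents' attenuation behavior):

```python
import numpy as np

def project(img):
    # Idealized single-view parallel projection: line sums along columns.
    return img.sum(axis=0)

# Estimated constituent fractions per pixel (hypothetical soft tissue and bone
# maps derived from the initial CT image).
soft = np.full((16, 16), 0.02)
bone = np.zeros((16, 16))
bone[6:10, 6:10] = 0.05

# One projection per constituent, generated from the original image.
p_soft, p_bone = project(soft), project(bone)

# Hypothetical 2-D polynomial modeling the non-linear polychromatic error:
# error(p1, p2) = a*p1*p2 + b*p2**2  (coefficients a, b are illustrative).
a, b = 0.1, 0.05
p_error = a * p_soft * p_bone + b * p_bone ** 2

# Backprojecting p_error over all views would give the error image that is
# subtracted from the original CT image; here we inspect the projection only.
print(p_error.max())
```

The key idea the sketch preserves is that the error is a non-linear function of the per-constituent path integrals, so it vanishes where only one constituent dominates and grows where the constituents overlap along a ray.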

  6. MO-FG-204-08: Optimization-Based Image Reconstruction From Unevenly Distributed Sparse Projection Views

    International Nuclear Information System (INIS)

    Xie, Huiqiao; Yang, Yi; Tang, Xiangyang; Niu, Tianye; Ren, Yi

    2015-01-01

    Purpose: Optimization-based reconstruction has been proposed and investigated for reconstructing CT images from sparse views, so that the radiation dose can be substantially reduced while maintaining acceptable image quality. The investigation has so far focused on reconstruction from evenly distributed sparse views. Recognizing the clinical situations wherein only unevenly distributed sparse views are available, e.g., image-guided radiation therapy, CT perfusion, and multi-cycle cardiovascular imaging, we investigate the performance of optimization-based image reconstruction from unevenly distributed sparse projection views in this work. Methods: The investigation is carried out using the FORBILD and an anthropomorphic head phantom. In the study, 82 views, evenly sorted out from a full (360°) axial CT scan consisting of 984 views, form sub-scan I. Another 82 views are sorted out in a similar manner to form sub-scan II, yielding a CT scan with sparse (164) views at a 1:6 ratio. By shifting the two sub-scans relative to each other in view angulation, a CT scan with unevenly distributed sparse (164) views at a 1:6 ratio is formed. An optimization-based method is implemented to reconstruct images from the unevenly distributed views. Taking the FBP reconstruction from the full scan (984 views) as the reference, the root mean square (RMS) error between the reference and the optimization-based reconstruction is used to evaluate the performance quantitatively. Results: On visual inspection, the optimization-based method outperforms FBP substantially in the reconstruction from unevenly distributed views, which is quantitatively verified by the RMS error gauged globally and in ROIs in both the FORBILD and anthropomorphic head phantoms. The RMS error increases with increasing severity of the uneven angular distribution, especially in the case of the anthropomorphic head phantom.
Conclusion: The optimization-based image reconstruction can save radiation dose up to 12-fold while providing acceptable image quality
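The view-sampling scheme described in the Methods can be reproduced with a few lines of index arithmetic: two evenly spaced 82-view sub-scans are combined, and shifting one sub-scan in view angulation makes the union unevenly distributed (the shift value below is an arbitrary illustration; a shift of 6 would keep the union evenly spaced):

```python
import numpy as np

n_views = 984  # full axial scan
# Sub-scan I: 82 views evenly sorted out of the full scan (every 12th view).
sub1 = np.arange(0, n_views, 12)
# Sub-scan II: another 82 evenly spaced views, shifted in view angulation.
shift = 3  # hypothetical relative shift producing an uneven union
sub2 = (sub1 + shift) % n_views

# Combined: 164 sparse views at a 1:6 ratio overall, unevenly distributed.
views = np.sort(np.concatenate([sub1, sub2]))
gaps = np.diff(views)
print(len(views), gaps.min(), gaps.max())  # gaps alternate between 3 and 9
```

Varying `shift` away from the even-spacing value controls the severity of the uneven angular distribution, which is the quantity the RMS evaluation is plotted against.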

  7. An extension to artifact-free projection overlaps

    International Nuclear Information System (INIS)

    Lin, Jianyu

    2015-01-01

    Purpose: In multipinhole single photon emission computed tomography, the overlapping of projections has been used to increase sensitivity. Avoiding artifacts in the reconstructed image associated with projection overlaps (multiplexing) is a critical issue. In our previous report, two types of artifact-free projection overlaps, i.e., projection overlaps that do not lead to artifacts in the reconstructed image, were formally defined and proved, and were validated via simulations. In this work, a new proposition is introduced to extend the previously defined type-II artifact-free projection overlaps so that a broader range of artifact-free overlaps is accommodated. One practical purpose of the new extension is to design a baffle window multipinhole system with artifact-free projection overlaps. Methods: First, the extended type-II artifact-free overlap was theoretically defined and proved. The new proposition accommodates the situation where the extended type-II artifact-free projection overlaps can be produced with incorrectly reconstructed portions in the reconstructed image. Next, to validate the theory, the extended-type-II artifact-free overlaps were employed in designing the multiplexing multipinhole spiral orbit imaging systems with a baffle window. Numerical validations were performed via simulations, where the corresponding 1-pinhole nonmultiplexing reconstruction results were used as the benchmark for artifact-free reconstructions. The mean square error (MSE) was the metric used for comparisons of noise-free reconstructed images. Noisy reconstructions were also performed as part of the validations. Results: Simulation results show that for noise-free reconstructions, the MSEs of the reconstructed images of the artifact-free multiplexing systems are very similar to those of the corresponding 1-pinhole systems. No artifacts were observed in the reconstructed images. Therefore, the testing results for artifact-free multiplexing systems designed using the

  8. “Una imagen real de la Argentina”: Image-based Counter-narratives through the Walking Archive and the Project Hegemony.

    Directory of Open Access Journals (Sweden)

    Elena Rosauro

    2013-07-01

    Full Text Available After the 1978 World Cup in Argentina, José Alfredo Martínez de Hoz —then Minister of Economy under the military dictatorship— promoted an initiative to publish an article in Time Magazine in which "a real image of Argentina would be given" —in the words of businessman Carlos Pedro Blaquier, closely tied to the military regime. But who defines the extent of reality of an image? Who makes these "real images," and how do they articulate within the construction of national history and identity? In this article, departing from the construction of the past in Argentina through the "real images" produced within economic and artistic institutions, we examine the image-based counter-narratives propounded by Eduardo Molinari through his Walking Archive and the collaborative project Hegemony. These two contemporary artistic projects focus mainly on the last decades of the 20th century in order to give visibility to the existing relations among economic groups, the military, politicians, and the cultural system in Argentina. These relations have provided legitimacy to certain processes of constructing "real" narratives and to certain artistic practices, while rejecting others.

  9. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-02-01

    Full Text Available To solve the problem of inaccuracy when estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality; therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image from multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher-quality reconstructed images than the blind SR method and the bicubic interpolation method.
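The POCS part of the algorithm iteratively projects an HR estimate onto the data-consistency set of each LR observation. A minimal sketch with a 2× block average standing in for the PSF-plus-decimation model (the paper instead estimates the PSF from the LR images with a slant knife-edge method, which is not reproduced here):

```python
import numpy as np

def downsample(hr):
    # Simulated LR observation: 2x2 block averaging, a crude stand-in for
    # PSF blur followed by decimation.
    h, w = hr.shape
    return hr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pocs_step(hr, lr):
    """One projection onto the data-consistency set of a single LR image:
    correct the HR estimate so its simulated observation matches `lr`."""
    residual = lr - downsample(hr)
    # Distribute each LR residual back over its 2x2 HR block.
    return hr + np.kron(residual, np.ones((2, 2)))

# Toy ground truth, its LR observation, and a flat initial HR estimate.
truth = np.arange(16.0).reshape(4, 4)
lr = downsample(truth)
hr = np.zeros((4, 4))
for _ in range(3):
    hr = pocs_step(hr, lr)

# After the iterations the estimate is consistent with the LR data.
print(np.allclose(downsample(hr), lr))
```

With several shifted LR observations, cycling such projection steps over all of them is what drives the POCS estimate toward the intersection of the consistency sets; the quality of the result hinges on how accurately the downsampling operator models the true PSF, which is the paper's contribution.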

  10. Identification of retinal ganglion cells and their projections involved in central transmission of information about upward and downward image motion.

    Directory of Open Access Journals (Sweden)

    Keisuke Yonehara

    Full Text Available The direction of image motion is coded by direction-selective (DS) ganglion cells in the retina. In particular, the ON DS ganglion cells project their axons specifically to terminal nuclei of the accessory optic system (AOS) responsible for the optokinetic reflex (OKR). We recently generated a knock-in mouse in which SPIG1 (SPARC-related protein containing immunoglobulin domains 1)-expressing cells are visualized with GFP, and found that retinal ganglion cells projecting to the medial terminal nucleus (MTN), the principal nucleus of the AOS, comprise SPIG1+ and SPIG1− ganglion cells distributed in distinct mosaic patterns in the retina. Here we examined light responses of these two subtypes of MTN-projecting cells by targeted electrophysiological recordings. SPIG1+ and SPIG1− ganglion cells respond preferentially to upward motion and downward motion, respectively, in the visual field. The direction selectivity of SPIG1+ ganglion cells develops normally in dark-reared mice. The MTN neurons are activated only by optokinetic stimuli of vertical motion, as shown by Fos expression analysis. A combination of genetic labeling and conventional retrograde labeling revealed that axons of SPIG1+ and SPIG1− ganglion cells project to the MTN via different pathways. The axon terminals of the two subtypes are organized into discrete clusters in the MTN. These results suggest that information about upward and downward image motion transmitted by distinct ON DS cells is separately processed in the MTN, if not independently. Our findings provide insights into the neural mechanisms of the OKR and how information about the direction of image motion is deciphered by the AOS.

  11. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    Science.gov (United States)

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield unit (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  12. A nuclear medicine information system that allows reporting and sending images through intranet

    International Nuclear Information System (INIS)

    Anselmi, C.E.; Anselmi, O.E.

    2002-01-01

    A nuclear medicine information system that allows reporting and sending images through an intranet. Aim: This system was developed in order to improve the processes of typing, correcting, verifying, and distributing reports and images, improving the efficiency of the personnel in the nuclear medicine department and reducing the time between the creation of a report and its reading by the referring physician. Materials and Methods: The system runs a web server (Personal Web Server, Microsoft) which serves web pages written in hypertext markup language (HTML) and active server pages (ASP). The database utilized is Microsoft Access 97. All communication between the web server and the database is performed by the programs written in ASP. Integration of patient images is done through a 486 IBM PC running Red Hat Linux, which serves as an intermediary between the isolated nuclear medicine network and the hospital's network. Results: The time from report verification to reading by the referring physician has decreased from approximately 24 hours to 12 hours. It is possible to run queries in the system in order to generate productivity reports or support clinical research. Image storage allows for correlation of current and previous studies. Conclusion: Bureaucratic processes have diminished to a certain extent in the department. Reports are now online as soon as they are verified by the nuclear medicine physician. There is no need to install dedicated software on the viewing stations since the whole system runs on the server.

  13. Classification of cryo electron microscopy images, noisy tomographic images recorded with unknown projection directions, by simultaneously estimating reconstructions and application to an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22

    Science.gov (United States)

    Lee, Junghoon; Zheng, Yili; Yin, Zhye; Doerschuk, Peter C.; Johnson, John E.

    2010-08-01

Cryo electron microscopy is frequently used on biological specimens that contain a mixture of different types of objects. Because the electron beam rapidly destroys the specimen, the beam current is minimized, which leads to noisy images (SNR substantially less than 1), and only one projection image per object (with an unknown projection direction) is collected. For situations where the objects can reasonably be described as coming from a finite set of classes, an approach based on joint maximum likelihood estimation of the reconstruction of each class, followed by use of the reconstructions to label the class of each image, is described and demonstrated on two challenging problems: an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22.
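The labeling step can be sketched simply: under an i.i.d. Gaussian noise model, assigning an image to the class whose reconstruction best explains it (maximum likelihood) reduces to picking the class with the smallest squared residual. A toy 1D sketch (not the authors' implementation, which works on projections of 3D reconstructions):

```python
def classify(image, reconstructions):
    """Label an image with the class whose (projected) reconstruction
    explains it best: under i.i.d. Gaussian noise, maximizing the
    likelihood is minimizing the squared residual."""
    def residual(recon):
        return sum((p - q) ** 2 for p, q in zip(image, recon))
    return min(range(len(reconstructions)), key=lambda k: residual(reconstructions[k]))

classes = [[0.0, 1.0, 0.0], [1.0, 0.0, 1.0]]
noisy = [0.9, 0.2, 1.1]   # hypothetical noisy projection of class 1
print(classify(noisy, classes))  # -> 1
```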

  14. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

This Ph.D. project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: first to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and secondly to use this knowledge to develop image processing...... multiple imaging setups. This makes the system well suited for development of new processing methods and for clinical evaluations, where acquisition of the exact same scan location for multiple methods is important. The second project addressed implementation, development and evaluation of SASB using...... methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D. project co-sponsored by BK Medical ApS, with the commercial goal of improving the image quality of BK Medical's scanners. Currently BK Medical employs a simple conventional delay-and-sum beamformer to generate...
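The conventional delay-and-sum beamformer mentioned above is conceptually simple: for each image point, each channel's signal is sampled at its geometric propagation delay and the samples are summed coherently. A bare-bones sketch with hypothetical 3-channel RF data (integer sample delays; real beamformers interpolate and apodize):

```python
def delay_and_sum(channel_data, delays):
    """Delay-and-sum beamforming for one image point: sample each
    channel at its delay (in samples) and sum coherently."""
    total = 0.0
    for signal, d in zip(channel_data, delays):
        i = int(round(d))
        if 0 <= i < len(signal):
            total += signal[i]
    return total

# Hypothetical 3-channel RF data with an echo aligned by delays 2, 3, 4.
rf = [
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
]
print(delay_and_sum(rf, [2, 3, 4]))  # -> 3.0
```

With the correct delays the echo sums coherently (3.0); wrong delays sum noise incoherently, which is exactly what degrades image quality.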

  15. Optimizing 4DCBCT projection allocation to respiratory bins

    International Nuclear Information System (INIS)

    O’Brien, Ricky T; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J

    2014-01-01

4D cone beam computed tomography (4DCBCT) is an emerging image guidance strategy used in radiotherapy where projections acquired during a scan are sorted into respiratory bins based on the respiratory phase or displacement. 4DCBCT reduces the motion blur caused by respiratory motion but increases streaking artefacts due to projection under-sampling as a result of the irregular nature of patient breathing and the binning algorithms used. For displacement binning the streak artefacts are so severe that displacement binning is rarely used clinically. The purpose of this study is to investigate whether sharing projections between respiratory bins and adjusting the location of respiratory bins in an optimal manner can reduce or eliminate streak artefacts in 4DCBCT images. We introduce a mathematical optimization framework and a heuristic solution method, which we will call the optimized projection allocation algorithm, to determine where to position the respiratory bins and which projections to source from neighbouring respiratory bins. Five 4DCBCT datasets from three patients were used to reconstruct 4DCBCT images. Projections were sorted into respiratory bins using equispaced, equal density and optimized projection allocation. The standard deviation of the angular separation between projections was used to assess streaking, and the consistency of the segmented volume of a fiducial gold marker was used to assess motion blur. The standard deviation of the angular separation between projections using displacement binning and optimized projection allocation was 30%–50% smaller than conventional phase based binning and 59%–76% smaller than conventional displacement binning, indicating more uniformly spaced projections and fewer streaking artefacts. The standard deviation in the marker volume was 20%–90% smaller when using optimized projection allocation than using conventional phase based binning, suggesting more uniform marker segmentation and less motion blur.
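The streaking metric used above, the standard deviation of the angular separation between projections in a bin, can be computed directly; lower values mean more uniformly spaced projections. A minimal sketch with hypothetical projection angles:

```python
import math

def angular_separation_std(angles_deg):
    """Standard deviation of the gaps between consecutive projection
    angles in a respiratory bin; 0 means perfectly uniform spacing."""
    a = sorted(angles_deg)
    gaps = [b - c for b, c in zip(a[1:], a[:-1])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return math.sqrt(var)

# Uniformly spaced projections score 0; clustered ones score higher.
print(angular_separation_std([0, 10, 20, 30, 40]))         # -> 0.0
print(round(angular_separation_std([0, 2, 4, 30, 40]), 2))  # -> 9.8
```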

  16. Imaging arrangement and microscope

    Science.gov (United States)

    Pertsinidis, Alexandros; Chu, Steven

    2015-12-15

An embodiment of the present invention is an imaging arrangement that includes imaging optics, a fiducial light source, and a control system. In operation, the imaging optics separate light into first and second light by wavelength and project the first and second light onto first and second areas within first and second detector regions, respectively. The imaging optics separate fiducial light from the fiducial light source into first and second fiducial light and project the first and second fiducial light onto third and fourth areas within the first and second detector regions, respectively. The control system adjusts alignment of the imaging optics so that the first and second fiducial light projected onto the first and second detector regions maintain relatively constant positions within the first and second detector regions, respectively. Another embodiment of the present invention is a microscope that includes the imaging arrangement.

  17. A 3D Kinematic Measurement of Knee Prosthesis Using X-ray Projection Images

    Science.gov (United States)

    Hirokawa, Shunji; Ariyoshi, Shogo; Hossain, Mohammad Abrar

    We have developed a technique for estimating 3D motion of knee prosthesis from its 2D perspective projections. As Fourier descriptors were used for compact representation of library templates and contours extracted from the prosthetic X-ray images, the entire silhouette contour of each prosthetic component was required. This caused such a problem as our algorithm did not function when the silhouettes of tibio and femoral components overlapped with each other. Here we planned a novel method to overcome it; which was processed in two steps. First, the missing part of silhouette contour due to overlap was interpolated using a free-formed curvature such as Bezier. Then the first step position/orientation estimation was performed. In the next step, a clipping window was set in the projective coordinate so as to separate the overlapped silhouette drawn using the first step estimates. After that the localized library whose templates were clipped in shape was prepared and the second step estimation was performed. Computer model simulation demonstrated sufficient accuracies of position/orientation estimation even for overlapped silhouettes; equivalent to those without overlap.
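The Fourier descriptors used above for compact contour representation treat each contour point (x, y) as a complex number x + iy and keep only low-order DFT coefficients. A minimal sketch (a direct DFT for clarity; the authors' library-template pipeline is far more involved):

```python
import cmath

def fourier_descriptors(contour, n_coeffs):
    """Truncated Fourier descriptors of a closed 2D contour: DFT
    coefficients of the points viewed as complex numbers x + iy."""
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    coeffs = []
    for k in range(n_coeffs):
        c = sum(z[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n)) / n
        coeffs.append(c)
    return coeffs

# A diamond-shaped contour: c[0] is the centroid (here the origin),
# and c[1] carries the dominant radius/orientation of the shape.
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
c = fourier_descriptors(square, 2)
```

Truncating to a few coefficients is what makes the template library compact and the silhouette matching robust to small contour noise.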

  18. XML: Ejemplos de uso (presentación)

    OpenAIRE

    Luján Mora, Sergio

    2011-01-01

XML (eXtensible Markup Language) - An XML application = a markup language = a vocabulary - Examples: DocBook, Chemical Markup Language, Keyhole Markup Language, Mathematical Markup Language, Open Document, Open XML Format, Scalable Vector Graphics, Systems Biology Markup Language.

  19. XML: Ejemplos de uso

    OpenAIRE

    Luján Mora, Sergio

    2011-01-01

XML (eXtensible Markup Language) - An XML application = a markup language = a vocabulary - Examples: DocBook, Chemical Markup Language, Keyhole Markup Language, Mathematical Markup Language, Open Document, Open XML Format, Scalable Vector Graphics, Systems Biology Markup Language.

  20. The Image Quality Translator – A Way to Support Specification of Imaging Requirements

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad; Bech, Mogens

    2015-01-01

    Archives, libraries, and museums run numerous imaging projects to digitize physical works and collections of cultural heritage. This study presents a tool called the 'Image Quality Translator' that is being designed at the Royal Library to support the planning of digitization projects and to make...... the process of specifying and controlling imaging requirements more efficient. The tool seeks to translate between the language used by collection managers and curators to express needs for image quality, and the more technical terms and metrics used by imaging experts and photographers to express...

  1. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
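As a flavor of the kind of operation scikit-image provides (e.g. `skimage.filters.sobel` for edge detection), here is a pure-Python stand-in that runs without the library installed: a central-difference gradient magnitude on a tiny grayscale image. This is a simplified sketch of the idea, not the library's implementation:

```python
import math

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2D grayscale image
    (a simplified stand-in for skimage.filters.sobel)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical step edge: the gradient is nonzero only near the boundary.
img = [[0, 0, 1, 1] for _ in range(4)]
edges = gradient_magnitude(img)
```

With scikit-image available, the equivalent call would simply be `from skimage import filters; filters.sobel(image)` on a NumPy array.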

  2. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  3. MaRIE 1.0: The Matter-Radiation Interactions in Extremes Project, and the Challenge of Dynamic Mesoscale Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, Cris William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Barber, John L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kober, Edward Martin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lookman, Turab [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sandberg, Richard L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shlachter, Jack S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sheffield, Richard L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    The Matter-Radiation Interactions in Extremes project will build the experimental facility for the time-dependent control of dynamic material performance. An x-ray free electron laser at up to 42-keV fundamental energy and with photon pulses down to sub-nanosecond spacing, MaRIE 1.0 is designed to meet the challenges of time-dependent mesoscale materials science. Those challenges will be outlined, the techniques of coherent diffractive imaging and dynamic polycrystalline diffraction described, and the resulting requirements defined for a coherent x-ray source. The talk concludes with the role of the MaRIE project and science in the future.

  4. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    OpenAIRE

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An inv...

  5. A projective surgical navigation system for cancer resection

    Science.gov (United States)

    Gan, Qi; Shao, Pengfei; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Xu, Ronald

    2016-03-01

Near infrared (NIR) fluorescence imaging can provide precise, real-time information about tumor location during a cancer resection surgery. However, many intraoperative fluorescence imaging systems are based on wearable devices or stand-alone displays, leading to distraction of the surgeons and suboptimal outcomes. To overcome these limitations, we designed a projective fluorescence imaging system for surgical navigation. The system consists of an LED excitation light source, a monochromatic CCD camera, a host computer, a mini projector and a CMOS camera. A software program written in C++ calls OpenCV functions to calibrate and correct the fluorescence images captured by the CCD camera upon excitation illumination by the LED source. The images are projected back onto the surgical field by the mini projector. Imaging performance of this projective navigation system is characterized in a tumor-simulating phantom. Image-guided surgical resection is demonstrated in an ex-vivo chicken tissue model. In all the experiments, the images projected by the projector match well with the locations of fluorescence emission. Our experimental results indicate that the proposed projective navigation system can be a powerful tool for pre-operative surgical planning, intraoperative surgical guidance, and postoperative assessment of surgical outcome. We have integrated the optoelectronic elements into a compact and miniaturized system in preparation for further clinical validation.
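Mapping camera pixels into projector coordinates so the projected image lands on the right spot is typically modeled with a planar homography (the kind of 3×3 transform OpenCV's `cv2.findHomography` estimates during calibration). A minimal sketch of applying such a transform, with a made-up calibration matrix:

```python
def apply_homography(H, x, y):
    """Map a camera pixel (x, y) into projector coordinates with a 3x3
    homography (homogeneous coordinates, then perspective divide)."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Hypothetical calibration: scale by 2 and translate by (10, 5).
H = [[2, 0, 10],
     [0, 2, 5],
     [0, 0, 1]]
print(apply_homography(H, 3, 4))  # -> (16.0, 13.0)
```

In practice the matrix is estimated once from corresponding camera/projector points on a calibration target, then applied to every pixel of the fluorescence image before projection.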

  6. Mammography with and without radiolucent positioning sheets : Comparison of projected breast area, pain experience, radiation dose and technical image quality

    NARCIS (Netherlands)

    Timmers, Janine; ten Voorde, Marloes; van Engen, Ruben E.; van Landsveld-Verhoeven, Cary; Pijnappel, Ruud; Droogh-de Greve, Kitty; den Heeten, Gerard J.; Broeders, Mireille J. M.

    2015-01-01

    Purpose: To compare projected breast area, image quality, pain experience and radiation dose between mammography performed with and without radiolucent positioning sheets. Methods: 184 women screened in the Dutch breast screening programme (May-June 2012) provided written informed consent to have

  7. Increasing Cone-beam projection usage by temporal fitting

    DEFF Research Database (Denmark)

    Lyksborg, Mark; Hansen, Mads Fogtmann; Larsen, Rasmus

    2010-01-01

A Cone-beam CT system can be used to image the lung region. The system records 2D projections which allow 3D reconstruction; however, a reconstruction based on all projections will be blurred in regions where respiratory motion occurs. To avoid this the projections are typi...... in [6] prior knowledge of the lung deformation estimated from the planning CT could be used to include all projections into the reconstruction. It has also been attempted to estimate both the motion and the 3D volume simultaneously in [4]. Motion estimation is ill-posed, leading to suboptimal...... motion which in turn affects the reconstruction. By directly including time in the image representation the effect of suboptimal motion fields is avoided and we are still capable of using phase-neighbour projections. The 4D image model is fitted by solving a statistical cost function based...

  8. Localizing intracavitary brachytherapy applicators from cone-beam CT x-ray projections via a novel iterative forward projection matching algorithm

    International Nuclear Information System (INIS)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F.

    2011-01-01

Purpose: To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. Methods: The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and experimentally acquired autosegmented projections of the object. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred, binary, applicator-only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid on the measured images, and the nearest-neighbor applicator positional difference was calculated empirically for each image pair. Results: In the numerical simulations, the tandem and colpostats positions (x,y,z) and orientations (α,β,γ) were estimated with accuracies of 0.6 mm and 2 deg., respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor. Conclusions: This work describes a novel, accurate, fast, and completely
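The SSQD objective that gIFPM minimizes is just the pixelwise sum of squared intensity differences between the computed and measured projections. A toy sketch with 2×2 "projections" (illustrative only; the actual algorithm embeds this inside a six-parameter pose search):

```python
def ssqd(computed, measured):
    """Sum of squared intensity differences between a computed
    projection and the measured projection, the objective gIFPM
    drives toward convergence while refining the pose."""
    return sum(
        (c - m) ** 2
        for row_c, row_m in zip(computed, measured)
        for c, m in zip(row_c, row_m)
    )

a = [[0, 1], [1, 0]]
b = [[0, 0], [1, 1]]
print(ssqd(a, b))  # -> 2
```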

  9. Localizing intracavitary brachytherapy applicators from cone-beam CT x-ray projections via a novel iterative forward projection matching algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2011-02-15

Purpose: To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. Methods: The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and experimentally acquired autosegmented projections of the object. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred, binary, applicator-only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid on the measured images, and the nearest-neighbor applicator positional difference was calculated empirically for each image pair. Results: In the numerical simulations, the tandem and colpostats positions (x,y,z) and orientations (α, β, γ) were estimated with accuracies of 0.6 mm and 2 deg., respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor. Conclusions: This work describes a novel, accurate

  10. Localizing intracavitary brachytherapy applicators from cone-beam CT x-ray projections via a novel iterative forward projection matching algorithm.

    Science.gov (United States)

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2011-02-01

To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and experimentally acquired autosegmented projections of the object. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred, binary, applicator-only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid on the measured images, and the nearest-neighbor applicator positional difference was calculated empirically for each image pair. In the numerical simulations, the tandem and colpostats positions (x,y,z) and orientations (alpha, beta, gamma) were estimated with accuracies of 0.6 mm and 2 degrees, respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor. This work describes a novel, accurate, fast, and completely automatic method to

  11. Development of Simultaneous Beta-and-Coincidence-Gamma Imager for Plant Imaging Research

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Yuan-Chuan [Washington Univ., St. Louis, MO (United States). School of Medicine

    2016-09-30

The goal of this project is to develop a novel imaging system that can simultaneously acquire beta and coincidence gamma images of positron sources in thin objects such as leaves of plants. This hybrid imager can be used to measure carbon assimilation in plants quantitatively and in real-time after C-11-labeled carbon dioxide is administered. A better understanding of carbon assimilation, particularly under the increasingly elevated atmospheric CO2 level, is extremely critical for plant scientists who study food crop and biofuel production. Phase 1 of this project is focused on technology development with 3 specific aims: (1) develop a hybrid detector that can detect beta and gamma rays simultaneously; (2) develop an imaging system that can differentiate these two types of radiation and acquire beta and coincidence gamma images in real-time; (3) develop techniques to quantify radiotracer distribution using beta and gamma images. Phase 2 of this project is to apply the technologies developed in phase 1, using positron-emitting radionuclides such as C-11, to study carbon assimilation in biofuel plants.

  12. Seismic calibration shots conducted in 2009 in the Imperial Valley, southern California, for the Salton Seismic Imaging Project (SSIP)

    Science.gov (United States)

    Murphy, Janice; Goldman, Mark; Fuis, Gary; Rymer, Michael; Sickler, Robert; Miller, Summer; Butcher, Lesley; Ricketts, Jason; Criley, Coyn; Stock, Joann; Hole, John; Chavez, Greg

    2011-01-01

    Rupture of the southern section of the San Andreas Fault, from the Coachella Valley to the Mojave Desert, is believed to be the greatest natural hazard facing California in the near future. With an estimated magnitude between 7.2 and 8.1, such an event would result in violent shaking, loss of life, and disruption of lifelines (freeways, aqueducts, power, petroleum, and communication lines) that would bring much of southern California to a standstill. As part of the Nation's efforts to prevent a catastrophe of this magnitude, a number of projects are underway to increase our knowledge of Earth processes in the area and to mitigate the effects of such an event. One such project is the Salton Seismic Imaging Project (SSIP), which is a collaborative venture between the United States Geological Survey (USGS), California Institute of Technology (Caltech), and Virginia Polytechnic Institute and State University (Virginia Tech). This project will generate and record seismic waves that travel through the crust and upper mantle of the Salton Trough. With these data, we will construct seismic images of the subsurface, both reflection and tomographic images. These images will contribute to the earthquake-hazard assessment in southern California by helping to constrain fault locations, sedimentary basin thickness and geometry, and sedimentary seismic velocity distributions. Data acquisition is currently scheduled for winter and spring of 2011. The design and goals of SSIP resemble those of the Los Angeles Region Seismic Experiment (LARSE) of the 1990's. LARSE focused on examining the San Andreas Fault system and associated thrust-fault systems of the Transverse Ranges. LARSE was successful in constraining the geometry of the San Andreas Fault at depth and in relating this geometry to mid-crustal, flower-structure-like decollements in the Transverse Ranges that splay upward into the network of hazardous thrust faults that caused the 1971 M 6.7 San Fernando and 1987 M 5

  13. Radiography of the knee joint: A comparative study of the standing partial flexion PA projection and the standing fully extended AP projection using visual grading characteristics (VGC)

    International Nuclear Information System (INIS)

    Farrugia Wismayer, E.; Zarb, F.

    2016-01-01

Objectives: To compare the diagnostic information in detection and assessment of knee pathology from knee radiographs using either the standing partial flexion PA projection or the standing fully extended AP projection. Method: A set of 32 knee radiographs was retrospectively compiled from 16 adult patients imaged using both projections over a 2-year period (PA: n = 16 and AP: n = 16). Repeat radiographs (n = 6) were added to the image set, facilitating inter- and intra-observer reliability assessment. Image evaluation was performed by 5 orthopaedic surgeons, who performed absolute visual grading analysis, assessing image quality against 6 anatomical image quality criteria specifically developed to evaluate and compare the two projections. The resulting image quality scores were analysed using Visual Grading Characteristics. Results: Image quality scores were higher for the PA projection but variation between the two projections was not significant (p > 0.05). The PA projection was significantly (p < 0.05) better in the visualization of 2 anatomical image quality criteria involving the joint space width and tibial spines. Conclusion: Both projections can be used for general evaluation of the knee joint; however, the PA partial flexion projection is preferred for the investigation of specific knee pathology. Recommendations for minimizing variations in radiographic positioning technique are also highlighted. - Highlights: • AP/PA radiographic knee projections are comparable for most clinical indications. • PA knee projection is better in visualizing joint space/tibial spines. • PA projection is more standardized if used with a positioning frame. • Use of anatomical criteria facilitates evaluation of quality of knee radiographs.
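The Visual Grading Characteristics analysis used above summarizes two sets of ordinal quality scores with an ROC-like area under the VGC curve, which for rating data reduces to the Mann-Whitney probability that a score from one projection exceeds a score from the other (ties counting one half). An AUC of 0.5 means no image-quality difference. A minimal sketch with hypothetical scores (not the study's data):

```python
def vgc_auc(scores_a, scores_b):
    """Area under the VGC curve via the Mann-Whitney statistic:
    P(score_a > score_b) with ties counted as 0.5."""
    wins = 0.0
    for a in scores_a:
        for b in scores_b:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(scores_a) * len(scores_b))

pa = [4, 4, 5, 3]   # hypothetical ordinal quality scores, PA projection
ap = [3, 4, 2, 3]   # hypothetical scores, AP projection
print(vgc_auc(pa, ap))  # -> 0.8125
```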

  14. TME2/342: The Role of the EXtensible Markup Language (XML) for Future Healthcare Application Development

    Science.gov (United States)

    Noelle, G; Dudeck, J

    1999-01-01

Two years after the World Wide Web Consortium (W3C) published the first specification of the eXtensible Markup Language (XML), concrete tools and applications exist for working with XML-based data. In particular, new-generation Web browsers offer great opportunities to develop new kinds of medical, web-based applications. Several data-exchange formats have been established in medicine in recent years: HL7, DICOM, EDIFACT and, in the case of Germany, xDT. Whereas communication and information exchange become increasingly important, the development of the appropriate and necessary interfaces causes problems, rising costs, and effort. It has also been recognised that it is difficult to define a standardised interchange format for one of the major future developments in medical telematics: the electronic patient record (EPR) and its availability on the Internet. Whereas XML, especially in an industrial environment, is celebrated as a generic standard and a solution for all problems concerning e-commerce, in a medical context only a few applications have been developed. Nevertheless, the medical environment is an appropriate area for building XML applications: as information and communication management becomes increasingly important in medical businesses, the role of the Internet changes quickly from an information medium to a communication medium. The first XML-based applications in healthcare show us the advantage of a future engagement of the healthcare industry in XML: such applications are open, easy to extend and cost-effective. Additionally, XML is much more than a simple new data interchange format: many proposals for data query (XQL), data presentation (XSL) and other extensions have been proposed to the W3C and partly realised in medical applications.
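The appeal of XML for healthcare data exchange is easy to demonstrate: any standard parser can read a self-describing record. A minimal sketch using Python's standard library, with a hypothetical (much-simplified) patient-record fragment; real interchange formats such as HL7 CDA are far richer:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical electronic-patient-record fragment.
doc = """
<patient id="p42">
  <name>Doe, Jane</name>
  <observation code="8480-6" unit="mmHg">120</observation>
</patient>
"""

root = ET.fromstring(doc)
obs = root.find("observation")
print(root.get("id"), obs.get("code"), obs.text)  # -> p42 8480-6 120
```

Because the structure travels with the data, a receiving system needs no proprietary interface description to extract the fields it understands, which is exactly the interoperability argument the abstract makes.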

  15. Physics meets fine arts: a project-based learning path on infrared imaging

    Science.gov (United States)

    Bonanno, A.; Bozzo, G.; Sapia, P.

    2018-03-01

Infrared imaging represents a noninvasive tool for cultural heritage diagnostics, based on the capability of IR radiation to penetrate the most external layers of different objects (for example, paintings), revealing hidden features of artworks. From an educational viewpoint, this diagnostic technique offers teachers the opportunity to address manifold topics pertaining to the physics and technology of electromagnetic radiation, with particular emphasis on the nature of color and its physical correlates. Moreover, the topic provides interesting interdisciplinary bridges towards the human sciences. In this framework, we present a hands-on learning sequence, suitable for both high school students and university freshmen, inspired by the project-based learning (PBL) paradigm, designed and implemented in the context of an Italian national project aimed at offering students the opportunity to participate in educational activities within a real working context. In a preliminary test we involved a group of 23 high school students while they were working as apprentices in the Laboratory of Applied Physics for Cultural Heritage (ArcheoLab) at the University of Calabria. Consistently with the PBL paradigm, students were given well-defined practical goals to achieve. As final goals they were asked (i) to construct and test a low-cost device (based on a disused commercial camera) appropriate for performing educational-grade IR investigations on paintings, and (ii) to prepare a device working as a simple spectrometer (recycling the optical components of a disused video projector), suitable for characterizing various light sources in order to identify the most appropriate for infrared imaging. The proposed learning path has been shown (in the preliminary test) to be effective in fostering students’ interest towards physics and its technological applications, especially because pupils perceived the context (i.e. physics applied to the protection and restoration of cultural

  16. Image alignment for tomography reconstruction from synchrotron X-ray microscopic images.

    Directory of Open Access Journals (Sweden)

    Chang-Chieh Cheng

    Full Text Available A synchrotron X-ray microscope is a powerful imaging apparatus for taking high-resolution and high-contrast X-ray images of nanoscale objects. A sufficient number of X-ray projection images from different angles is required for constructing 3D volume images of an object. Because a synchrotron light source is immobile, a rotational object holder is required for tomography. At a resolution of 10 nm per pixel, the vibration of the holder caused by rotating the object cannot be disregarded if tomographic images are to be reconstructed accurately. This paper presents a computer method to compensate for the vibration of the rotational holder by aligning neighboring X-ray images. This alignment process involves two steps. The first step is to match the "projected feature points" in the sequence of images. The matched projected feature points in the x-θ plane should form a set of sine-shaped loci. The second step is to fit the loci to a set of sine waves to compute the parameters required for alignment. The experimental results show that the proposed method outperforms two previously proposed methods, Xradia and SPIDER. The developed software system can be downloaded from the URL, http://www.cs.nctu.edu.tw/~chengchc/SCTA or http://goo.gl/s4AMx.
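
    The sine-fitting step described above can be sketched in a few lines. The following is a hypothetical illustration (not the authors' released SCTA code): the x-coordinates of one matched feature point, traced over the rotation angles, are fit to a sine of the angle by linear least squares, and the residuals estimate the per-projection shifts caused by holder vibration.

```python
import numpy as np

def fit_sine_locus(theta, x):
    """Fit x = a*sin(theta) + b*cos(theta) + c; return fit and residuals."""
    A = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    fitted = A @ coef
    return fitted, x - fitted  # residuals ~ per-projection shifts

# Synthetic locus: an ideal sine plus simulated holder jitter.
rng = np.random.default_rng(0)
theta = np.linspace(0, np.pi, 180)
jitter = rng.normal(0, 0.5, theta.size)
x = 40 * np.sin(theta + 0.3) + 100 + jitter

fitted, residuals = fit_sine_locus(theta, x)
# Shifting each projection by its residual aligns the locus to the sine.
aligned = x - residuals
```

    Subtracting the residual from each projection's feature coordinate restores the smooth sine-shaped locus that an ideal, vibration-free rotation would produce.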

  17. Image alignment for tomography reconstruction from synchrotron X-ray microscopic images.

    Science.gov (United States)

    Cheng, Chang-Chieh; Chien, Chia-Chi; Chen, Hsiang-Hsin; Hwu, Yeukuang; Ching, Yu-Tai

    2014-01-01

    A synchrotron X-ray microscope is a powerful imaging apparatus for taking high-resolution and high-contrast X-ray images of nanoscale objects. A sufficient number of X-ray projection images from different angles is required for constructing 3D volume images of an object. Because a synchrotron light source is immobile, a rotational object holder is required for tomography. At a resolution of 10 nm per pixel, the vibration of the holder caused by rotating the object cannot be disregarded if tomographic images are to be reconstructed accurately. This paper presents a computer method to compensate for the vibration of the rotational holder by aligning neighboring X-ray images. This alignment process involves two steps. The first step is to match the "projected feature points" in the sequence of images. The matched projected feature points in the x-θ plane should form a set of sine-shaped loci. The second step is to fit the loci to a set of sine waves to compute the parameters required for alignment. The experimental results show that the proposed method outperforms two previously proposed methods, Xradia and SPIDER. The developed software system can be downloaded from the URL, http://www.cs.nctu.edu.tw/~chengchc/SCTA or http://goo.gl/s4AMx.

  18. Prevalence of incidental findings on magnetic resonance imaging: Cuban project to map the human brain

    International Nuclear Information System (INIS)

    Hernandez Gonzalez, Gertrudis de los Angeles; Alvarez Sanchez, Marilet; Jordan Gonzalez, Jose

    2010-01-01

    To determine the prevalence of incidental findings in healthy subjects of the Cuban Human Brain Mapping Project sample, a retrospective descriptive study was performed on the magnetic resonance imaging (MRI) scans obtained between 2006 and 2007 from the 394 healthy subjects that make up the project sample, with an age range of 18 to 68 years (mean 33.12), of whom 269 (68.27%) were men and 125 (31.73%) were women. It was shown that 40.36% had one or more anomalies on MRI. In total, there were 188 incidental findings, 23.6% of which were brain findings and 24.11% non-brain findings; among the latter were sinusopathy (20.81%) and maxillary polyps (3.30%). The most prevalent brain findings were intrasellar arachnoidocele (11.93%), followed by prominence of the pituitary gland (5.84%), ventricular asymmetry (1.77%) and bone defects (1.02%). Other brain abnormalities found with very low prevalence had no pathological significance, except for two cases of brain tumor, which were immediately referred to a specialist. Incidental findings on MRI are common in the general population (40.36%), sinusopathy and intrasellar arachnoidocele being the most common. Asymptomatic individuals who have any type of structural abnormality provide invaluable information on the prevalence of these abnormalities in a presumably healthy population, which may be used as a reference for epidemiological studies.

  19. Macro optical projection tomography for large scale 3D imaging of plant structures and gene activity.

    Science.gov (United States)

    Lee, Karen J I; Calder, Grant M; Hindle, Christopher R; Newman, Jacob L; Robinson, Simon N; Avondo, Jerome J H Y; Coen, Enrico S

    2017-01-01

    Optical projection tomography (OPT) is a well-established method for visualising gene activity in plants and animals. However, a limitation of conventional OPT is that the specimen upper size limit precludes its application to larger structures. To address this problem we constructed a macro version called Macro OPT (M-OPT). We apply M-OPT to 3D live imaging of gene activity in growing whole plants and to visualise structural morphology in large optically cleared plant and insect specimens up to 60 mm tall and 45 mm deep. We also show how M-OPT can be used to image gene expression domains in 3D within fixed tissue and to visualise gene activity in 3D in clones of growing young whole Arabidopsis plants. A further application of M-OPT is to visualise plant-insect interactions. Thus M-OPT provides an effective 3D imaging platform that allows the study of gene activity, internal plant structures and plant-insect interactions at a macroscopic scale. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  20. The duality of XML Markup and Programming notation

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2003-01-01

    In web projects it is often necessary to mix XML notation and program notation in a single document or program. In mono-lingual situations, the XML notation is either subsumed in the program or the program notation is subsumed in the XML document. As an introduction we analyze XML notation and pr...

  1. Pelvic digital subtraction catheter angiography-Are routine oblique projections necessary?

    International Nuclear Information System (INIS)

    Rane, Neil; Imam, Atique; Foley, Peter; Timmons, Grace; Uberoi, Raman

    2011-01-01

    The oblique projection is used widely in imaging of the lower vascular tree. Much of the evidence justifying the oblique projection is anecdotal. This study compares the sensitivity of the anteroposterior (AP) projection alone in lower limb vascular catheter angiography to that combined with the oblique projection. 110 digitally subtracted angiograms were analysed initially on AP and subsequently on oblique views. Oblique imaging increases confidence, demonstrates stenoses not seen on AP and changes the diagnosis. This supports the use of the oblique projection in lower limb vascular interventional imaging.

  2. Research on reconstruction of steel tube section from few projections

    International Nuclear Information System (INIS)

    Peng Shuaijun; Wu Haifeng; Wang Kai

    2007-01-01

    Most parameters of a steel tube can be derived from a CT image of its section, allowing its quality to be evaluated. However, a large number of projections is normally needed to reconstruct the section image, so projection acquisition and computation are time consuming. To address this problem, this paper investigates reconstruction algorithms for steel tube sections from few projections and validates the results with simulation data. Three iterative algorithms, ART, MAP and OSEM, were applied to reconstruct the tube section using a simulation model. By incorporating prior information on the material distribution of the steel tube, we improved the algorithms and obtained better reconstructed images. The simulation results indicate that ART, MAP and OSEM can reconstruct accurate section images of steel tube from fewer than 20 projections and approximate images from 10 projections. (authors)
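
    The ART algorithm named above can be sketched as a Kaczmarz row-action update. The toy example below is our illustration, not the authors' code: it reconstructs a 2 x 2 "image" from four ray sums (its row and column sums), the rows of the system matrix playing the role of rays through the flattened section.

```python
import numpy as np

def art(A, b, n_iters=200, relax=0.5):
    """Relaxed Kaczmarz/ART iteration for the linear system A x = b."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for i in range(A.shape[0]):  # sweep one ray equation at a time
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Toy 2x2 "image" observed through 4 ray sums (rows, then columns).
truth = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1.0, 1, 0, 0],   # row 0 sum
              [0.0, 0, 1, 1],   # row 1 sum
              [1.0, 0, 1, 0],   # column 0 sum
              [0.0, 1, 0, 1]])  # column 1 sum
b = A @ truth
x = art(A, b)
```

    For this consistent toy system the iteration converges to the true pixel values; in practice ART is attractive for few-projection problems precisely because it produces usable images from heavily underdetermined data.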

  3. Tissue Harmonic Synthetic Aperture Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Joachim

    The main purpose of this PhD project is to develop an ultrasonic method for tissue harmonic synthetic aperture imaging. The motivation is to advance the field of synthetic aperture imaging in ultrasound, which has shown great potentials in the clinic. Suggestions for synthetic aperture tissue...... system complexity compared to conventional synthetic aperture techniques. In this project, SASB is sought combined with a pulse inversion technique for 2nd harmonic tissue harmonic imaging. The advantages in tissue harmonic imaging (THI) are expected to further improve the image quality of SASB...

  4. Patient-specific 3D models created by 3D imaging system or bi-planar imaging coupled with Moiré-Fringe projections: a comparative study of accuracy and reliability on spinal curvatures and vertebral rotation data.

    Science.gov (United States)

    Hocquelet, Arnaud; Cornelis, François; Jirot, Anna; Castaings, Laurent; de Sèze, Mathieu; Hauger, Olivier

    2016-10-01

    The aim of this study is to compare the accuracy and reliability of spinal curvature and vertebral rotation data based on patient-specific 3D models created by a 3D imaging system or by bi-planar imaging coupled with Moiré-fringe projections. Sixty-two consecutive patients from a single institution were prospectively included. For each patient, frontal and sagittal calibrated low-dose bi-planar X-rays were performed and coupled simultaneously with an optical Moiré back-surface technology. The 3D reconstructions of the spine and pelvis were performed independently by one radiologist and one technician in radiology using two different semi-automatic methods: a 3D radio-imaging system (method 1) or bi-planar imaging coupled with Moiré projections (method 2). The two methods were compared using Bland-Altman analysis, and reliability was assessed using the intraclass correlation coefficient (ICC). ICC showed good to very good agreement. Between the two techniques, the widest 95% prediction limit was -4.9° for the measurements of spinal coronal curves and less than 5° for the other parameters. Inter-rater reliability was excellent for all parameters across both methods, except for axial rotation with method 2, for which the ICC was fair. Method 1 was faster for reconstruction time than method 2 for both readers (13.4 vs. 20.7 min and 10.6 vs. 13.9 min; p = 0.0001). While lower accuracy was observed for the evaluation of axial rotation, bi-planar imaging coupled with Moiré-fringe projections may be an accurate and reliable tool for performing 3D reconstructions of the spine and pelvis.
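
    The Bland-Altman comparison used here computes a bias (mean difference) and 95% limits of agreement (bias +/- 1.96 standard deviations of the differences) from paired measurements. A minimal sketch, using hypothetical angle readings rather than the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) for paired measurements."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical coronal-curve readings (degrees) from two methods.
method1 = np.array([22.1, 35.4, 18.0, 41.2, 27.5, 30.3])
method2 = np.array([23.0, 34.8, 19.1, 40.5, 28.9, 29.7])
bias, low, high = bland_altman(method1, method2)
```

    The methods are judged interchangeable when the limits of agreement are narrower than the clinically acceptable difference, which is how a bound such as -4.9° is interpreted above.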

  5. The ImageJ ecosystem: An open platform for biomedical image analysis.

    Science.gov (United States)

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available (from commercial to academic, special-purpose to Swiss army knife, small to large), but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. © 2015 Wiley Periodicals, Inc.

  6. Improved JEM-X imaging

    DEFF Research Database (Denmark)

    Lund, Niels; Westergaard, Niels Jørgen Stenfeldt; Chenevez, Jérôme

    2010-01-01

    A new imaging method has been developed for JEM-X. The flux from each sky pixel is obtained from a fit to the observed shadowgram rather than from a back projected image. The fitting method is more direct than the standard back projection method used in the public OSA software and allows better...

  7. Complexity, Contract Design and Incentive Design in the Construction Management Industry

    OpenAIRE

    Beg, Zeshawn Afsari

    2015-01-01

    In this paper I examine how one construction management company uses contract design and incentive design to respond to aspects of task complexity and relationship complexity present in its construction projects. In terms of contract design, I find that the company is unable to increase its use of cost-plus pricing when faced with technically complex projects. Instead, the company uses increased pre-execution design modification and price markups when technically complex projects are contract...

  8. A simple method for 3D lesion reconstruction from two projected angiographic images: implementation to a stereotactic radiotherapy treatment planning system

    International Nuclear Information System (INIS)

    Theodorou, K.; Kappas, C.; Gaboriaud, G.; Mazal, A.D.; Petrascu, O.; Rosenwald, J.C.

    1997-01-01

    Introduction: Angiography is the imaging modality most used for the diagnosis and localisation of arteriovenous malformations (AVMs) treated with stereotactic radiotherapy. Because angiographic images are projections, the lesion must be reconstructed in 3D; this, together with the 3D head anatomy from CT images, provides all the information necessary for stereotactic treatment planning. We have developed a method to combine the complementary information provided by angiography and 2D computerized tomography, matching the reconstructed AVM structure with the reconstructed head of the patient. Materials and methods: The ISIS treatment planning system, developed at Institut Curie, was used for image acquisition, stereotactic localisation and 3D visualisation. A series of CT slices is introduced into the system, as well as two orthogonal angiographic projection images of the lesion. A simple computer program was developed for the 3D reconstruction of the lesion and for the superposition of the target contour on the CT slices of the head. Results and conclusions: In our approach, the reconstruction can be made if the AVM is approximated by a number of adjacent ellipses. We assessed the method by comparing the reconstructed and actual target volumes using linear regression analysis. For treatment planning purposes we overlapped the reconstructed AVM on the CT slices of the head. To our knowledge, this is a feature that the majority of commercial stereotactic radiotherapy treatment planning systems do not provide. The implementation of the method into the ISIS TPS shows that we can reliably approximate and visualize the target volume.
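
    The adjacent-ellipse idea can be illustrated with a short sketch (ours, not the ISIS implementation): on each axial level, the lesion widths measured on the two orthogonal projections give an ellipse's semi-axes, and summing the ellipse areas over the levels yields an estimate of the lesion volume.

```python
import numpy as np

def volume_from_orthogonal_widths(widths_ap, widths_lat, slice_thickness):
    """Estimate lesion volume from per-level widths on two orthogonal views."""
    a = np.asarray(widths_ap) / 2.0   # semi-axis seen on the AP projection
    b = np.asarray(widths_lat) / 2.0  # semi-axis seen on the lateral view
    areas = np.pi * a * b             # ellipse area at each level
    return areas.sum() * slice_thickness

# Sanity check: equal widths on both views reproduce a stack of circles.
widths = [10.0, 12.0, 14.0, 12.0, 10.0]   # mm, hypothetical lesion
vol = volume_from_orthogonal_widths(widths, widths, slice_thickness=1.0)
```

    Each reconstructed ellipse can then be drawn as the target contour on the CT slice at the corresponding stereotactic height.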

  9. Acousto-optic laser projection systems for displaying TV information

    International Nuclear Information System (INIS)

    Gulyaev, Yu V; Kazaryan, M A; Mokrushin, Yu M; Shakin, O V

    2015-01-01

    This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation. (review)

  10. Acousto-optic laser projection systems for displaying TV information

    Science.gov (United States)

    Gulyaev, Yu V.; Kazaryan, M. A.; Mokrushin, Yu M.; Shakin, O. V.

    2015-04-01

    This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation.

  11. Acousto-optic laser projection systems for displaying TV information

    Energy Technology Data Exchange (ETDEWEB)

    Gulyaev, Yu V [V.A. Kotel'nikov Institute of Radio Engineering and Electronics, Russian Academy of Sciences, Moscow (Russian Federation)]; Kazaryan, M A [P N Lebedev Physics Institute, Russian Academy of Sciences, Moscow (Russian Federation)]; Mokrushin, Yu M [D.V. Efremov Scientific Research Institute of Electrophysical Apparatus (Russian Federation)]; Shakin, O V [Ioffe Physical Technical Institute, Russian Academy of Sciences, St. Petersburg (Russian Federation)]

    2015-04-30

    This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation. (review)

  12. Comparison of pure and hybrid iterative reconstruction techniques with conventional filtered back projection: Image quality assessment in the cervicothoracic region

    International Nuclear Information System (INIS)

    Katsura, Masaki; Sato, Jiro; Akahane, Masaaki; Matsuda, Izuru; Ishida, Masanori; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni

    2013-01-01

    Objectives: To evaluate the impact on image quality of three different image reconstruction techniques in the cervicothoracic region: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Methods: Forty-four patients underwent unenhanced standard-of-care clinical computed tomography (CT) examinations which included the cervicothoracic region with a 64-row multidetector CT scanner. Images were reconstructed with FBP, 50% ASIR-FBP blending (ASIR50), and MBIR. Two radiologists assessed the cervicothoracic region in a blinded manner for streak artifacts, pixelated blotchy appearances, critical reproduction of visually sharp anatomical structures (thyroid gland, common carotid artery, and esophagus), and overall diagnostic acceptability. Objective image noise was measured in the internal jugular vein. Data were analyzed using the sign test and pair-wise Student's t-test. Results: MBIR images had significantly lower quantitative image noise (8.88 ± 1.32) compared to ASIR images (18.63 ± 4.19, P 0.9 for ASIR vs. FBP for both readers). MBIR images were all diagnostically acceptable. Unique features of MBIR images included pixelated blotchy appearances, which did not adversely affect diagnostic acceptability. Conclusions: MBIR significantly improves image noise and streak artifacts in the cervicothoracic region over ASIR and FBP. MBIR is expected to enhance the value of CT examinations in areas where image noise and streak artifacts are problematic.
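
    FBP, the baseline technique in this comparison, can be sketched in a few lines. The following toy parallel-beam implementation is an illustration only (the scanner's ASIR and MBIR algorithms are proprietary and not reproduced): each projection is ramp-filtered in the Fourier domain and then smeared back across the image grid.

```python
import numpy as np

def fbp(sinogram, thetas_deg):
    """Toy parallel-beam filtered back projection (nearest-neighbour)."""
    n_det = sinogram.shape[1]
    # Ramp filter applied per projection in the Fourier domain.
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1)
                                   * np.abs(freqs), axis=1))
    # Back-project: smear each filtered projection across the grid.
    c = np.arange(n_det) - n_det / 2
    xx, yy = np.meshgrid(c, c)
    image = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(thetas_deg)):
        t = xx * np.cos(theta) + yy * np.sin(theta) + n_det / 2
        idx = np.clip(t.astype(int), 0, n_det - 1)
        image += proj[idx]
    return image * np.pi / len(thetas_deg)

# 60 projections of a point object at the centre of a 64-pixel detector.
sinogram = np.zeros((60, 64))
sinogram[:, 32] = 1.0
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
recon = fbp(sinogram, angles)
```

    Iterative methods such as MBIR replace this single filter-and-smear pass with repeated forward/backward projections under a statistical noise model, which is why they suppress noise and streaks at the cost of far more computation.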

  13. Data ontology and an information system realization for web-based management of image measurements

    Directory of Open Access Journals (Sweden)

    Dimiter Prodanov

    2011-11-01

    Full Text Available Image acquisition, processing and quantification of objects (morphometry) require the integration of data inputs and outputs originating from heterogeneous sources. Managing the data exchange along this workflow in a systematic manner poses several challenges, notably the description of heterogeneous metadata and interoperability between the software tools used. The use of integrated software solutions for morphometry and imaging-data management, in combination with ontologies, can reduce metadata loss and greatly facilitate subsequent data analysis. This paper presents an integrated information system called LabIS. The system's objectives are (i) to automate the storage, annotation and querying of image measurements and (ii) to provide means for data sharing with third-party applications consuming measurement data via open standard communication protocols. LabIS implements a 3-tier architecture with a relational database back-end and an application-logic middle tier realizing a web-based user interface for reporting and annotation and a web-service communication layer. The image processing and morphometry functionality is backed by interoperability with ImageJ, a public-domain image processing program, via integrated clients. Instrumental for the latter was the construction of a data ontology representing the common measurement data model. LabIS supports user profiling and can store arbitrary types of measurements, regions of interest, calibrations and ImageJ settings. Integration of the stored measurements is facilitated by atlas mapping and ontology-based markup. The system can be used as an experimental workflow management tool allowing for the description and reporting of the experiments performed. LabIS can also be used as a measurement repository that can be transparently accessed by computational environments such as Matlab. Finally, the system can be used as a data sharing tool.
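
    The idea of ontology-based markup of measurements can be illustrated with a small sketch: an ImageJ-style measurement tagged with an ontology term and serialized to XML for web-service exchange. The element names and the ontology identifier below are hypothetical and do not reproduce LabIS's actual schema.

```python
import xml.etree.ElementTree as ET

def measurement_to_xml(roi_id, quantity, value, unit, ontology_term):
    """Serialize one annotated measurement to an XML string (toy schema)."""
    m = ET.Element("measurement", roi=roi_id)
    q = ET.SubElement(m, "quantity", name=quantity, unit=unit,
                      term=ontology_term)  # ontology-based markup
    q.text = repr(value)
    return ET.tostring(m, encoding="unicode")

xml_doc = measurement_to_xml(
    roi_id="roi-42", quantity="area", value=153.7, unit="um^2",
    ontology_term="EXAMPLE:0001")  # placeholder term identifier

parsed = ET.fromstring(xml_doc)
```

    Because the quantity carries a shared ontology term rather than a free-text label, a third-party consumer can interpret the measurement without knowing which tool produced it.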

  14. 77 FR 20835 - National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE...

    Science.gov (United States)

    2012-04-06

    ... Interchange (EDI). This notice also describes test particulars including commencement date, eligibility... Electronic Data Interchange (EDI) as part of the Document Image System (DIS) test. DIS is currently a stand... with supporting information via EDI in an Extensible Markup Language (XML) format, in lieu of...

  15. Performance measurement of several relational database systems capable of storing data in GML (Geography Markup Language) format, usable to underpin geographic information system applications

    Directory of Open Access Journals (Sweden)

    Adi Nugroho

    2009-01-01

    Full Text Available If we want to present spatial data to users through GIS (Geographic Information System) applications, we have two choices for the underlying database: a general RDBMS (Relational Database Management System) storing spatial data in general-purpose types (number, char, varchar, etc.), or storing the spatial data in GML (Geography Markup Language) format, GML being an XML vocabulary for spatial data. If we choose GML, we again have two choices: saving the spatial data in an XML-enabled database (a relational database that can also store XML data) or using a native XML database (NXD), a special database designed for storing XML data. In this paper, we compare the performance of several XML-enabled databases when performing CRUD (Create-Read-Update-Delete) operations on GML data. We also examine the flexibility of XML-enabled databases from a programmer's point of view.
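
    The CRUD benchmark idea can be sketched as follows. SQLite stands in here for the XML-enabled databases evaluated in the paper (which are not named in this abstract), and the GML fragment is a minimal hand-written example; a real benchmark would use each vendor's XML column type and query facilities.

```python
import sqlite3
import time

# Minimal GML point template stored as text in a relational table.
GML_POINT = ('<gml:Point srsName="EPSG:4326">'
             '<gml:coordinates>{x},{y}</gml:coordinates></gml:Point>')

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE features (id INTEGER PRIMARY KEY, gml TEXT)")

t0 = time.perf_counter()
for i in range(1000):                                        # Create
    con.execute("INSERT INTO features (gml) VALUES (?)",
                (GML_POINT.format(x=i, y=-i),))
rows = con.execute("SELECT gml FROM features").fetchall()    # Read
con.execute("UPDATE features SET gml = ? WHERE id = 1",      # Update
            (GML_POINT.format(x=0, y=0),))
con.execute("DELETE FROM features WHERE id = 2")             # Delete
elapsed = time.perf_counter() - t0
```

    Repeating the same loop against each candidate database and comparing the elapsed times is the essence of the measurement; the interesting differences appear once the databases must parse and index the XML rather than treat it as opaque text.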

  16. The Age-ility Project (Phase 1): Structural and functional imaging and electrophysiological data repository.

    Science.gov (United States)

    Karayanidis, Frini; Keuken, Max C; Wong, Aaron; Rennie, Jaime L; de Hollander, Gilles; Cooper, Patrick S; Ross Fulham, W; Lenroot, Rhoshel; Parsons, Mark; Phillips, Natalie; Michie, Patricia T; Forstmann, Birte U

    2016-01-01

    Our understanding of the complex interplay between the structural and functional organisation of brain networks is being advanced by the development of novel multi-modal analysis approaches. The Age-ility Project (Phase 1) data repository offers open access to structural MRI, diffusion MRI, and resting-state fMRI scans, as well as resting-state EEG recorded from the same community participants (n=131, 15-35 y, 66 male). Raw imaging and electrophysiological data as well as essential demographics are made available via the NITRC website. All data have been reviewed for artifacts using a rigorous quality control protocol, and detailed case notes are provided. Copyright © 2015. Published by Elsevier Inc.

  17. Reflections from a Creative Community-Based Participatory Research Project Exploring Health and Body Image with First Nations Girls

    Directory of Open Access Journals (Sweden)

    Jennifer M. Shea PhD

    2013-02-01

    Full Text Available In Canada, Aboriginal peoples often experience a multitude of inequalities when compared with the general population, particularly in relation to health (e.g., increased incidence of diabetes). These inequalities are rooted in a negative history of colonization. Decolonizing methodologies recognize these realities and aim to shift the focus from communities being researched to being collaborative partners in the research process. This article describes a qualitative community-based participatory research project focused on health and body image with First Nations girls in a Tribal Council region in Western Canada. We discuss our project design and the incorporation of creative methods (e.g., photovoice) to foster integration and collaboration as related to decolonizing methodology principles. This article is both descriptive and reflective, as it summarizes our project and discusses lessons learned from the process, integrating evaluations from the participating girls as well as our reflections as researchers.

  18. Image quality and dose in mammography in 17 countries in Africa, Asia and Eastern Europe: Results from IAEA projects

    International Nuclear Information System (INIS)

    Ciraj-Bjelac, Olivera; Avramova-Cholakova, Simona; Beganovic, Adnan; Economides, Sotirios; Faj, Dario; Gershan, Vesna; Grupetta, Edward; Kharita, M.H.; Milakovic, Milomir; Milu, Constantin; Muhogora, Wilbroad E.; Muthuvelu, Pirunthavany; Oola, Samuel; Setayeshi, Saeid

    2012-01-01

    Purpose: The objective is to study mammography practice from an optimisation point of view by assessing the impact of simple and immediately implementable corrective actions on image quality. Materials and methods: This prospective multinational study included 54 mammography units in 17 countries. More than 21,000 mammography images were evaluated using a three-level image quality scoring system. Following the initial assessment, appropriate corrective actions were implemented and image quality was re-assessed in 24 units. Results: The fraction of images that were considered acceptable without any remark in the first phase (before the implementation of corrective actions) was 70% and 75% for the cranio-caudal and medio-lateral oblique projections, respectively. The main causes of poor image quality before corrective actions were related to film processing, damaged or scratched image receptors, film-screen combinations that were not spectrally matched, inappropriate radiographic techniques and lack of training. The average glandular dose to a standard breast was 1.5 mGy (range 0.59-3.2 mGy). After optimisation the frequency of poor-quality images decreased, but the relative contributions of the various causes remained similar. Image quality improvements following appropriate corrective actions were up to 50 percentage points in some facilities. Conclusions: Poor image quality is a major source of unnecessary radiation dose to the breast. An increased awareness of good-quality mammograms is of particular importance for countries that are moving towards the introduction of population-based screening programmes. The study demonstrated how simple and low-cost measures can be a valuable tool in improving image quality in mammography.

  19. MR image-guided portal verification for brain treatment field

    International Nuclear Information System (INIS)

    Yin, F.-F.; Gao, Q.H.; Xie, H.; Nelson, D.F.; Yu, Y.; Kwok, W.E.; Totterman, S.; Schell, M.C.; Rubin, P.

    1996-01-01

    Purpose/Objective: Although MR images have been extensively used for the treatment planning of radiation therapy of cancers, especially for brain cancers, they are not effectively used for the portal verification due to lack of bone/air information in MR images and geometric distortions. Typically, MR images are utilized through correlation with CT images, and this procedure is usually very labor and time consuming. For many brain cancer patients to be treated using conventional external beam radiation, MR images with proper distortion correction provide sufficient information for treatment planning and dose calculation, and a projection images may be generated for each specific treatment port and to be used as a reference image for treatment verification. The question is how to transfer anatomical features in MR images to the projection image as landmarks which could be correlated automatically to those in the portal image. The goal of this study is to generate digitally reconstructed projection images from MR brain images with some important anatomical features (brain contour, skull and gross tumor) as well as their relative locations to be used as references for the development of computerized portal verification scheme. Materials/Methods: Compared to conventional digital reconstructed radiograph from CT images, generation of digitally reconstructed projection images from MR images is heavily involved with pixel manipulation of MR images to correlate information from two types of images (MR, portal x-ray images) which are produced based on totally different imaging principles. Initially a wavelet based multi-resolution adaptive thresholding method is used to segment the skull slice-by-slice in MR brain axial images, and identified skull pixels are re-assigned to relatively higher intensities so that projection images will have comparable grey-level information as that in typical brain portal images. 
Both T1- and T2-weighted images are utilized to eliminate fat
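The projection step described above (re-assign segmented skull voxels to higher intensities, then compute ray sums along the treatment port axis) can be sketched in a few lines. This is a minimal illustration assuming a simple binary skull mask and a fixed intensity boost, not the paper's wavelet-based multi-resolution segmentation:

```python
import numpy as np

def mr_projection(volume, skull_mask, skull_value=None, axis=1):
    """Generate a projection image from an MR volume after re-assigning
    segmented skull voxels to a higher intensity, so the projection has
    grey-level characteristics closer to a portal radiograph."""
    vol = volume.astype(float).copy()
    if skull_value is None:
        skull_value = 2.0 * vol.max()          # assumed boost factor
    vol[skull_mask] = skull_value              # re-assign skull voxels
    return vol.sum(axis=axis)                  # ray sums along the port axis

# toy volume: a soft-tissue sphere surrounded by a thin "skull" shell
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
r = np.sqrt(z**2 + y**2 + x**2)
volume = np.where(r < 14, 1.0, 0.0)
skull = (r >= 12) & (r < 14)
proj = mr_projection(volume, skull)
```

Because the skull voxels are boosted before summation, rays passing through the shell produce higher projection values than they would in a plain ray sum, mimicking the bone contrast of a portal image.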

  20. A Photometric Stereo Using Re-Projected Images for Active Stereo Vision System

    Directory of Open Access Journals (Sweden)

    Keonhwa Jung

    2017-10-01

In optical 3D shape measurement, stereo vision with structured light can measure 3D scan data with high accuracy and is used in many applications, but fine surface detail is difficult to obtain. On the other hand, photometric stereo can capture surface details but has disadvantages, in that its 3D data accuracy drops and it requires multiple light sources. When the two measurement methods are combined, more accurate 3D scan data and detailed surface features can be obtained at the same time. In this paper, we present a 3D optical measurement technique that uses re-projection of images to implement photometric stereo without an external light source. 3D scan data are enhanced by combining normal vectors from this photometric stereo method, and the result is evaluated against the ground truth.
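As background, the classic Lambertian photometric stereo computation that the combined method builds on is a per-pixel least-squares solve for albedo-scaled normals from several images under known lights. The flat-surface example and light directions below are illustrative assumptions, not the paper's re-projection setup:

```python
import numpy as np

def photometric_stereo(images, lights):
    """Per-pixel surface normals from >= 3 images under known directional
    lights, assuming a Lambertian surface: I = (L @ n) * albedo."""
    h, w = images[0].shape
    I = np.stack([im.reshape(-1) for im in images])      # (k, h*w) intensities
    L = np.asarray(lights, dtype=float)                  # (k, 3) light directions
    G, *_ = np.linalg.lstsq(L, I, rcond=None)            # (3, h*w): albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    n = G / np.maximum(albedo, 1e-12)                    # unit normals
    return n.T.reshape(h, w, 3), albedo.reshape(h, w)

# synthetic flat surface facing +z, rendered under three lights
lights = [[0, 0, 1], [0.5, 0, 1], [0, 0.5, 1]]
normal_true = np.array([0.0, 0.0, 1.0])
imgs = [np.full((4, 4), np.dot(l, normal_true)) for l in lights]
normals, albedo = photometric_stereo(imgs, lights)
```

With at least three non-coplanar light directions the 3x3 system is invertible and the recovered normal field matches the surface that generated the images.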

  1. Comparison of transaxial source images and 3-plane, thin-slab maximal intensity projection images for the diagnosis of coronary artery stenosis with using ECG-gated cardiac CT

    International Nuclear Information System (INIS)

    Choi, Jin Woo; Seo, Joon Beom; Do, Kyung Hyun

    2006-01-01

We wanted to compare the transaxial source images with the optimized three plane, thin-slab maximum intensity projection (MIP) images from electrocardiographic (ECG)-gated cardiac CT for their ability to detect hemodynamically significant stenosis (HSS), and we did this by means of performing a receiver operating characteristic (ROC) analysis. Twenty-eight patients with a heart rate less than 66 beats per minute and who were undergoing both retrospective ECG-gated cardiac CT and conventional coronary angiography were included in this study. The contrast-enhanced CT scans were obtained with a collimation of 16 x 0.75-mm and a rotation time of 420 msec. The transaxial images were reconstructed at the mid-diastolic phase with a 1-mm slice thickness and a 0.5-mm increment. Using the transaxial images, the slab MIP images were created with a 4-mm thickness and a 2-mm increment, and they covered the entire heart in the horizontal long axis (4 chamber view), in the vertical long axis (2 chamber view) and in the short axis. The transaxial images and MIP images were independently evaluated for their ability to detect HSS. Conventional coronary angiograms of the same study group served as the standard of reference. Four radiologists were requested to rank each image with using a five-point scale (1 = definitely negative, 2 = probably negative, 3 = indeterminate, 4 = probably positive, and 5 = definitely positive) for the presence of HSS; the data were then interpreted using ROC analysis. There was no statistical difference in the area under the ROC curve between transaxial images and MIP images for the detection of HSS (0.8375 and 0.8708, respectively; p > 0.05). The mean reading times for the transaxial source images and the MIP images were 116 and 126.5 minutes, respectively. The diagnostic performance of the MIP images for detecting HSS of the coronary arteries is acceptable and this technique's ability to detect HSS is comparable to that of the transaxial source images.
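The slab MIP construction used here (4-mm slabs at 2-mm increments built from 1-mm reconstructions) amounts to a sliding-window maximum over the slice stack. A minimal sketch, with slab and step counted in slices rather than millimeters:

```python
import numpy as np

def slab_mip(slices, slab=4, step=2):
    """Sliding thin-slab maximum intensity projection: for 1-mm-spaced
    slices, slab=4 and step=2 mimic 4-mm slabs at 2-mm increments."""
    n = slices.shape[0]
    return np.stack([slices[i:i + slab].max(axis=0)
                     for i in range(0, n - slab + 1, step)])

# toy stack: one bright voxel (e.g. a contrast-filled vessel) in slice 5
vol = np.zeros((10, 8, 8))
vol[5, 3, 3] = 100.0
mips = slab_mip(vol)
```

Each bright structure appears in every slab whose window overlaps its slice, which is what lets thin-slab MIPs trace a vessel across neighboring slabs.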

  2. Neutron Dark-Field Imaging

    Science.gov (United States)

    Mullins, David

    2017-09-01

    Neutron imaging is typically used to image and reconstruct objects that are difficult to image using X-Ray imaging techniques. X-Ray absorption is primarily determined by the electron density of the material. This makes it difficult to image objects within materials that have high densities such as metal. However, the neutron scattering cross section primarily depends on the strong nuclear force, which varies somewhat randomly across the periodic table. In this project, an imaging technique known as dark field imaging using a far-field interferometer has been used to study a sample of granite. With this technique, interferometric phase images are generated. The dispersion of the microstructure of the sample dephases the beam, reducing the visibility. Collecting tomographic projections at different autocorrelation lengths (from 100 nanometers to 1.74 micrometers) essentially creates a 3D small angle scattering pattern, enabling mapping of how the microstructure is distributed throughout the sample.

  3. Imaging systems in nuclear medicine and image evaluation

    International Nuclear Information System (INIS)

    Beck, R.; Charleston, D.; Metz, C.

    1980-01-01

This project deals with imaging systems in nuclear medicine and image evaluation and is presented as four subprojects. The goal of the first subproject is to improve diagnostic image quality by development of a general computer code for optimizing collimator design. The second subproject deals with a secondary emission and fluorescence technique for thyroid scanning, while the third subproject emphasizes the need for more sophisticated image processing systems such as coherent optical spatial filtering systems and digital image processing. The fourth subproject presents a new approach for processing image data by taking into account the energy of each detected gamma-ray photon.

  4. Pre-analytic process control: projecting a quality image.

    Science.gov (United States)

    Serafin, Mark D

    2006-09-26

Within the health-care system, the term "ancillary department" often describes the laboratory. Thus, laboratories may find it difficult to define their image and with it, customer perception of department quality. Regulatory requirements give laboratories that so desire an elegant way to address image and perception issues--a comprehensive pre-analytic system solution. Since large laboratories use such systems--laboratory service manuals--I describe and illustrate the process for the benefit of smaller facilities. There exist resources to help even small laboratories produce a professional service manual--an elegant solution to image and customer perception of quality.

  5. Anisotropic conductivity imaging with MREIT using equipotential projection algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Degirmenci, Evren [Department of Electrical and Electronics Engineering, Mersin University, Mersin (Turkey); Eyueboglu, B Murat [Department of Electrical and Electronics Engineering, Middle East Technical University, 06531, Ankara (Turkey)

    2007-12-21

    Magnetic resonance electrical impedance tomography (MREIT) combines magnetic flux or current density measurements obtained by magnetic resonance imaging (MRI) and surface potential measurements to reconstruct images of true conductivity with high spatial resolution. Most of the biological tissues have anisotropic conductivity; therefore, anisotropy should be taken into account in conductivity image reconstruction. Almost all of the MREIT reconstruction algorithms proposed to date assume isotropic conductivity distribution. In this study, a novel MREIT image reconstruction algorithm is proposed to image anisotropic conductivity. Relative anisotropic conductivity values are reconstructed iteratively, using only current density measurements without any potential measurement. In order to obtain true conductivity values, only either one potential or conductivity measurement is sufficient to determine a scaling factor. The proposed technique is evaluated on simulated data for isotropic and anisotropic conductivity distributions, with and without measurement noise. Simulation results show that the images of both anisotropic and isotropic conductivity distributions can be reconstructed successfully.

  6. Projection-slice theorem based 2D-3D registration

    Science.gov (United States)

    van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.

    2007-03-01

    In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be very profitable to provide the surgeon or interventionalist with a 3D insight of the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem Based method was shown to be very effective and robust, and provides capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
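The Projection-Slice Theorem that the registration method relies on states that the 1D Fourier transform of a projection of an image equals the central slice, through the origin, of the image's 2D Fourier transform. For the axis-aligned case this can be checked numerically in a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))

# projection along rows (integrate over y) -> 1D signal in x
projection = image.sum(axis=0)

# 1D FT of the projection vs. the ky = 0 slice of the 2D FT
ft_projection = np.fft.fft(projection)
central_slice = np.fft.fft2(image)[0, :]
```

The two spectra agree to machine precision, which is the relation the paper exploits to compare a pre-operative 3D data set with interventional projection images in the Fourier domain.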

  7. Collaborative Tracking of Image Features Based on Projective Invariance

    Science.gov (United States)

    Jiang, Jinwei

    -mode sensors for improving the flexibility and robustness of the system. From the experimental results during three field tests for the LASOIS system, we observed that most of the errors in the image processing algorithm are caused by the incorrect feature tracking. This dissertation addresses the feature tracking problem in image sequences acquired from cameras. Despite many alternatives to feature tracking problem, iterative least squares solution solving the optical flow equation has been the most popular approach used by many in the field. This dissertation attempts to leverage the former efforts to enhance feature tracking methods by introducing a view geometric constraint to the tracking problem, which provides collaboration among features. In contrast to alternative geometry based methods, the proposed approach provides an online solution to optical flow estimation in a collaborative fashion by exploiting Horn and Schunck flow estimation regularized by view geometric constraints. Proposed collaborative tracker estimates the motion of a feature based on the geometry of the scene and how the other features are moving. Alternative to this approach, a new closed form solution to tracking that combines the image appearance with the view geometry is also introduced. We particularly use invariants in the projective coordinates and conjecture that the traditional appearance solution can be significantly improved using view geometry. The geometric constraint is introduced by defining a new optical flow equation which exploits the scene geometry from a set drawn from tracked features. At the end of each tracking loop the quality of the tracked features is judged using both appearance similarity and geometric consistency. Our experiments demonstrate robust tracking performance even when the features are occluded or they undergo appearance changes due to projective deformation of the template. 
The proposed collaborative tracking method is also tested in the visual navigation

  8. A dual-view digital tomosynthesis imaging technique for improved chest imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng; Shaw, Chris C., E-mail: cshaw@mdanderson.org [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77054 (United States)

    2015-09-15

Purpose: Digital tomosynthesis (DTS) has been shown to be useful for reducing the overlapping of abnormalities with anatomical structures at various depth levels along the posterior–anterior (PA) direction in chest radiography. However, DTS provides crude three-dimensional (3D) images that have poor resolution in the lateral view and can only be displayed with reasonable quality in the PA view. Furthermore, the spillover of high-contrast objects from off-fulcrum planes generates artifacts that may impede the diagnostic use of the DTS images. In this paper, the authors describe and demonstrate the use of a dual-view DTS technique to improve the accuracy of the reconstructed volume image data for more accurate rendition of the anatomy and slice images with improved resolution and reduced artifacts, thus allowing the 3D image data to be viewed in views other than the PA view. Methods: With the dual-view DTS technique, limited angle scans are performed and projection images are acquired in two orthogonal views: PA and lateral. The dual-view projection data are used together to reconstruct 3D images using the maximum likelihood expectation maximization iterative algorithm. In this study, projection images were simulated or experimentally acquired over 360° using the scanning geometry for cone beam computed tomography (CBCT). While all projections were used to reconstruct CBCT images, selected projections were extracted and used to reconstruct single- and dual-view DTS images for comparison with the CBCT images. For realistic demonstration and comparison, a digital chest phantom derived from clinical CT images was used for the simulation study. An anthropomorphic chest phantom was imaged for the experimental study. The resultant dual-view DTS images were visually compared with the single-view DTS images and CBCT images for the presence of image artifacts and accuracy of CT numbers and anatomy, and quantitatively compared using root-mean-square-deviation (RMSD) values.
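The maximum likelihood expectation maximization (MLEM) update used for the dual-view reconstruction can be sketched on a toy system in which two orthogonal "views" (row sums and column sums) observe a 2 x 2 image; the system matrix and data here are illustrative, not the paper's cone-beam geometry:

```python
import numpy as np

def mlem(A, proj, n_iter=200):
    """MLEM update for a nonnegative linear model proj = A @ x:
    x <- x * (A^T (proj / (A x))) / (A^T 1)."""
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        ratio = proj / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(norm, 1e-12)
    return x

# two orthogonal views of a 2x2 image, flattened as [p00, p01, p10, p11]
A = np.array([[1.0, 1.0, 0.0, 0.0],    # row 0 sum
              [0.0, 0.0, 1.0, 1.0],    # row 1 sum
              [1.0, 0.0, 1.0, 0.0],    # col 0 sum
              [0.0, 1.0, 0.0, 1.0]])   # col 1 sum
x_true = np.array([4.0, 1.0, 2.0, 3.0])
proj = A @ x_true
x_rec = mlem(A, proj)
```

With only two views the system is underdetermined, so MLEM converges to a nonnegative solution that matches the measured projections rather than necessarily recovering `x_true` exactly; adding views (as dual-view DTS does relative to single-view) shrinks that ambiguity.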

  9. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections

    Energy Technology Data Exchange (ETDEWEB)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2010-09-15

Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.
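The per-seed accuracy metric described (the nearest-neighbor distance between measured and forward-projected seed positions in each image pair) can be sketched as follows; the 2D detector coordinates, in mm, are hypothetical:

```python
import numpy as np

def nearest_neighbor_errors(measured, projected):
    """For each measured seed position, the distance to the closest
    forward-projected seed position (both (n, 2) arrays, in mm)."""
    d = np.linalg.norm(measured[:, None, :] - projected[None, :, :], axis=2)
    return d.min(axis=1)

measured = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
projected = np.array([[30.2, 40.1], [10.1, 19.9], [50.0, 60.5]])
errors = nearest_neighbor_errors(measured, projected)
```

Using nearest neighbors rather than a fixed pairing is what lets the evaluation proceed without explicitly matching corresponding seeds across projections.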

  10. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections.

    Science.gov (United States)

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2010-09-01

To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. For the phantom study, seed localization error is (0.58 +/- 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  11. Digital Display Integration Project Project Online 2.0

    International Nuclear Information System (INIS)

    Bardsley, J. N.

    1999-01-01

The electronic display industry is changing in three important ways. First, the dominance of the cathode ray tube (CRT) is being challenged by the development of flat panel displays (FPDs). This will lead to the availability of displays of higher performance, albeit at greater cost. Secondly, the analog interfaces between displays that show data and the computers that generate the data are being replaced by digital connections. Finally, a high-resolution display is becoming the most expensive component in a computer system for homes and small offices. It is therefore desirable that the useful lifetime of the display extend over several years and that the electronics allow the display to be used with many different image sources. Hopefully, the necessity of having three or four large CRTs in one office to accommodate different computer operating systems or communication protocols will soon disappear. Instead, we hope to see a set of flat panels that can be switched to show several independent images from multiple sources or a composite image from a single source. The more rapid rate of technological improvements and the higher cost of flat panel displays raise the incentive for greater planning and guidance in the acquisition and integration of high performance displays into large organizations, such as LLNL. The goal of the Digital Display Integration Project (DDIP) is to provide such support. This will be achieved through collaboration with leading suppliers of displays, communications equipment and image-processing products, and by greater exchange of information within the Laboratory. The project will start in October 1999.
During the first two years (FY2000-1), the primary focus of the program will be upon: introducing displays with high information content (over 5M pixels); facilitating the transition from analog to digital interfaces; enabling data transfer from key computer platforms; incorporating optical communications to remove length restrictions on data

  12. MR tractography; Visualization of structure of nerve fiber system from diffusion weighted images with maximum intensity projection method

    Energy Technology Data Exchange (ETDEWEB)

    Kinosada, Yasutomi; Okuda, Yasuyuki (Mie Univ., Tsu (Japan). School of Medicine); Ono, Mototsugu (and others)

    1993-02-01

    We developed a new noninvasive technique to visualize the anatomical structure of the nerve fiber system in vivo, and named this technique magnetic resonance (MR) tractography and the acquired image an MR tractogram. MR tractography has two steps. One is to obtain diffusion-weighted images sensitized along axes appropriate for depicting the intended nerve fibers with anisotropic water diffusion MR imaging. The other is to extract the anatomical structure of the nerve fiber system from a series of diffusion-weighted images by the maximum intensity projection method. To examine the clinical usefulness of the proposed technique, many contiguous, thin (3 mm) coronal two-dimensional sections of the brain were acquired sequentially in normal volunteers and selected patients with paralyses, on a 1.5 Tesla MR system (Signa, GE) with an ECG-gated Stejskal-Tanner pulse sequence. The structure of the nerve fiber system of normal volunteers was almost the same as the anatomy. The tractograms of patients with paralyses clearly showed the degeneration of nerve fibers and were correlated with clinical symptoms. MR tractography showed great promise for the study of neuroanatomy and neuroradiology. (author).

  13. Estimation of error in maximal intensity projection-based internal target volume of lung tumors: a simulation and comparison study using dynamic magnetic resonance imaging.

    Science.gov (United States)

    Cai, Jing; Read, Paul W; Baisden, Joseph M; Larner, James M; Benedict, Stanley H; Sheng, Ke

    2007-11-01

To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (ε), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (ν). Maximal intensity projection-based ITAs from RedCAM were comparatively smaller than those from dMRI in both phantom studies (ε = -21.64% ± 8.23%) and lung tumor patient studies (ε = -20.31% ± 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (ε = -5.13ν - 6.71, r² = 0.76) with the subjects' respiratory variability. Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.
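The MIP-based internal target area comparison can be sketched with binary tumor masks: the ITA is the area of the union over time frames, and sparse temporal sampling (as in 4D-CT re-sorting) underestimates it for a moving target. The synthetic mask series below is illustrative, not the RedCAM procedure itself:

```python
import numpy as np

def internal_target_area(masks, pixel_area=1.0):
    """ITA from a time series of binary tumor masks: area of the union
    (the MIP of the mask stack), in units of pixel_area."""
    mip = np.any(masks, axis=0)
    return mip.sum() * pixel_area

# a 1-pixel "tumor" sweeping over 5 positions during 10 frames
masks = np.zeros((10, 1, 12), dtype=bool)
for t in range(10):
    masks[t, 0, 2 + (t % 5)] = True

ita_full = internal_target_area(masks)        # densely sampled (dMRI-like)
ita_sub = internal_target_area(masks[::4])    # sparsely sampled (4D-CT-like)
error_pct = 100.0 * (ita_sub - ita_full) / ita_full
```

Sampling only every fourth frame misses two of the five tumor positions, so the sparse ITA is 40% smaller, the same sign of error the study reports for RedCAM versus dMRI.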

  14. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    Science.gov (United States)

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models.
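A whole-image misalignment assessment of this kind can be sketched by comparing a reference-standard transform with an estimated one at every pixel coordinate. Here both transforms are homographies (pure translations for simplicity); they are illustrative stand-ins, not the paper's eye-geometry distortion model:

```python
import numpy as np

def mean_misalignment(H_true, H_est, shape):
    """Mean distance between points mapped by the reference-standard
    homography and by the estimated one, over a whole image grid."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])

    def apply(H):
        q = H @ pts
        return q[:2] / q[2]                    # homogeneous -> Euclidean

    return float(np.linalg.norm(apply(H_true) - apply(H_est), axis=0).mean())

H_true = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, -1.0], [0.0, 0.0, 1.0]])
H_est  = np.array([[1.0, 0.0, 2.5], [0.0, 1.0, -1.0], [0.0, 0.0, 1.0]])
err = mean_misalignment(H_true, H_est, (16, 16))
```

Evaluating over the full grid, rather than only over matched features, is what exposes misalignment in the non-overlapping regions of the montage.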

  15. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    Science.gov (United States)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera is associated with big data processing and is often time-consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of the interrogation window's projections instead of its two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation by up to 28.8 times compared to ZNCC calculated directly, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, namely a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
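The core idea, correlating 1D projections (row and column sums) of the interrogation window instead of its full 2D intensity field, can be sketched as follows. Averaging the x- and y-projection scores is an illustrative choice, not necessarily the authors' exact formulation:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-length 1D signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def projection_zncc(win_a, win_b):
    """Approximate the 2D ZNCC of two interrogation windows by correlating
    their x- and y-projections, reducing O(n^2) work per shift to O(n)."""
    return 0.5 * (zncc(win_a.sum(axis=0), win_b.sum(axis=0)) +
                  zncc(win_a.sum(axis=1), win_b.sum(axis=1)))

rng = np.random.default_rng(1)
w = rng.random((32, 32))
score_same = projection_zncc(w, w)                     # identical windows
score_diff = projection_zncc(w, rng.random((32, 32)))  # unrelated windows
```

An identical window pair still scores 1, while unrelated windows score low, so the projection score can rank candidate shifts at a fraction of the cost of the full 2D correlation.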

  16. An image scanner for real time analysis of spark chamber images

    International Nuclear Information System (INIS)

    Cesaroni, F.; Penso, G.; Locci, A.M.; Spano, M.A.

    1975-01-01

This note describes the semiautomatic scanning system at LNF for the analysis of spark chamber images. From the projection of the images on the scanner table, the trajectory in real space is reconstructed.

  17. TOASTing Your Images With Montage

    Science.gov (United States)

    Berriman, G. Bruce; Good, John

    2017-01-01

The Montage image mosaic engine is a scalable toolkit for creating science-grade mosaics of FITS files, according to the user's specifications of coordinates, projection, sampling, and image rotation. It is written in ANSI-C and runs on all common *nix-based platforms. The code is freely available and is released with a BSD 3-clause license. Version 5 is a major upgrade to Montage, and provides support for creating images that can be consumed by the World Wide Telescope (WWT). Montage treats the TOAST sky tessellation scheme, used by the WWT, as a spherical projection like those in the WCStools library. Thus images in any projection can be converted to the TOAST projection by Montage’s reprojection services. These reprojections can be performed at scale on high-performance platforms and on desktops. WWT consumes PNG or JPEG files, organized according to WWT’s tiling and naming scheme. Montage therefore provides a set of dedicated modules to create the required files from FITS images that contain the TOAST projection. There are two other major features of Version 5. It supports processing of HEALPix files to any projection in the WCStools library. And it can be built as a library that can be called from other languages, primarily Python. Website: http://montage.ipac.caltech.edu. GitHub download page: https://github.com/Caltech-IPAC/Montage. ASCL record: ascl:1010.036. DOI: dx.doi.org/10.5281/zenodo.49418. Montage is funded by the National Science Foundation under Grant Number ACI-1440620.

  18. Comparison of pure and hybrid iterative reconstruction techniques with conventional filtered back projection: Image quality assessment in the cervicothoracic region

    Energy Technology Data Exchange (ETDEWEB)

    Katsura, Masaki, E-mail: mkatsura-tky@umin.ac.jp [Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655 (Japan); Sato, Jiro; Akahane, Masaaki; Matsuda, Izuru; Ishida, Masanori; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni [Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655 (Japan)

    2013-02-15

Objectives: To evaluate the impact on image quality of three different image reconstruction techniques in the cervicothoracic region: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Methods: Forty-four patients underwent unenhanced standard-of-care clinical computed tomography (CT) examinations which included the cervicothoracic region with a 64-row multidetector CT scanner. Images were reconstructed with FBP, 50% ASIR-FBP blending (ASIR50), and MBIR. Two radiologists assessed the cervicothoracic region in a blinded manner for streak artifacts, pixelated blotchy appearances, critical reproduction of visually sharp anatomical structures (thyroid gland, common carotid artery, and esophagus), and overall diagnostic acceptability. Objective image noise was measured in the internal jugular vein. Data were analyzed using the sign test and pair-wise Student's t-test. Results: MBIR images had significantly lower quantitative image noise (8.88 ± 1.32) compared to ASIR images (18.63 ± 4.19, P < 0.01) and FBP images (26.52 ± 5.8, P < 0.01). Significant improvements in streak artifacts of the cervicothoracic region were observed with the use of MBIR (P < 0.001 each for MBIR vs. the other two image data sets for both readers), while no significant difference was observed between ASIR and FBP (P > 0.9 for ASIR vs. FBP for both readers). MBIR images were all diagnostically acceptable. Unique features of MBIR images included pixelated blotchy appearances, which did not adversely affect diagnostic acceptability. Conclusions: MBIR significantly improves image noise and streak artifacts of the cervicothoracic region over ASIR and FBP. MBIR is expected to enhance the value of CT examinations for areas where image noise and streak artifacts are problematic.

  19. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    Science.gov (United States)

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. Its implicit assumption of spatial statistical independence among the intrinsic sources makes sICA difficult to apply to data containing interdependent sources and confounding factors. Such interdependency can arise, for instance, in fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and on real resting-state fMRI data. Both the simulated and the real two-task fMRI experiments demonstrated that sICA combined with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating the activation induced by each task as well as by both tasks.
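    One common way to realize such a linear projection, sketched here under the assumption that the confounding task has a known regressor, is to apply the orthogonal projector I − r r⁺ to the data matrix before running sICA. The toy regressors and data below are invented for illustration; this is not the authors' exact procedure:

```python
import numpy as np

def project_out(X, r):
    """Remove the subspace spanned by regressor r from the (time x voxel)
    data matrix X using the orthogonal projector P = I - r r^+."""
    r = r.reshape(-1, 1)
    P = np.eye(r.shape[0]) - r @ np.linalg.pinv(r)
    return P @ X

# Toy two-task data: voxel time courses mixing two block regressors.
rng = np.random.default_rng(1)
t = np.arange(100)
r1 = (np.sin(2 * np.pi * t / 20) > 0).astype(float)  # task-1 time course
r2 = (np.sin(2 * np.pi * t / 33) > 0).astype(float)  # task-2 time course
X = np.outer(r1, rng.random(50)) + np.outer(r2, rng.random(50))
X1 = project_out(X, r2)  # data with the task-2 subspace projected out
```

After the projection, the residual data are exactly orthogonal to the removed regressor while retaining the other task's contribution.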

  20. Correction for polychromatic aberration in computed tomography images

    International Nuclear Information System (INIS)

    Naparstek, A.

    1979-01-01

    A method and apparatus are described for correcting a computed tomography image for polychromatic aberration caused by the non-linear interaction (i.e. the energy-dependent attenuation characteristics) of different body constituents, such as bone and soft tissue, with a polychromatic X-ray beam. An initial image is conventionally computed from path measurements made as the source and detector assembly scan a body section. In the improvement, each element of the initial computed image representing attenuation is recorded in a store and compared with two thresholds, one representing bone and the other soft tissue. Depending on the element value relative to the thresholds, a proportion of the respective constituent is allocated to that element location, and corresponding bone and soft-tissue projections are determined and stored. An error-projection generator calculates projections of the polychromatic aberration errors in the raw image data from the recalled bone and tissue projections, using a multidimensional polynomial function which approximates the non-linear interaction involved. After filtering, these are supplied to an image reconstruction computer to compute image-element correction values, which are subtracted from the raw image-element values to provide a corrected reconstructed image for display. (author)
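    A heavily simplified sketch of the correction pipeline described above: threshold the initial image into bone and soft-tissue constituents, forward-project each, and evaluate a low-order polynomial error projection. The thresholds, the one-view projector, and the polynomial coefficients are illustrative stand-ins, not values from this apparatus:

```python
import numpy as np

BONE_THR, TISSUE_THR = 300.0, -200.0  # illustrative HU-like thresholds

def split_constituents(slice_hu):
    """Allocate each image element to bone or soft tissue by thresholding."""
    bone = np.where(slice_hu >= BONE_THR, 1.0, 0.0)
    tissue = np.where((slice_hu >= TISSUE_THR) & (slice_hu < BONE_THR), 1.0, 0.0)
    return bone, tissue

def forward_project(mask):
    """Stand-in projector: column sums give one parallel-beam view."""
    return mask.sum(axis=0)

def error_projection(p_bone, p_tissue, c=(1e-4, 5e-5, 1e-5)):
    """Low-order polynomial in the two constituent path lengths,
    approximating the non-linear polychromatic interaction."""
    return c[0] * p_bone**2 + c[1] * p_bone * p_tissue + c[2] * p_tissue**2

phantom = np.full((64, 64), 40.0)   # soft-tissue background
phantom[20:30, 20:30] = 500.0       # bone-like insert
p_b, p_t = (forward_project(m) for m in split_constituents(phantom))
err = error_projection(p_b, p_t)    # error projection to subtract from raw data
```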

  1. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model

    NARCIS (Netherlands)

    Lee, Sangyeol; Reinhardt, Joseph M.; Cattin, Philippe C.; Abramoff, M.D.

    2010-01-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image

  2. CAUSE AND EFFECT IN PROMOTING A PROJECT

    Directory of Open Access Journals (Sweden)

    SEVERIAN-VLĂDUȚ IACOB

    2013-12-01

    Full Text Available For a project to be considered successful, it requires not only proper coordination but also good and wide promotion. From a communications standpoint, promotion builds and maintains the organization's image. Disturbances that occur in any type of project as a result of poor promotion affect the team's image and highlight weaknesses in its management. Therefore, promotion should be permanently monitored and evaluated. Cause-effect analysis is one way to identify some of the nonconformities in the promotion process within a project.

  3. X-ray CT core imaging of Oman Drilling Project on D/V CHIKYU

    Science.gov (United States)

    Michibayashi, K.; Okazaki, K.; Leong, J. A. M.; Kelemen, P. B.; Johnson, K. T. M.; Greenberger, R. N.; Manning, C. E.; Harris, M.; de Obeso, J. C.; Abe, N.; Hatakeyama, K.; Ildefonse, B.; Takazawa, E.; Teagle, D. A. H.; Coggon, J. A.

    2017-12-01

    We obtained X-ray computed tomography (X-ray CT) images for all cores (GT1A, GT2A, GT3A and BT1A) in Oman Drilling Project Phase 1 (OmanDP cores), since X-ray CT scanning is a routine measurement of the IODP measurement plan onboard Chikyu and enables non-destructive observation of the internal structure of core samples. X-ray CT images provide information about the chemical composition and density of the cores and are useful for assessing sample locations and the quality of whole-round samples. The X-ray CT scanner (Discovery CT 750HD, GE Medical Systems) on Chikyu scans and reconstructs the image of a 1.4 m section in 10 minutes and produces a series of scan images, each 0.625 mm thick. The X-ray tube (the X-ray source) and the X-ray detector are installed inside the gantry, opposite each other. The core sample is scanned in the gantry at a scanning rate of 20 mm/s. The distribution of attenuation values mapped to an individual slice comprises the raw data used for subsequent image processing. Successive two-dimensional (2-D) slices of 512 x 512 pixels yield a representation of attenuation values in three-dimensional (3-D) voxels, 512 x 512 by 1600 in length. Data generated for each core consist of core-axis-normal planes (XY planes) of X-ray attenuation values with dimensions of 512 × 512 pixels over a 9 cm × 9 cm cross-section, so at the dimensions of a core section the resolution is 0.176 mm/pixel. X-ray intensity varies as a function of X-ray path length, and the linear attenuation coefficient (LAC) of the target material is a function of its chemical composition and density. The basic measure of attenuation, or radiodensity, is the CT number, given in Hounsfield units (HU). The CT numbers of air and water are -1000 and 0, respectively. Our preliminary results show that CT numbers of OmanDP cores are well correlated with gamma ray attenuation density (GRA density) as a function of chemical
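    The Hounsfield calibration quoted above (air = -1000 HU, water = 0 HU) follows from the standard definition of the CT number in terms of the linear attenuation coefficient; the water LAC value below is an approximate illustrative figure:

```python
def ct_number(mu, mu_water):
    """CT number in Hounsfield units: scaled deviation of the linear
    attenuation coefficient (LAC) from that of water."""
    return 1000.0 * (mu - mu_water) / mu_water

mu_water = 0.19                           # cm^-1, approximate LAC of water
hu_air = ct_number(0.0, mu_water)         # air attenuates ~0, giving -1000 HU
hu_water = ct_number(mu_water, mu_water)  # water gives 0 HU by definition
```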

  4. 3D computed tomography using a microfocus X-ray source: Analysis of artifact formation in the reconstructed images using simulated as well as experimental projection data

    International Nuclear Information System (INIS)

    Krimmel, S.; Stephan, J.; Baumann, J.

    2005-01-01

    The scope of this contribution is to identify and to quantify the influence of different parameters on the formation of image artifacts in X-ray computed tomography (CT), resulting, for example, from beam hardening or from a partial lack of information in 3D cone beam CT. In general, the reconstructed image quality depends on a number of acquisition parameters concerning the X-ray source (e.g. X-ray spectrum), the geometrical setup (e.g. cone beam angle), the sample properties (e.g. absorption characteristics) and the detector properties. While it is difficult to distinguish the influence of different effects clearly in experimental projection data, they can be selected individually with the help of simulated projection data by varying the parameter set. The reconstruction of the 3D data set is performed with the filtered back projection algorithm of Feldkamp, Davis and Kress for experimental as well as for simulated projection data. The experimental data are recorded with an industrial microfocus CT system which features a focal spot size of a few micrometers and uses a digital flat panel detector for data acquisition.

  5. Adaptive statistical iterative reconstruction versus filtered back projection in the same patient: 64 channel liver CT image quality and patient radiation dose

    International Nuclear Information System (INIS)

    Mitsumori, Lee M.; Shuman, William P.; Busey, Janet M.; Kolokythas, Orpheus; Koprowicz, Kent M.

    2012-01-01

    To compare routine dose liver CT reconstructed with filtered back projection (FBP) versus low dose images reconstructed with FBP and adaptive statistical iterative reconstruction (ASIR). In this retrospective study, patients had a routine dose protocol reconstructed with FBP and, again within 17 months (median 6.1 months), a low dose protocol reconstructed twice, with FBP and ASIR. These reconstructions were compared for noise, image quality, and radiation dose. Nineteen patients were included (12 male; mean age 58). Noise was significantly lower in low dose images reconstructed with ASIR compared to routine dose images reconstructed with FBP (liver: p < 0.05, aorta: p < 0.001). Low dose FBP images were scored significantly lower for subjective image quality than low dose ASIR (2.1 ± 0.5 vs. 3.2 ± 0.8, p < 0.001). There was no difference in subjective image quality scores between routine dose FBP images and low dose ASIR images (3.6 ± 0.5 vs. 3.2 ± 0.8, NS). Radiation dose was 41% less for the low dose protocol (4.4 ± 2.4 mSv versus 7.5 ± 5.5 mSv, p < 0.05). Our initial results suggest low dose CT images reconstructed with ASIR may have lower measured noise, similar image quality, yet significantly less radiation dose compared with higher dose images reconstructed with FBP. (orig.)

  6. Adaptive statistical iterative reconstruction versus filtered back projection in the same patient: 64 channel liver CT image quality and patient radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    Mitsumori, Lee M.; Shuman, William P.; Busey, Janet M.; Kolokythas, Orpheus; Koprowicz, Kent M. [University of Washington School of Medicine, Department of Radiology, Seattle, WA (United States)

    2012-01-15

    To compare routine dose liver CT reconstructed with filtered back projection (FBP) versus low dose images reconstructed with FBP and adaptive statistical iterative reconstruction (ASIR). In this retrospective study, patients had a routine dose protocol reconstructed with FBP and, again within 17 months (median 6.1 months), a low dose protocol reconstructed twice, with FBP and ASIR. These reconstructions were compared for noise, image quality, and radiation dose. Nineteen patients were included (12 male; mean age 58). Noise was significantly lower in low dose images reconstructed with ASIR compared to routine dose images reconstructed with FBP (liver: p < 0.05, aorta: p < 0.001). Low dose FBP images were scored significantly lower for subjective image quality than low dose ASIR (2.1 ± 0.5 vs. 3.2 ± 0.8, p < 0.001). There was no difference in subjective image quality scores between routine dose FBP images and low dose ASIR images (3.6 ± 0.5 vs. 3.2 ± 0.8, NS). Radiation dose was 41% less for the low dose protocol (4.4 ± 2.4 mSv versus 7.5 ± 5.5 mSv, p < 0.05). Our initial results suggest low dose CT images reconstructed with ASIR may have lower measured noise, similar image quality, yet significantly less radiation dose compared with higher dose images reconstructed with FBP. (orig.)

  7. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    International Nuclear Information System (INIS)

    Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-01-01

    Purpose: The authors develop a practical, iterative algorithm for image reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p-variation (TpV), a function that reduces to the total variation when p=1.0 or to the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, by finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.
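    The TpV function and the positivity constraint enforced by projection onto convex sets can be sketched directly from their definitions (a minimal illustration, not the authors' full reconstruction algorithm; the small epsilon smoothing term is an added assumption for numerical stability):

```python
import numpy as np

def total_p_variation(img, p=1.0, eps=1e-8):
    """Image total p-variation: sum over pixels of the gradient magnitude
    raised to the power p (total variation at p=1, roughness at p=2).
    The eps term is an added smoothing assumption."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return float(np.sum((gx**2 + gy**2 + eps) ** (p / 2.0)))

def project_positive(img):
    """Projection onto the convex set of non-negative images (POCS step)."""
    return np.clip(img, 0.0, None)

rng = np.random.default_rng(5)
flat = np.zeros((32, 32))
noisy = rng.normal(0.0, 1.0, (32, 32))
tpv_flat, tpv_noisy = total_p_variation(flat), total_p_variation(noisy)
```

A flat image has near-zero TpV while a noisy one does not, which is why minimizing TpV under a data-tolerance constraint favors regular volumes.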

  8. The Multidimensional Integrated Intelligent Imaging project (MI-3)

    International Nuclear Information System (INIS)

    Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P.M.; Faruqi, W.; French, M.; Gow, J.

    2009-01-01

    MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC), designed for in-pixel intelligence; FPN, designed to develop novel techniques for reducing fixed pattern noise; HDR, designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS, with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS), a novel, stitched LAS; and eLeNA, which develops a range of low-noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.

  9. The Multidimensional Integrated Intelligent Imaging project (MI-3)

    Energy Technology Data Exchange (ETDEWEB)

    Allinson, N.; Anaxagoras, T. [Vision and Information Engineering, University of Sheffield (United Kingdom); Aveyard, J. [Laboratory for Environmental Gene Regulation, University of Liverpool (United Kingdom); Arvanitis, C. [Radiation Physics, University College, London (United Kingdom); Bates, R.; Blue, A. [Experimental Particle Physics, University of Glasgow (United Kingdom); Bohndiek, S. [Radiation Physics, University College, London (United Kingdom); Cabello, J. [Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford (United Kingdom); Chen, L. [Electron Optics, Applied Electromagnetics and Electron Optics, University of York (United Kingdom); Chen, S. [MRC Laboratory for Molecular Biology, Cambridge (United Kingdom); Clark, A. [STFC Rutherford Appleton Laboratories (United Kingdom); Clayton, C. [Vision and Information Engineering, University of Sheffield (United Kingdom); Cook, E. [Radiation Physics, University College, London (United Kingdom); Cossins, A. [Laboratory for Environmental Gene Regulation, University of Liverpool (United Kingdom); Crooks, J. [STFC Rutherford Appleton Laboratories (United Kingdom); El-Gomati, M. [Electron Optics, Applied Electromagnetics and Electron Optics, University of York (United Kingdom); Evans, P.M. [Institute of Cancer Research, Sutton, Surrey SM2 5PT (United Kingdom)], E-mail: phil.evans@icr.ac.uk; Faruqi, W. [MRC Laboratory for Molecular Biology, Cambridge (United Kingdom); French, M. [STFC Rutherford Appleton Laboratories (United Kingdom); Gow, J. [Imaging for Space and Terrestrial Applications, Brunel University, London (United Kingdom)] (and others)

    2009-06-01

    MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC), designed for in-pixel intelligence; FPN, designed to develop novel techniques for reducing fixed pattern noise; HDR, designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS, with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS), a novel, stitched LAS; and eLeNA, which develops a range of low-noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.

  10. A comparative study between matched and mis-matched projection/back projection pairs used with ASIRT reconstruction method

    International Nuclear Information System (INIS)

    Guedouar, R.; Zarrad, B.

    2010-01-01

    For algebraic reconstruction techniques, both forward and back projection operators are needed. The ability to perform accurate reconstruction relies fundamentally on the forward projection and back projection methods, which are usually the transpose of each other. Even though mis-matched pairs may introduce additional errors during the iterative process, the usefulness of mis-matched projector/back-projector pairs in image reconstruction has been proved. This work investigates the performance of matched and mis-matched reconstruction pairs using popular forward projectors and their transposes in reconstruction tasks with additive simultaneous iterative reconstruction techniques (ASIRT) in a parallel-beam approach. Simulated noiseless phantoms are used to compare the performance of the investigated pairs in terms of the root mean squared error (RMSE), calculated between reconstructed slices and the reference in different regions. Results show that mis-matched projection/back projection pairs can yield more accurate reconstructed images than matched ones. The forward projection operator's performance seems independent of the choice of the back projection operator, and vice versa.
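    The additive SIRT-style update at the heart of such methods can be written as x ← x + λ B(b − A x), where A is the forward projector and B the backprojector; B = Aᵀ gives the matched pair, and any other B a mis-matched one. A toy dense-matrix sketch (a random consistent system standing in for a real parallel-beam geometry):

```python
import numpy as np

def asirt(A, B, b, n_iter, lam):
    """Additive SIRT-style iteration x <- x + lam * B (b - A x).
    B = A.T is the matched pair; any other B is a mis-matched one."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + lam * (B @ (b - A @ x))
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))        # stand-in forward projector
x_true = rng.random(20)
b = A @ x_true                           # noiseless "projection data"
lam = 1.0 / np.linalg.norm(A, 2) ** 2    # stable step size
x_rec = asirt(A, A.T, b, n_iter=5000, lam=lam)
rmse = float(np.sqrt(np.mean((x_rec - x_true) ** 2)))
```

With the matched pair the iteration is gradient descent on the least-squares residual and converges to the reference; swapping in a different B changes the fixed point, which is what the comparison above quantifies.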

  11. A comparative study between matched and mis-matched projection/back projection pairs used with ASIRT reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Guedouar, R., E-mail: raja_guedouar@yahoo.f [Higher School of Health Sciences and Techniques of Monastir, Av. Avicenne, 5060 Monastir, B.P. 128 (Tunisia); Zarrad, B., E-mail: boubakerzarrad@yahoo.f [Higher School of Health Sciences and Techniques of Monastir, Av. Avicenne, 5060 Monastir, B.P. 128 (Tunisia)

    2010-07-21

    For algebraic reconstruction techniques, both forward and back projection operators are needed. The ability to perform accurate reconstruction relies fundamentally on the forward projection and back projection methods, which are usually the transpose of each other. Even though mis-matched pairs may introduce additional errors during the iterative process, the usefulness of mis-matched projector/back-projector pairs in image reconstruction has been proved. This work investigates the performance of matched and mis-matched reconstruction pairs using popular forward projectors and their transposes in reconstruction tasks with additive simultaneous iterative reconstruction techniques (ASIRT) in a parallel-beam approach. Simulated noiseless phantoms are used to compare the performance of the investigated pairs in terms of the root mean squared error (RMSE), calculated between reconstructed slices and the reference in different regions. Results show that mis-matched projection/back projection pairs can yield more accurate reconstructed images than matched ones. The forward projection operator's performance seems independent of the choice of the back projection operator, and vice versa.

  12. Optical design of ultrashort throw liquid crystal on silicon projection system

    Science.gov (United States)

    Huang, Jiun-Woei

    2017-05-01

    An ultrashort-throw liquid crystal on silicon (LCoS) projector for home cinema, virtual reality, and automotive head-up displays has been designed and fabricated. To achieve the best performance and highest-quality image, this study aimed to design wide-angle projection optics and optimize the illumination for the LCoS panel. Based on a telecentric-lens projection system and optimized Koehler illumination, the optical parameters were calculated. The projector's optical system consisted of a conic aspheric mirror and imaging optics using either a symmetric double Gauss or a large-angle eyepiece to achieve a full projection angle larger than 155 deg. By applying Koehler illumination, image resolution was enhanced and the modulation transfer function of the image at high spatial frequencies was increased to form a high-quality illuminated image. Partial coherence analysis verified that the design was capable of 2.5 lp/mm within a 2 m × 1.5 m projected image. The throw ratio was less than 0.25 in HD format.
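    Throw ratio is the projection distance divided by the projected image width, so the figure quoted above implies the projector can sit very close to the screen (simple arithmetic on the stated values, not design data from the paper):

```python
def throw_ratio(throw_distance_m, image_width_m):
    """Throw ratio: projection distance divided by projected image width."""
    return throw_distance_m / image_width_m

# A throw ratio under 0.25 with a 2 m-wide image puts the projector
# less than 0.5 m from the screen.
max_distance_m = 0.25 * 2.0
```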

  13. From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language.

    Science.gov (United States)

    de Jong, Wibe A; Walker, Andrew M; Hanwell, Marcus D

    2013-05-24

    Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper an end-to-end use of semantically rich data in computational chemistry is demonstrated utilizing the Chemical Markup Language (CML) framework. Semantically rich data is generated by the NWChem computational chemistry software with the FoX library and utilized by the Avogadro molecular editor for analysis and visualization. The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files and molecular orbitals used by the computational chemistry software. Draft dictionary entries and a format for molecular orbitals within CML CompChem were developed. The Avogadro application was extended to read in CML data, and display molecular geometry and electronic structure in the GUI allowing for an end-to-end solution where Avogadro can create input structures, generate input files, NWChem can run the calculation and Avogadro can then read in and analyse the CML output produced. The developments outlined in this paper will be made available in future releases of NWChem, FoX, and Avogadro. The production of CML compliant XML files for computational chemistry software such as NWChem can be accomplished relatively easily using the FoX library. The CML data can be read in by a newly developed reader in Avogadro and analysed or visualized in various ways. A community-based effort is needed to further develop the CML CompChem convention and dictionary. This will enable the long-term goal of allowing a researcher to run simple "Google-style" searches of chemistry and physics and have the results of computational calculations returned in a comprehensible form alongside articles from the published literature.
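    Consuming CML-compliant XML of the kind produced here is straightforward with a namespace-aware parser. A minimal sketch using Python's standard library (element names follow the CML schema, but the water-like molecule fragment is invented for illustration; real NWChem output is far richer):

```python
import xml.etree.ElementTree as ET

CML = """<molecule xmlns="http://www.xml-cml.org/schema" id="m1">
  <atomArray>
    <atom id="a1" elementType="O" x3="0.0" y3="0.0" z3="0.0"/>
    <atom id="a2" elementType="H" x3="0.96" y3="0.0" z3="0.0"/>
    <atom id="a3" elementType="H" x3="-0.24" y3="0.93" z3="0.0"/>
  </atomArray>
</molecule>"""

NS = {"cml": "http://www.xml-cml.org/schema"}  # CML schema namespace
root = ET.fromstring(CML)
atoms = [(a.get("id"), a.get("elementType"))
         for a in root.findall("./cml:atomArray/cml:atom", NS)]
```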

  14. Calibration, Projection, and Final Image Products of MESSENGER's Mercury Dual Imaging System

    Science.gov (United States)

    Denevi, Brett W.; Chabot, Nancy L.; Murchie, Scott L.; Becker, Kris J.; Blewett, David T.; Domingue, Deborah L.; Ernst, Carolyn M.; Hash, Christopher D.; Hawkins, S. Edward; Keller, Mary R.; Laslo, Nori R.; Nair, Hari; Robinson, Mark S.; Seelos, Frank P.; Stephens, Grant K.; Turner, F. Scott; Solomon, Sean C.

    2018-02-01

    We present an overview of the operations, calibration, geodetic control, photometric standardization, and processing of images from the Mercury Dual Imaging System (MDIS) acquired during the orbital phase of the MESSENGER spacecraft's mission at Mercury (18 March 2011-30 April 2015). We also provide a summary of all of the MDIS products that are available in NASA's Planetary Data System (PDS). Updates to the radiometric calibration included slight modification of the frame-transfer smear correction, updates to the flat fields of some wide-angle camera (WAC) filters, a new model for the temperature dependence of narrow-angle camera (NAC) and WAC sensitivity, and an empirical correction for temporal changes in WAC responsivity. Further, efforts to characterize scattered light in the WAC system are described, along with a mosaic-dependent correction for scattered light that was derived for two regional mosaics. Updates to the geometric calibration focused on the focal lengths and distortions of the NAC and all WAC filters, NAC-WAC alignment, and calibration of the MDIS pivot angle and base. Additionally, two control networks were derived so that the majority of MDIS images can be co-registered with sub-pixel accuracy; the larger of the two control networks was also used to create a global digital elevation model. Finally, we describe the image processing and photometric standardization parameters used in the creation of the MDIS advanced products in the PDS, which include seven large-scale mosaics, numerous targeted local mosaics, and a set of digital elevation models ranging in scale from local to global.
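    The radiometric steps described (dark/smear subtraction, flat-fielding, responsivity normalization) follow the generic pattern below; this is a schematic sketch, not the actual MDIS calibration pipeline or its coefficients:

```python
import numpy as np

def radiometric_correct(raw, dark, flat, exposure_s):
    """Schematic radiometric pipeline: subtract the dark frame, divide by
    the normalized flat field, then scale by exposure time to get DN/s."""
    flat = np.where(flat == 0.0, 1.0, flat)  # guard against dead pixels
    return (raw - dark) / flat / exposure_s

# Round trip: a uniform scene producing 3 DN/s, imaged for 10 s.
dark = np.full((4, 4), 100.0)
flat = np.full((4, 4), 2.0)
raw = dark + flat * 3.0 * 10.0
rate = radiometric_correct(raw, dark, flat, 10.0)
```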

  15. Enhancement of Stereo Imagery by Artificial Texture Projection Generated Using a LIDAR

    Science.gov (United States)

    Veitch-Michaelis, Joshua; Muller, Jan-Peter; Walton, David; Storey, Jonathan; Foster, Michael; Crutchley, Benjamin

    2016-06-01

    Passive stereo imaging is capable of producing dense 3D data, but image matching algorithms generally perform poorly on images with large regions of homogeneous texture due to ambiguous match costs. Stereo systems can be augmented with an additional light source that can project some form of unique texture onto surfaces in the scene. Methods include structured light, laser projection through diffractive optical elements, data projectors and laser speckle. Pattern projection using lasers has the advantage of producing images with a high signal-to-noise ratio. We have investigated the use of a scanning visible-beam LIDAR to simultaneously provide enhanced texture within the scene and to provide additional opportunities for data fusion in unmatched regions. The use of a LIDAR rather than a laser alone allows us to generate highly accurate ground truth data sets by scanning the scene at high resolution. This is necessary for evaluating different pattern projection schemes. Results from LIDAR-generated random dots are presented and compared to other texture projection techniques. Finally, we investigate the use of image texture analysis to intelligently project texture where it is required while exploiting the texture available in the ambient light image.
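    The ambiguity that projected texture resolves can be seen directly in a sum-of-squared-differences (SSD) matching cost: on a textureless row every candidate disparity costs the same, while added texture produces a unique minimum (a 1-D toy illustration, not the matcher used in the paper):

```python
import numpy as np

def ssd_costs(left_row, right_row, x, window, max_disp):
    """SSD matching cost of the left patch at column x against each
    candidate disparity d in the right image row."""
    ref = left_row[x:x + window]
    return np.array([np.sum((ref - right_row[x - d:x - d + window]) ** 2)
                     for d in range(max_disp)])

rng = np.random.default_rng(3)
flat = np.full(64, 100.0)                    # textureless: every cost ties
textured = flat + rng.normal(0.0, 20.0, 64)  # projected texture breaks ties
c_flat = ssd_costs(flat, flat, 30, 7, 10)
c_tex = ssd_costs(textured, textured, 30, 7, 10)
```

On the flat row all ten candidate disparities cost zero, so the match is undecidable; with texture only the true disparity (here d = 0) reaches zero cost.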

  16. ENHANCEMENT OF STEREO IMAGERY BY ARTIFICIAL TEXTURE PROJECTION GENERATED USING A LIDAR

    Directory of Open Access Journals (Sweden)

    J. Veitch-Michaelis

    2016-06-01

    Full Text Available Passive stereo imaging is capable of producing dense 3D data, but image matching algorithms generally perform poorly on images with large regions of homogeneous texture due to ambiguous match costs. Stereo systems can be augmented with an additional light source that can project some form of unique texture onto surfaces in the scene. Methods include structured light, laser projection through diffractive optical elements, data projectors and laser speckle. Pattern projection using lasers has the advantage of producing images with a high signal-to-noise ratio. We have investigated the use of a scanning visible-beam LIDAR to simultaneously provide enhanced texture within the scene and to provide additional opportunities for data fusion in unmatched regions. The use of a LIDAR rather than a laser alone allows us to generate highly accurate ground truth data sets by scanning the scene at high resolution. This is necessary for evaluating different pattern projection schemes. Results from LIDAR-generated random dots are presented and compared to other texture projection techniques. Finally, we investigate the use of image texture analysis to intelligently project texture where it is required while exploiting the texture available in the ambient light image.

  17. HELICoiD project: a new use of hyperspectral imaging for brain cancer detection in real-time during neurosurgical operations

    Science.gov (United States)

    Fabelo, Himar; Ortega, Samuel; Kabwama, Silvester; Callico, Gustavo M.; Bulters, Diederik; Szolna, Adam; Pineiro, Juan F.; Sarmiento, Roberto

    2016-05-01

    Hyperspectral images provide large amounts of information about the surface of the scene captured by the sensor. Using this information and a set of complex classification algorithms, it is possible to determine which material or substance is located in each pixel. The HELICoiD (HypErspectraL Imaging Cancer Detection) project is a European FET project whose goal is to develop a demonstrator capable of discriminating, with high precision, between normal and tumour tissues, operating in real time during neurosurgical operations. This demonstrator could help neurosurgeons in the process of brain tumour resection, avoiding the excessive extraction of normal tissue and the unintentional leaving of small remnants of tumour. Such precise delimitation of the tumour boundaries will improve the results of the surgery. The HELICoiD demonstrator comprises two hyperspectral cameras obtained from Headwall: the first covers the spectral range from 400 to 1000 nm (visible and near infrared) and the second the range from 900 to 1700 nm (near infrared). The demonstrator also includes an illumination system that covers the spectral range from 400 nm to 2200 nm. A data processing unit manages all the parts of the demonstrator, and a high-performance platform accelerates the hyperspectral image classification process. Each of these elements is installed in a customized structure specially designed for surgical environments. Preliminary results of the classification algorithms show high accuracy (over 95%) in discriminating between normal and tumour tissues.
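    Per-pixel hyperspectral classification of the kind described can be illustrated with a nearest-class-mean rule on synthetic spectra (the two 128-band "normal"/"tumour" spectra below are invented for the sketch; the project's real classifiers are considerably more complex):

```python
import numpy as np

def nearest_mean_classify(pixels, class_means):
    """Label each pixel spectrum with its nearest class-mean spectrum
    (Euclidean distance across all spectral bands)."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(4)
bands = 128
means = np.stack([np.linspace(0.2, 0.8, bands),   # invented "normal" spectrum
                  np.linspace(0.8, 0.2, bands)])  # invented "tumour" spectrum
labels_true = rng.integers(0, 2, 500)
pixels = means[labels_true] + rng.normal(0.0, 0.05, (500, bands))
labels = nearest_mean_classify(pixels, means)
accuracy = float(np.mean(labels == labels_true))
```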

  18. The ACTwatch project: methods to describe anti-malarial markets in seven countries.

    Science.gov (United States)

    Shewchuk, Tanya; O'Connell, Kathryn A; Goodman, Catherine; Hanson, Kara; Chapman, Steven; Chavasse, Desmond

    2011-10-31

    Policy makers, governments and donors are faced with an information gap when considering ways to improve access to artemisinin-based combination therapy (ACT) and malaria diagnostics including rapid diagnostic tests (RDTs). To help address some of these gaps, a five-year multi-country research project called ACTwatch was launched. The project is designed to provide a comprehensive picture of the anti-malarial market to inform national and international anti-malarial drug policy decision-making. The project is being conducted in seven malaria-endemic countries: Benin, Cambodia, the Democratic Republic of Congo, Madagascar, Nigeria, Uganda and Zambia from 2008 to 2012. ACTwatch measures which anti-malarials are available, where they are available, at what price, and who they are used by. These indicators are measured over time and across countries through three study components: outlet surveys, supply chain studies and household surveys. Nationally representative outlet surveys examine the market share of different anti-malarials passing through public facilities and private retail outlets. Supply chain research provides a picture of the supply chain serving drug outlets, and measures mark-ups at each supply chain level. On the demand side, nationally representative household surveys capture treatment seeking patterns and use of anti-malarial drugs, as well as respondent knowledge of anti-malarials. The research project provides findings on both the demand and supply side determinants of anti-malarial access. There are four key features of ACTwatch. First is the overlap of the three study components where nationally representative data are collected over similar periods, using a common sampling approach. A second feature is the number and diversity of countries that are studied which allows for cross-country comparisons. Another distinguishing feature is its ability to measure trends over time. Finally, the project aims to disseminate findings widely for decision

  19. The ACTwatch project: methods to describe anti-malarial markets in seven countries

    Directory of Open Access Journals (Sweden)

    Chapman Steven

    2011-10-01

    Background Policy makers, governments and donors are faced with an information gap when considering ways to improve access to artemisinin-based combination therapy (ACT) and malaria diagnostics including rapid diagnostic tests (RDTs). To help address some of these gaps, a five-year multi-country research project called ACTwatch was launched. The project is designed to provide a comprehensive picture of the anti-malarial market to inform national and international anti-malarial drug policy decision-making. Methods The project is being conducted in seven malaria-endemic countries: Benin, Cambodia, the Democratic Republic of Congo, Madagascar, Nigeria, Uganda and Zambia from 2008 to 2012. ACTwatch measures which anti-malarials are available, where they are available, at what price, and by whom they are used. These indicators are measured over time and across countries through three study components: outlet surveys, supply chain studies and household surveys. Nationally representative outlet surveys examine the market share of different anti-malarials passing through public facilities and private retail outlets. Supply chain research provides a picture of the supply chain serving drug outlets, and measures mark-ups at each supply chain level. On the demand side, nationally representative household surveys capture treatment-seeking patterns and use of anti-malarial drugs, as well as respondent knowledge of anti-malarials. Discussion The research project provides findings on both the demand- and supply-side determinants of anti-malarial access. There are four key features of ACTwatch. First is the overlap of the three study components, where nationally representative data are collected over similar periods using a common sampling approach. A second feature is the number and diversity of the countries studied, which allows for cross-country comparisons. Another distinguishing feature is its ability to measure trends over time. Finally, the project aims to disseminate findings widely for decision-making.

  20. The ACTwatch project: methods to describe anti-malarial markets in seven countries

    Science.gov (United States)

    2011-01-01

    Background Policy makers, governments and donors are faced with an information gap when considering ways to improve access to artemisinin-based combination therapy (ACT) and malaria diagnostics including rapid diagnostic tests (RDTs). To help address some of these gaps, a five-year multi-country research project called ACTwatch was launched. The project is designed to provide a comprehensive picture of the anti-malarial market to inform national and international anti-malarial drug policy decision-making. Methods The project is being conducted in seven malaria-endemic countries: Benin, Cambodia, the Democratic Republic of Congo, Madagascar, Nigeria, Uganda and Zambia from 2008 to 2012. ACTwatch measures which anti-malarials are available, where they are available, at what price, and by whom they are used. These indicators are measured over time and across countries through three study components: outlet surveys, supply chain studies and household surveys. Nationally representative outlet surveys examine the market share of different anti-malarials passing through public facilities and private retail outlets. Supply chain research provides a picture of the supply chain serving drug outlets, and measures mark-ups at each supply chain level. On the demand side, nationally representative household surveys capture treatment-seeking patterns and use of anti-malarial drugs, as well as respondent knowledge of anti-malarials. Discussion The research project provides findings on both the demand- and supply-side determinants of anti-malarial access. There are four key features of ACTwatch. First is the overlap of the three study components, where nationally representative data are collected over similar periods using a common sampling approach. A second feature is the number and diversity of the countries studied, which allows for cross-country comparisons. Another distinguishing feature is its ability to measure trends over time. Finally, the project aims to disseminate findings widely for decision-making.